In collaboration with Samsung, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) built the first neural network from 2D materials only a few nanometers thick (or less). These materials, often consisting of a single sheet of atoms, enabled the machine vision processor to capture, store, and recognize more than 1,000 images.
Traditionally, these two-dimensional materials have been used for simple digital logic circuits; more complex applications such as artificial intelligence were, until recently, out of reach.
“This work highlights an unprecedented advance in the functional complexity of 2D electronics,” said Donhee Ham, the Gordon McKay Professor of Electrical Engineering and Applied Physics at SEAS and senior author of the paper. “We have performed both front-end optical image sensing and back-end image recognition in one 2D material platform.”
The framework, discussed further in a blog post on Harvard University’s website, allows the device to act both as an eye that views the image and as a brain that stores and recognizes it at a single glance.
To test it, the team exposed the machine vision processor to 1,000 images of handwritten digits, which it recognized with 94 per cent accuracy.
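The paper does not describe the network in software terms, but the benchmark is conceptually similar to a standard handwritten-digit classification task. Below is a minimal software sketch of that kind of evaluation, assuming Python with scikit-learn and its bundled 8×8 digits dataset; the dataset, network size, and training setup are illustrative choices, not details of the actual hardware:

```python
# Hypothetical software baseline for the same kind of task: classify
# ~1,000 handwritten digit images and report accuracy. The actual
# processor performs sensing and recognition in 2D-material hardware.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

digits = load_digits()  # 1,797 labeled 8x8 grayscale digit images

# Hold out roughly 1,000 images for testing, mirroring the article's setup.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target,
    test_size=1000 / len(digits.data), random_state=0,
)

# A small fully connected network; the real device's architecture is not
# specified in the article.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print(f"accuracy on {len(y_test)} held-out digits: "
      f"{accuracy_score(y_test, pred):.1%}")
```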
“Through capturing of optical images into electrical data like the eye and optic nerve, and subsequent recognition of this data like the brain via in-memory computing, our optoelectronic processor emulates the two core functions of human vision,” explained Henry Hinton, co-author of the paper and SEAS graduate student.
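In-memory computing of the kind Hinton describes typically means performing the network’s multiply-accumulate operations directly in the array that stores the weights, for example as currents summing along the columns of a crossbar. The following rough numpy sketch illustrates that idea; the array shape, conductance values, and voltage encoding are all illustrative assumptions, not figures from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights stored as device conductances in a crossbar-style array:
# each output column sums the currents contributed by its cells, so a
# single read performs a full matrix-vector multiply in place.
n_inputs, n_outputs = 64, 10  # illustrative sizes, not from the paper
G = rng.uniform(0.0, 1.0, (n_outputs, n_inputs))  # conductances (a.u.)

V = rng.uniform(0.0, 1.0, n_inputs)  # input voltages encoding one image
I = G @ V                            # column currents = weighted sums

print("output currents (one per class):", np.round(I, 2))
print("predicted class:", int(np.argmax(I)))
```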
The team sees the innovation as a basic building block for a wide range of applications, and plans future projects to produce more advanced, high-resolution imaging systems.