The image recognition technology that underlies today’s autonomous cars and aerial drones depends on artificial intelligence: the computers essentially teach themselves to recognize objects like a dog, a pedestrian crossing the street or a stopped car. The problem is that the computers running the artificial intelligence algorithms are currently too large and slow for future applications like handheld medical devices.
Now, researchers at Stanford University have devised a new type of artificially intelligent camera system that can classify images faster and more energy-efficiently, and that could one day be built small enough to be embedded in the devices themselves — something that is not possible with today's hardware. The work was published August 17 in Nature Scientific Reports.
“That autonomous car you just passed has a relatively huge, relatively slow, energy intensive computer in its trunk,” said Gordon Wetzstein, an assistant professor of electrical engineering at Stanford, who led the research. Future applications will need something much faster and smaller to process the stream of images, he said.
Wetzstein and Julie Chang, a graduate student and first author on the paper, took a step toward that technology by marrying two types of computers into one, creating a hybrid optical-electrical computer designed specifically for image analysis.
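The division of labor described above can be illustrated with a minimal sketch. Note this is not the authors' actual system: the kernel, classifier weights, and function names below are illustrative placeholders. The idea is that a fixed convolution — which in the real device would be performed passively by an optical element, at the speed of light and with essentially no electrical energy — pre-processes the image, so that only a small "electronic" readout remains to be computed:

```python
import numpy as np

def optical_convolution(image, kernel):
    """Simulate the 'optical' stage: a fixed 2-D convolution (valid
    padding). In the hybrid design this work would be done by a custom
    optical element rather than by a digital processor."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def electronic_classifier(features, weights, bias):
    """Simulate the lightweight 'electronic' stage: a small linear
    readout over the optically pre-processed features, returning the
    index of the highest-scoring class."""
    scores = features.flatten() @ weights + bias
    return int(np.argmax(scores))

# Toy usage: a 4x4 "image", a 2x2 averaging-style kernel, two classes.
image = np.arange(16.0).reshape(4, 4)
kernel = np.ones((2, 2))
features = optical_convolution(image, kernel)   # shape (3, 3)

rng = np.random.default_rng(0)
weights = rng.standard_normal((features.size, 2))
bias = np.zeros(2)
label = electronic_classifier(features, weights, bias)
```

The point of the split is energy and speed: the convolution, which dominates the arithmetic in image analysis, costs nothing once it is baked into optics, leaving only the small readout for conventional electronics.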
“Some future version of our system would be especially useful in rapid decision-making applications, like autonomous vehicles,” Wetzstein said.
Read the entire Stanford News Story by Andrew Myers HERE.
Watch Gordon Wetzstein’s presentation “Light Field Technology: The Future of VR and 3D Displays.”
Feature Image Credit: Nvidia