A new camera that builds on technology first described by Stanford researchers more than 20 years ago could generate the kind of information-rich images that robots need to navigate the world.
As the technology stands now, robots must move around and gather multiple perspectives to understand certain aspects of their environment, such as the movement and material composition of objects.
This camera could allow them to gather much the same information in a single image. Researchers also envision the technology being used in autonomous vehicles and in augmented and virtual reality.
“It’s at the core of our field of computational photography,” said Gordon Wetzstein, assistant professor of electrical engineering. “It’s a convergence of algorithms and optics that’s facilitating unprecedented imaging systems.”
Read the entire Stanford News article by Taylor Kubota.
The difference between looking through a normal camera and the new design is like the difference between looking through a peephole and a window, the scientists said.
The team includes Donald Dansereau, a postdoctoral fellow in electrical engineering, along with colleagues from the University of California, San Diego.