High Speed 3D Shape Digitization Using Projected Light Patterns

From The Theme
SENSING AND COMPUTING

WHAT IF
What if we could quickly determine and track the 3D shape of a moving or deforming object?

High Speed 3D Main Image

WHAT WE SET OUT TO DO
We set out to address one of the central problems in computer vision: the ability to digitize the shape of an object at high speed. Rapid digitization has applications in film and digital video, offering faster scanning, the acquisition of moving scenes, and the interleaving of shape acquisition with regular motion recording. Our proposed prototype system would perform stripe tracking and triangulation in real time, storing the resulting stream of range images for offline analysis.
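The triangulation step mentioned above recovers depth by intersecting each camera ray with the plane of light swept by a projected stripe. The sketch below is an illustrative ray-plane intersection, not the authors' actual implementation; the function name and the calibration inputs (a plane normal and a point on the plane, obtained from projector calibration) are assumptions.

```python
import numpy as np

def triangulate_stripe_point(cam_ray_dir, plane_normal, plane_point):
    """Intersect a camera ray (through the origin) with a projector stripe plane.

    cam_ray_dir  : direction of the ray through a pixel (camera center at origin)
    plane_normal : normal of the light plane swept by one projected stripe
    plane_point  : any known point on that plane (from projector calibration)
    Returns the 3D point where the ray meets the stripe plane, or None if
    the ray is parallel to the plane.
    """
    denom = np.dot(plane_normal, cam_ray_dir)
    if abs(denom) < 1e-9:
        return None  # ray never crosses the stripe plane
    t = np.dot(plane_normal, plane_point) / denom
    return t * cam_ray_dir

# Example: a ray looking straight down the z-axis hits the plane z = 2
# at the point (0, 0, 2).
p = triangulate_stripe_point(np.array([0.0, 0.0, 1.0]),
                             np.array([0.0, 0.0, 1.0]),
                             np.array([0.0, 0.0, 2.0]))
```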

WHAT WE FOUND
To acquire shape, we built a high-speed projector that casts time-coded stripe patterns into a scene, which are read back by a synchronized high-speed camera. An alignment algorithm lets us register the range points and merge successive scans at high speed, producing a full 3D model. Our system acquires the shape of an object at a rate of 240 Hz, capturing up to 6,000 range points per frame. This speed lets users scan moving objects, or feed the real-time range data into other systems such as trackers or light-field viewers.
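A time-coded stripe pattern works by projecting a short sequence of black-and-white patterns so that each pixel's on/off history over time identifies which stripe (and hence which projector plane) illuminated it. The actual system tracks stripe boundaries across frames; as a simpler illustration of time coding, the sketch below decodes a classic Gray-code sequence, where consecutive stripes differ by one bit so a single misread pattern shifts the index by at most one stripe. The function name is an assumption.

```python
def decode_gray_sequence(bits):
    """Decode a per-pixel sequence of on/off observations from time-coded
    Gray-code stripe patterns into a stripe index.

    bits : list of 0/1 values, one per projected pattern, most significant
           bit first.
    Returns the integer stripe index, which identifies the projector plane
    used for triangulation.
    """
    value = bits[0]          # running XOR prefix converts Gray to binary
    index = value
    for b in bits[1:]:
        value ^= b
        index = (index << 1) | value
    return index
```

For example, the 3-bit Gray codeword `[1, 1, 1]` decodes to stripe index 5, since the Gray code of 5 (binary 101) is 111.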

LEARN MORE
Rusinkiewicz, S., Hall-Holt, O., and Levoy, M. Real-Time 3D Model Acquisition. Proc. SIGGRAPH 2002, ACM Transactions on Graphics 21(3), 2002.

PEOPLE BEHIND THE PROJECT
Marc Levoy is the VMware Founders Professor of Computer Science and Electrical Engineering, Emeritus. He received a Bachelor’s and Master’s in Architecture from Cornell University in 1976 and 1978, and a PhD in Computer Science from the University of North Carolina at Chapel Hill in 1989. At Stanford he taught computer graphics, the science of art, and digital photography. Outside of academia, Levoy co-designed the Google book scanner, launched Google’s Street View project, and currently leads a team in Google Research that has worked on Project Glass, the Nexus HDR+ mode, and the Jump light field camera for Google Cardboard.

Mark Bolas is a researcher exploring perception, agency, and intelligence. His work focuses on creating virtual environments and transducers that fully engage one’s perception and cognition and create a visceral memory of the experience. At the time of this proposal, he was a Lecturer in Stanford’s Product Design program and co-founder of Fakespace Labs in Mountain View, California, building instrumentation for research labs to explore virtual reality. As of 2017, he is the Associate Director for mixed reality research and development at the Institute for Creative Technologies at USC, and an Associate Professor of Interactive Media in the Interactive Media Division of the USC School of Cinematic Arts.

Ian McDowall is CEO and co-founder of Fakespace Labs in Mountain View, California, building instrumentation for research labs to explore virtual reality. He is also a Fellow at Intuitive Surgical, working on intelligent robotic technologies.