From The Theme
SENSING AND COMPUTING
What if we could use assistive technology to significantly increase the mobility and independence of those who are blind or visually impaired?
WHAT WE SET OUT TO DO
We set out to develop, build, and test a portable system that facilitates navigation by blind people. Working like SONAR in reverse, Blind Navigator (BN) used video to sense the environment and then presented an “acoustic image” to the human operator. The proposed acoustic image was intended to be a real-time, 3-D representation of the user’s environment, accompanied by meaningful sound effects and/or speech as acoustic cues.
WHAT WE FOUND
We created a lab to begin development and testing of this system, and built a prototype that identifies red and green traffic lights and audibly alerts the user to their presence and direction. When a traffic light is encountered, the system says, for example, “Red traffic light, left 20 degrees,” with the spatial audio appearing to emanate from the direction of the actual traffic light. The system consists of two video cameras, audio headphones, and a laptop computer running Windows 2000, DirectX, OpenCV, and our application software written in C++.
PEOPLE BEHIND THE PROJECT
Larry Leifer is Professor of Mechanical Engineering and Founding Director of the Center for Design Research at Stanford University. Dr. Leifer’s engineering design thinking research is focused on instrumenting design teams to understand, support, and improve design practice and theory. Specific issues include: design-team research methodology, global team dynamics, innovation leadership, interaction design, design-for-wellbeing, and adaptive mechatronic systems. Dr. Leifer received his BS in Engineering Science, his MS in Product Design, and his PhD in Biomedical Engineering from Stanford University.
David Grossman is a senior scientist/engineer with extensive management, technical, and startup experience. His background includes 3-D modeling, machine vision, and robotics. A Fellow of the IEEE for contributions to robotics, he managed research groups of up to 120 people at IBM, designing and developing software ranging from robotics and machine vision to natural language processing and other areas of AI. He subsequently co-founded LiveCapital, where he served in various engineering leadership capacities. He holds a number of patents and has published over 60 papers. He has worked in Physics at Princeton, in Computer Science at USC, and in Computer Science and Electrical Engineering at Stanford. He has BA, MA, and PhD degrees in physics from Harvard University.