mediaX Thought Leader Helps Design Auto-focus Lenses That Focus on What You See
Gordon Wetzstein uses eye-tracking technology to automatically control lenses that are designed to restore proper vision in people who would ordinarily need progressive lenses.
#mediaX2019 Conference Presentations Are Available
Relive the April 25, 2019 “Digital Communities and the Augmented Human Experience” conference, where Stanford thought leaders and industry experts delved into what builds community in 2019 and beyond.
Harnessing the Psychological Power of Virtual Reality to Enhance Leadership in High Diversity Teams in STEM
From the theme Potential, Performance and Productivity. What if we could use VR to enhance leadership training and capacity for a diverse workforce? We set out to explore whether professional success in a virtual environment can have “spill over” effects in the real world, and to measure […]
Haptic Tether for Human Robot Communication
From the theme Smart Office Workflows. What if we could help mobile robots navigate crowded environments using haptic communication? We set out to explore the use of haptic communication to improve human-mobile robot interaction, especially in crowded environments such as offices and shopping malls. We developed and […]
Making Noise Intentional: Capturing and Designing Robotic Sonic Expressions
From the theme Smart Office Workflows. What if we could better understand how robotic sound influences human perceptions and interactions? We set out to develop design heuristics for the sounds that are a byproduct of interactive objects in the technologically advanced workplace of the future. Specifically, we […]
DIVER (Digital Interactive Video Exploration and Reflection) ROMP (Research on Online Media Personalization)
From the theme Online Media Content. What if we could develop and study a mechanism for managing media content that preserves media companies’ digital rights while providing consumers with new video expression and sharing functions? We set out to conduct research that would simulate the provision of […]
Interaction Using Situated Spatial Gestures
From the theme Knowledge Worker Productivity. What if we could improve interfaces for context, comprehension, and human-machine performance, in order to develop machines that can sense and respond to people’s movements and behaviors? We set out to improve human-machine interaction by enabling machines to detect and respond […]
Eye Tracking as an Augmented Input
From the theme Mobile and Alternative Form Factor Devices. What if eye-tracker and eye-gaze information could be used to help your devices know where you are looking and what your attention is on? We set out to study the viability of using eye-gaze information to augment […]
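The core idea in this project, using gaze as an auxiliary input channel, can be illustrated with a small sketch: map eye-gaze samples to the on-screen element the user is dwelling on. This is not the project's actual system; the widget names, coordinates, dwell threshold, and sampling interval below are all illustrative assumptions.

```python
# Minimal sketch of gaze-augmented input: hit-test gaze samples against
# screen regions and report "attention" once gaze dwells long enough.
# All names, geometry, and timing constants are hypothetical.

DWELL_MS = 300    # dwell time required to count as attention (assumed)
SAMPLE_MS = 100   # assumed eye-tracker sampling interval

widgets = {
    "toolbar":  (0, 0, 800, 50),     # (x, y, width, height)
    "document": (0, 50, 600, 550),
    "sidebar":  (600, 50, 200, 550),
}

def hit_test(x, y):
    """Return the widget under a gaze point, or None."""
    for name, (wx, wy, w, h) in widgets.items():
        if wx <= x < wx + w and wy <= y < wy + h:
            return name
    return None

def attended_widget(gaze_samples):
    """Return the first widget whose dwell time exceeds DWELL_MS."""
    current, dwell = None, 0
    for x, y in gaze_samples:
        target = hit_test(x, y)
        if target == current:
            dwell += SAMPLE_MS
        else:
            current, dwell = target, SAMPLE_MS
        if current is not None and dwell >= DWELL_MS:
            return current
    return None
```

For example, five consecutive samples inside the document area (`attended_widget([(300, 200)] * 5)`) return `"document"`, while gaze that keeps jumping between regions returns `None`.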
High Speed 3D Shape Digitization Using Projected Light Patterns
From the theme Sensing and Computing. What if we could quickly determine and track the 3D shape of a moving or deforming object? We set out to address one of the central problems in computer vision: the ability to digitize an object at high speed. Rapid digitization […]