#mediaX2015 Conference 8:30am-4:30pm
“Writing the Code for Personal Relevance”
Personal relevance is the currency of the experience economy. Context and intention drive digital exchanges in education, commerce and entertainment.
Join us on April 16th from 8:00am-4:30pm in the Mackenzie Room (3rd Fl., Jen-Hsun Huang Engineering Center) as we dive deeper into…
*Smart communications promise personalized experiences, yet the personalization of communications is compromised by ignorance of context and changes in preferences over time.
*Smart health promises individualized diagnoses and treatments, yet database silos and regulatory dynamics pose challenges that are beyond the domain of any one service, technology or organization.
*Smart entertainment offers to personalize the game experience, make it immersive, make it social. All this beckons, while the boundaries between work and play are more and more ambiguous.
*Smart education promises open learning to the world, and teaching tools that adapt to the user’s interests and learning abilities. At the same time, privacy, politics, funding and access to smart technologies handicap the realization of this vision.
*Smart office technologies promise increased productivity, enhanced working experiences and more leisure time. Yet many of the automated systems replacing skilled workers lack judgment and wisdom, multitasking mobile workers are stressed, and forecasts for global employment suggest that unemployment will increase in coming decades.
Technology promises to improve the human experience by personalizing digital experiences.
Writing the code for personal relevance is the frontier.
Join us for an exploration of current research providing key insights into this promising future.
Allan L. Reiss, M.D. Decoding Personal Relevance with Neuroscience. In the first part of this talk, I will describe our research into the communicative and social complexities of human social interaction, with the goal of providing a more complete and nuanced background for understanding human-human interaction. In the second part of the talk, I will briefly describe technology-enhanced advances in personalized brain medicine that hold promise for revolutionizing our approach to common (and often devastating) brain disorders such as autism, learning disabilities and dementia.
Larry Leifer, Hu-mimesis: Design Requirements for Personal Relevance. The engineering design thinking paradigm at Stanford University goes back 50 years. It continues to evolve. Today it is on the cusp of becoming the “next new fundamental,” the other side of the equation to physics and math. ME310 is a master class for the development of new product development (NPD) talent. Our principal strategy is to train graduate students in our “instrumented flight simulator” (the ME310-Global-Loft). After sharing several NPD case examples, I will share key findings from the design thinking research program at CDR.
Chris Chafe, The Sound Stage of the Mind: Imagined Sounds and Inner Voices. Mentally imagining voices and sounds in the "mind's ear" is as much a part of experience as visualizing in the "mind's eye." The vividness of sounds in the imagination varies between individuals, but nearly everyone reports spontaneous sound and the ability to conjure sounds intentionally. Imagining vocals and other sound plays a role in planning even at very short time scales, and this discussion is motivated by investigations of musical performance. Reading ahead or thinking ahead in sound can be a conscious part of playing or singing. When the next part of a passage is in the mind quasi-acoustically, is there something to be said about the presentation itself?
John Mitchell, Personal Learning at Scale. Over the last three years, we at Stanford have experimented widely with online learning activities, at scale and in campus classes. While the early Stanford MOOCs of 2011 consisted of recorded lecture segments, relatively simple automated quizzes and discussion, inventive instructors have explored a range of new forms, including distributed learning activities around the globe and digital laboratories in humanities classes. In this short overview discussion, we will look at some examples, observe selected trends, and ponder a few questions about the future.
James Landay, From On-Body to Out-Of-Body User Experience. There are many urgent problems facing the planet: a degrading environment, a healthcare system in crisis, and educational systems that are failing to produce creative, innovative thinkers to solve tomorrow’s problems. I will illustrate how we are addressing these grand challenges in our research by building systems that balance innovative on-body user interfaces with novel activity inference technology. These systems have helped individuals stay fit, led families to be more sustainable in their everyday lives, and supported learners in acquiring second languages.
Monica Lam, Personal Sharing Reinvented. Sharing is broken today. To share today, we have to get our friends to join some social network, share according to the rules of that network, while giving up ownership of our data. Why can't we just share anything we want with any group of friends, directly from our phones to theirs, without worrying about creepy ads? Omlet is an open platform, being rolled out on mobile devices, that makes personal sharing simple, pure, and easy. Released software prototypes and further information can be found on http://mobisocial.stanford.edu.
Byron Reeves, Constructing a Personal Media Day: Switching Between Work and Play on a Laptop Computer. Familiar technology — laptops, tablets and smartphones — increasingly consolidate very different media experiences on single devices. With the availability of a broad range of content on one device, and pause buttons that allow easy switching between work and play, we are now able to experience radically different material without sensing that we’ve missed a thing. Our research group has been tracking how people use personal laptop computers by examining moment-by-moment changes in activity over the course of multiple days.
Gordon Wetzstein, Personalizing Light Fields. What if your mobile phone’s display could correct your vision deficiency instead of your glasses? Light field display technology can assess and correct the user’s vision. In this talk, we discuss a wide range of unconventional applications facilitated by light field technology, a novel, inexpensive computational display technology. Light field displays are expected to be “the future of 3D displays,” although many believe that the recent hype about stereoscopic 3D displays is over.
Joyce Westerink, Psychophysiology for Personalized Mood Adaptation. Psychophysiology studies how physiological signals like heartbeats or skin conductance can reflect your state of mind. With wearable technologies, these signals can be used to continuously measure how people feel in their everyday lives. I will discuss how monitoring physiology can make people more aware of what exactly is affecting their personal state of mind, for instance what causes stress. Taking personalization one step further, we can also use psychophysiological signals in a closed loop to continuously adapt the user’s mood, e.g. through optimized music selection.
Lee Zlotoff, Once Upon A Time...Of You: The Transformative Potential of Personalized Fiction. In this presentation, I'll attempt to demonstrate that humans are a narrative species and that, underlying all our choices and decisions, there is, in fact, a story. What’s more, might the stories most likely to affect us be fiction rather than non-fiction? Assuming that fictional stories are the most likely to inspire change, what then are the opportunities and potential of using current advances in technology to produce personalized fiction? How would such a thing work? How might it affect behavior, and could it, in fact, become the basis for a new approach to personal or cultural transformation?