Human AI Collaboration: A Dynamic Frontier
Partnerships Between Human and Artificial Intelligence
November 1, 2017 8:30a-5:30p
Stanford University, Mackenzie Room (3rd Floor, Jen-Hsun Huang Engineering Center)

In a few decades, we’ve gone from machines that can execute a plan to machines that can plan. We’ve gone from computers as servants to computers as collaborators and team members.

In the best of circumstances, collaboration and teamwork present challenges. Even teams of highly competent people struggle to clarify goals, understand each other in conversations, define roles and responsibilities, and adapt when necessary. Determining what we want from collaboration is sometimes the hardest task.

Establishing confidence and trust in team members can make or break a project, and this is equally true of the relationships we have with our digital assistants and AI collaborators. The expanding capabilities and applications of intelligent machines call for a more sophisticated understanding of the relationships between people and AI.

AI began by understanding actions as humans performed them. Routine tasks with predictable decision points became computer-controlled through programs built by observing or questioning human experts and extracting their expertise. Programmers captured the “how” of human behavior in rules that machines could follow. Automated machines could do these tasks faster, with fewer errors, and without fatigue. Humans can explain this type of AI.
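To make the contrast concrete, here is a minimal, purely hypothetical sketch of that rule-based style in Python: a programmer writes down the “how” of an expert’s decision as explicit rules for the machine to follow (the ticket-triage scenario and its thresholds are invented for illustration, not drawn from any presenter’s work):

    # A hand-written rule capturing how a human expert might triage a support ticket.
    # The scenario, categories, and thresholds are hypothetical, for illustration only.
    def triage_ticket(mentions_outage: bool, affected_users: int) -> str:
        if mentions_outage and affected_users > 100:
            return "critical"
        if mentions_outage:
            return "high"
        return "normal"

    print(triage_ticket(mentions_outage=True, affected_users=500))  # prints "critical"

Because the rules are written by hand, a person can point to exactly why the machine made each decision.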

Enter machine learning – the capacity of computers to leverage massive amounts of data to act without specific human instruction. By looking at examples, extracting the patterns, turning them into rules, and applying those rules, machine learning now captures the “what” of human behavior to provide artificially intelligent answers for complex tasks – such as visual perception, speech recognition, translation, and even decision-making. AI now does things that humans find difficult to explain.
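Continuing the same hypothetical triage example, here is a minimal sketch of the machine-learning approach, assuming scikit-learn and toy, made-up data: instead of a programmer writing the rule, the model extracts it from labeled examples.

    # Learning a decision rule from labeled examples instead of hand-coding it.
    # Toy, made-up data: each example is [mentions_outage, affected_users] with a priority label.
    from sklearn.tree import DecisionTreeClassifier

    examples = [[1, 500], [1, 20], [0, 5], [0, 300], [1, 1000], [0, 50]]
    labels = ["critical", "high", "normal", "normal", "critical", "normal"]

    model = DecisionTreeClassifier().fit(examples, labels)
    print(model.predict([[1, 800]]))  # the rule was inferred from the examples, not written by hand

With larger models and far more data, the learned rules stop resembling anything a person wrote down, which is why this kind of AI is harder for humans to explain.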

Join us on November 1, 2017 as we go further into the creation and harnessing of artificial intelligence and ask three important questions:
1. On which tasks will machines with AI be able to outperform humans?
2. What do we know about people and technology that will help us establish confidence, certainty and collaboration in the new partnerships between human and artificial intelligence?
And, most importantly:
3. How can intelligent machines truly enhance the human experience?

CLICK HERE for the complete schedule.

Presenters

Neil Jacobstein

Neil Jacobstein is a mediaX Distinguished Visiting Scholar. He studies how the exponentially increasing velocity and complexity of technology requires augmented decision making in industry and government. He was a Senior Research Fellow in the Reuters Digital Vision Program at Stanford University. Jacobstein chairs the AI and Robotics Track at Singularity University, headquartered at NASA Research Park, and is a past President of Singularity University.

Peter Norvig

Peter Norvig is a Director of Research at Google Inc.; previously he directed Google's core search algorithms group. He is co-author of Artificial Intelligence: A Modern Approach, the leading textbook in the field, and co-teacher of an Artificial Intelligence class that signed up 160,000 students, helping to kick off the current round of massive open online classes. He is a fellow of the AAAI, the ACM, the California Academy of Sciences, and the American Academy of Arts & Sciences. Peter is also a mediaX Distinguished Visiting Scholar.

Poppy Crum

Poppy Crum is Chief Scientist at Dolby Laboratories. She also holds an appointment as Adjunct Professor at Stanford University in the Center for Computer Research in Music and Acoustics and the Program in Symbolic Systems. At Dolby, Poppy directs the growth of internal science. She is responsible for integrating neuroscience and sensory data science into algorithm design, technological development, and technology strategy. At Stanford, her work focuses on the impact and feedback potential of new technologies including gaming and immersive environments on neuroplasticity and learning.

John Willinsky

John Willinsky is Khosla Family Professor of Education at the Stanford Graduate School of Education, Professor (Part-Time) of Publishing Studies at Simon Fraser University (SFU), and Distinguished Scholar in Residence at the SFU Library. John started the Public Knowledge Project (PKP) in 1998 at the University of British Columbia in an effort to create greater public and global access to research and scholarship through the use of new publishing technologies. His current research interests include scholarly communication, the sociology of knowledge, and technology and literacy.

David Bailey

David Bailey is a leading figure in high-performance scientific computing and computational mathematics, with over 200 published papers and six books. In the field of mathematical finance, his best-known paper is “Pseudo-mathematics and financial charlatanism: The effects of backtest overfitting on out-of-sample performance.” Bailey has received awards from the IEEE Computer Society, the Association for Computing Machinery, the American Mathematical Society, and the Mathematical Association of America.

Pat Langley

Pat Langley serves as Director of the Institute for the Study of Learning and Expertise and as Honorary Professor of Computer Science at the University of Auckland. He has contributed to artificial intelligence and cognitive science for more than 35 years, having published over 200 papers and five books on these topics. Professor Langley developed some of the earliest computational approaches to scientific knowledge discovery, and he was an early champion of both experimental studies of machine learning and its application to real-world problems.

John Perry

John Perry is Henry Waldgrave Stuart Professor of Philosophy Emeritus at Stanford University and co-director of the CEC at CSLI. He has made significant contributions to philosophy in the fields of logic, philosophy of language, metaphysics, and philosophy of mind. He is known for his work on situation semantics (together with Jon Barwise), reflexivity, indexicality, personal identity, and self-knowledge. He has authored several books, including, most recently, Reference and Reflexivity.

Emma Brunskill

Emma Brunskill is an assistant professor of computer science at Stanford University. Her lab works to advance the foundations of reinforcement learning and artificial intelligence, with a particular focus on the technical challenges that arise when we construct interactive systems that amplify human potential. She has received multiple national awards for outstanding work as a young faculty member and her group has received numerous best paper nominations for their research.

Daniel Rubin

Daniel Rubin is Associate Professor of Biomedical Data Science, Radiology, Medicine (Biomedical Informatics Research), and Ophthalmology (courtesy) at Stanford University. His NIH-funded research program focuses on quantitative imaging and integrating imaging data with clinical and molecular data to discover imaging phenotypes that can predict the underlying biology, define disease subtypes, and personalize treatment. He has authored over 160 scientific publications in biomedical imaging informatics and radiology.

Amy Kruse

Amy Kruse is the Chief Scientific Officer of the Platypus Institute, an applied neuroscience research organization that translates cutting-edge neuroscience discoveries into practical tools and programs that enhance the human experience. Dr. Kruse’s primary focus at the Platypus Institute is a project entitled “Human 2.0” – a multi-faceted initiative that helps selected individuals and teams leverage neurotechnology to generate meaningful competitive advantages. Her ultimate goal with the Human 2.0 project is to create a vibrant, widespread neurotechnology industry that allows humanity to upgrade the human brain and, thereby, the human condition.

Ajay Chander

Ajay Chander leads R&D teams in imagining and building new human-centric technologies and products. His work has spanned digital healthcare and wellness, software security, and behavior design. Currently, Dr. Chander directs the Digital Life Lab at Fujitsu Labs of America, which builds solutions that acknowledge and leverage the “humans-in-the-loop” in an increasingly digitally dense world. At Fujitsu, Dr. Chander also provides technical and strategic thought leadership on the interplay between technology and the human experience, with a focus on human-centric systems and solutions.

Elizabeth Arredondo (AI Conversations Panel Series)

Elizabeth Arredondo is a writer focused on creating compelling characters for television and interactive media. She is currently designing the personality, backstory, and conversations for a robotic wellness coach named Mabu, the latest effort in social robotics from Cory Kidd, formerly of MIT’s Media Lab. After earning her MFA in Writing for Screen and TV from USC’s School of Cinematic Arts in 2005, Elizabeth received a feature film writing fellowship and participated in NBC’s “Writers on the Verge” program. She worked as a staff writer on the primetime CBS drama Cold Case and has also worked with a network to develop an original pilot.

Erik Vinkhuyzen

Erik Vinkhuyzen is a senior researcher at Nissan Research Center in Silicon Valley, where he is a member of the Human-Centered System group. He brings a social scientific perspective to the development of self-driving vehicle technologies. His studies have spanned a range of work settings and technologies, from call center work with CRM systems to clinicians working with electronic medical records to copy shop employees using copiers and printers. His current work focuses on the interaction of autonomous vehicles with other road users and on the interactional challenges for non-drivers of autonomous vehicles.

Cathy Pearl (Panelist)

Cathy Pearl is the author of the O’Reilly book “Designing Voice User Interfaces”. She is VP of User Experience at Sensely, which makes healthcare more effective and accessible with a virtual nurse avatar. During her time at Nuance and Microsoft, she designed VUIs for banks, airlines, healthcare companies, and Ford SYNC; at Volio, she built a conversational iPad app in which Esquire magazine’s style columnist advises users on what to wear on a first date. She has a BS in Cognitive Science from UC San Diego and an MS in Computer Science from Indiana University.

Mariana Lin (Panelist)

Mariana Lin has been a writer for the past 15 years. She previously worked as a principal creative at Siri, developing personality and voice internationally, and currently consults in AI writing. Her writing has appeared in publications such as New York magazine, GQ, Picture, The Huffington Post, the BBC, the Mississippi Review, and the New Guard Literary Review, and she has won awards for poetry and voice branding. She has varying levels of proficiency in French, Chinese, classical Greek, ASL, and LAMP (Language Acquisition through Motor Planning), and is interested in the interplay of language, culture, and AI identity.

Daniel Padgett

Daniel Padgett is a Conversation Design Lead for Google Assistant, creating engaging user experiences for Home, Pixel, Wear, and more. For the past 15 years, he has been leveraging language technologies to develop user-centered solutions for major brands like Allstate, Nike, Target, and Cisco. Just prior to Google, he led service design efforts as Director of Customer Service Experience at Walgreens, deploying proactive and personalized contact solutions for roughly 100 million customers at more than 8,200 stores.