Insights from Embodied and Non-embodied AI Panel

Elizabeth Wilsey (Moderator) is the Community Network Specialist at mediaX at Stanford University. In this role she supports member management, relationship development, and the internal and external community of mediaX. She specializes in relationship cultivation and logistics coordination, with experience in both academia and theatrical production. Prior to arriving at Stanford, she worked as an academic conference planner for faculty across many disciplines at the University of Notre Dame, including sites in London, Rome, Dublin, and Jerusalem.

Annabell Ho is a PhD candidate in the Department of Communication at Stanford University. Her research focuses on the psychological effects of interacting with chatbots compared to other people, particularly in the context of emotional disclosure. She is broadly interested in how the psychological dynamics that occur between people change when the interaction partner is a computer instead. In her talk, Annabell makes the following points:

1. When people talk about their feelings to a chatbot instead of another person, they can feel just as good after getting supportive responses and just as bad after getting unsupportive responses.
2. But sometimes the identity of the partner does matter. We need to better understand when that is the case to make sure that supportive AI is safe and effective.
3. AI should be built with care to provide truly validating rather than invalidating responses, since invalidating responses can be harmful even if people know the partner is a computer that doesn’t “know any better”.
4. AI should be built with careful thought about how it would impact the person’s relationships, both with other people around them and with the AI itself.

Mariana Lin is the Head of Character for Sophia of Hanson Robotics, and was formerly the creative director and lead writer behind Siri, overseeing personality and voice. A writer and editor for over 15 years, she authors the column Artificial Intelligentsia in the Paris Review on creative writing for AI, and writes poetry. She is currently collaborating on a study on authenticity and AI at the Stanford Graduate School of Business. A lover of languages and mother to a non-verbal son, she is interested in the interplay of culture, personality, and human-AI communications. In her presentation, Mariana examines the following:

1. In crafting AI personalities, we have an opportunity to look at human culture and identity from a new perspective. What would an alien being think of the way humans categorize and organize ourselves?
2. Embodied AI needs to take greater care in its visual design to be broadly culturally appealing and to address and challenge biases. The more anthropomorphic the AI is, the more this is both a concern and an opportunity.
3. Emotions are a way to bridge differences across cultures in AI interactions, because they underlie the unifying human experience.

Cory Kidd is the founder and CEO of Catalia Health. The company develops a hardware and software platform that uses a combination of psychology and artificial intelligence to engage patients through interactive conversations. These conversations happen through mobile, web, and interactive robotic interfaces; together these interfaces create a relationship that can reach patients at any time they need support. Dr. Kidd has been working in healthcare technology for nearly two decades. In this talk, Cory discusses the following:

1. We start with what a doctor or nurse might ask a patient about or inform them of and build conversations from there. Using both doctors and nurses is helpful because each has a different relationship with a patient.
2. Using humor in a healthcare discussion is very controversial — just ask some of our customers who hate the idea! — but we think it’s important.
3. We created an identity for Mabu that partly relies on existing constructs of health caregivers but is also somewhat different from them. There are benefits and drawbacks to each approach.

John Ostrem is the CEO and co-founder of AvatarMind, a company that makes the iPal social robot for children’s education, elder care, and retail/hospitality. He has over 25 years’ experience developing new technology, bringing products to market, and founding new companies. He founded China MobileSoft in 2001, a company that developed Linux-based telecommunications software for the Chinese and international markets, and has served as its Chairman, CEO, and CTO. In this talk, John addresses the following questions:

1. What is a social robot?
2. What makes the iPal a humanoid social robot?
3. What is needed to make robots fit into society in such a way that they can interact effectively with people and be accepted as part of everyday life?