Human Learning, After Machine Learning Panel

Keith Devlin is a co-founder and Executive Director of the university's H-STAR institute, a co-founder of the Stanford mediaX research network, and a Senior Researcher at CSLI. He is a World Economic Forum Fellow, a Fellow of the American Association for the Advancement of Science, and a Fellow of the American Mathematical Society. His current research is focused on the use of different media to teach and communicate mathematics to diverse audiences. In this connection, he is a co-founder and President of an educational technology company, BrainQuake, that creates mathematics learning video games.

John Perry is Henry Waldgrave Stuart Professor of Philosophy Emeritus at Stanford University, and co-director of the CEC at CSLI. He has made significant contributions to philosophy in the fields of logic, philosophy of language, metaphysics, and philosophy of mind. He is known for his work on situation semantics (together with Jon Barwise), reflexivity, indexicality, personal identity, and self-knowledge. He has authored several books, including, most recently, Reference and Reflexivity. John discusses...

1. AI and Philosophy: personal recollections of a troubled relationship.
2. Computers as persons: the issue of personal identity.
3. Computers as persons: the issue of consciousness.
4. A Proposed Constitutional Amendment.

Pat Langley serves as Director of the Institute for the Study of Learning and Expertise and as Honorary Professor of Computer Science at the University of Auckland. He has contributed to artificial intelligence and cognitive science for more than 35 years, having published over 200 papers and five books on these topics. Professor Langley developed some of the earliest computational approaches to scientific knowledge discovery, and he was an early champion of both experimental studies of machine learning and its application to real-world problems. Pat examines...

1. Many key ideas in AI and machine learning had their origin in theories of human cognition and learning.
2. People can collaborate more effectively with AI systems that:
- Adopt human-like representations and knowledge
- Follow social norms of interaction and dialogue
- Reason about the beliefs and goals of those they aid

3. This “cognitive systems” approach differs from recent AI bandwagons, but it has a long and successful history.

Emma Brunskill is an assistant professor of computer science at Stanford University. Her lab works to advance the foundations of reinforcement learning and artificial intelligence, with a particular focus on the technical challenges that arise when we construct interactive systems that amplify human potential. She has received multiple national awards for outstanding work as a young faculty member, and her group has received numerous best paper nominations for their research. Emma examines...

1. Human-in-the-loop reinforcement learning is about building algorithms and automated agents that assist people and can leverage human expert input.
2. Having a person in the loop introduces new technical challenges.
3. This work could impact education, health, consumer marketing, and many other domains.

John Willinsky is Khosla Family Professor of Education at the Stanford Graduate School of Education, Professor (Part-Time) of Publishing Studies at SFU, and Distinguished Scholar in Residence at the SFU Library. John started the Public Knowledge Project (PKP) in 1998 at the University of British Columbia in an effort to create greater public and global access to research and scholarship through the use of new publishing technologies. His current research interests include scholarly communication, the sociology of knowledge, and technology and literacy. John looks at...

1. Elon Musk warns about “something seriously dangerous happening” with AI. The danger I see with machine learning is that it gives rise to the qualified back-formation “human learning,” for which machine learning does not bode particularly well, even as it is always about to surpass it.
2. Machine learning’s success brings renewed attention to training-as-learning with the emphasis on efficient and speedy pattern recognition; this is considered an educationally reduced form of learning (“drill and kill”) compared to fostering understanding, inquiry, questioning, and imagination.
3. More seriously, the achievements of deep learning enable reduced programming, supervision, and training data, while doing even less to inform our understanding of human learning processes, despite being modeled on our neurophysiology.
4. If machine learning proceeds without a greater concern for our learning, the combination of proprietary and untraceable learning suggests (a) less of a human-AI collaboration and more parallel play; (b) areas of the human experience that will be left to machine learning to direct; and (c) gains in learning at the expense of knowledge apart from that of machine learning.