Upcoming Event

November 3

Taxonomies and Ontologies for the Human Experience in Digital Environments

Bechtel Conference Center, Stanford University (tentative)
No Cost Registration Required (Registration Opening Soon)

Ontologies have been developed to describe the world as it is, as Plato says, “to carve nature at its joints.” Such frameworks can reduce complexity in order to make sense of the world; they can facilitate communication. They can support prediction, describe actions, and help select interventions to accomplish practical goals. They are instrumentally pragmatic; they are essential to science. The delineation of boundaries, and the links between them, can be particularly useful in scenarios of ambiguity and unknown or evolving relationships – among people, ideas, tools, or actions.

Taxonomies, then, provide the classifications within ontological frameworks. Many of the ontologies and taxonomies in use today were developed before the dawn of the digital world. As we create and occupy the digital world, the need arises for new language and taxonomies to reference new phenomena, actions and interactions.

Language shapes the way we think about and understand the structure of the world. If language does not accurately track that structure, scientists may misinterpret research results and practitioners may fail to achieve their practical goals. Innovations in language follow the recognition of new phenomena, processes and actions.

And here’s the conundrum. The human experience is not divided into the clear categories that many scientists need for their studies. Researchers cannot stand outside the world of the human to observe and validate that their ontologies and taxonomies are accurate. Data about the human experience is context-dependent; the digital and physical environments are not wholly separate. Measurement methods often highlight some differences and suppress others. Researchers trade off between descriptive and prescriptive goals to determine the best available ontology for a specific purpose. And sometimes new ones are needed.

In this Symposium, mediaX brings together a multidisciplinary selection of scientists and researchers studying the human experience in digital environments. Through established frameworks, concept-proving projects, and practical issues, this program will explore how we might better describe the human experience in digital environments.

Presenters

Bruce McCandliss is the head of the Educational Neuroscience Initiative at Stanford University, where he is a professor in the Graduate School of Education and the Department of Psychology (by courtesy). His research uses the tools of developmental cognitive neuroscience to study individual differences and educational transformations in key cognitive skills such as attention, literacy, and mathematics. He joined Stanford as a full professor in 2014 and, in 2018, launched the Educational Neuroscience Initiative, which brings together elementary school education and neuroscience research to understand how the brain changes with learning.

Carla Pugh is Professor of Surgery at Stanford University School of Medicine. She is also the Director of the Technology Enabled Clinical Improvement (T.E.C.I.) Center. Her clinical area of expertise is Acute Care Surgery. She is the first surgeon in the United States to obtain a PhD in Education. Her goal is to use technology to change the face of medical and surgical education. Her research involves the use of simulation and advanced engineering technologies to develop new approaches for assessing and defining competency in clinical procedural skills. Dr. Pugh holds three patents on the use of sensor and data acquisition technology to measure and characterize hands-on clinical skills. Currently, over two hundred medical and nursing schools are using one of her sensor-enabled training tools for their students and trainees. Her work has received numerous awards from medical and engineering organizations.

Russell Poldrack is the Albert Ray Lang Professor of Psychology and Professor, by courtesy, of Computer Science. The Poldrack Lab, based in the Department of Psychology at Stanford University, uses the tools of cognitive neuroscience to understand how decision making, executive control, and learning and memory are implemented in the human brain. The lab also develops neuroinformatics tools and resources to help researchers make better sense of data. Studies in the laboratory focus on healthy individuals, but the group collaborates with others interested in studying neuropsychiatric disorders, including schizophrenia, bipolar disorder, ADHD, and drug addiction.

Nick Haber is an Assistant Professor at the Stanford Graduate School of Education and, by courtesy, Computer Science. After receiving his PhD in mathematics on the theory of partial differential equations, he worked at Sension, a company that applied computer vision to online education. He then co-founded the Autism Glass Project at Stanford, a research effort that employs wearable technology and computer vision in a tool for children with autism. Alongside this work on learning and therapeutic tools, he and his research group develop artificial intelligence systems meant to mimic and model the ways people learn early in life, exploring their environments through play, social interaction, and curiosity.

Nilam Ram is a Professor in the Departments of Communication and Psychology at Stanford University. Nilam’s research grows out of a history of studying change. Generally, Nilam studies how short-term changes (e.g., processes such as learning, information processing, and emotion regulation) develop across the life span, and how longitudinal study designs contribute to the generation of new knowledge. Current projects include examinations of age-related change in children’s self- and emotion-regulation; patterns in the minute-to-minute and day-to-day progression of adolescents’ and adults’ emotions; and change in contextual influences on well-being during old age. He is developing a variety of study paradigms that use recent advances in data science and the intensive data streams arriving from social media, mobile sensors, and smartphones to study change at multiple time scales.

Dennis Wall is Associate Professor of Pediatrics, Psychiatry and Biomedical Data Sciences at Stanford Medical School. He leads a lab in Pediatric Innovation focused on developing methods in biomedical informatics to disentangle complex conditions that originate in childhood and perpetuate through the life course, including autism and related developmental delays. For over a decade, first on the faculty at Harvard and now at Stanford University, as healthcare has shifted increasingly to digital technologies for data capture at finer resolutions of genomic scale, Dr. Wall has innovated, adapted and deployed bioinformatic strategies to enable precise and personalized interpretation of high-resolution molecular and phenotypic data. Dr. Wall has pioneered the use of machine learning and artificial intelligence for fast, quantitative and mobile detection of neurodevelopmental disorders in children, as well as the use of machine learning systems on wearable devices, such as Google Glass, for real-time “exclinical” therapy.

Jason Yeatman is an Assistant Professor in the Graduate School of Education and Division of Developmental and Behavioral Pediatrics at Stanford University. Dr. Yeatman completed his PhD in Psychology at Stanford, where he studied the neurobiology of literacy and developed new brain imaging methods for studying the relationship between brain plasticity and learning. As director of the Brain Development and Education Lab, he pursues the overarching goal of understanding the mechanisms that underlie the process of learning to read, how these mechanisms differ in children with dyslexia, and how to design literacy intervention programs that are effective across the wide spectrum of learning differences.

Maxi Heitmeyer is a PhD Candidate in the Department of Psychological and Behavioral Science at the London School of Economics and Political Science. His research aims to improve our understanding of how people use their smart devices and social media in everyday life. To do this, he uses sophisticated digital video ethnography techniques (SEBE) to study how users interact with their devices in naturally occurring contexts, what routines and behavioral patterns they have developed, and how these influence their decision-making processes, particularly regarding the use of time and the direction of attention. Maxi’s research interests revolve around ICT use and smartphone and social media addiction.