Three Algorithmists Meet Robin Hood
Bruce Cahan is a Consulting Professor in Stanford’s School of Engineering, where he designs and applies new theories for creating a financial and insurance marketplace that improves regional quality-of-life systems. He is also CEO and co-founder of Urban Logic, a nonprofit that harnesses finance and technology to change how systems think, act and feel. He is an Ashoka Fellow and a CodeX Fellow at Stanford’s Center for Legal Informatics. Mr. Cahan was trained as an international finance lawyer at Weil Gotshal & Manges in NYC (10 years) and as a merchant banker at Asian Oceanic in Hong Kong (2 years).
In this talk, Bruce travels into the future and then looks back from 2050 to imagine three tribes of AI practitioners evolving: Receivers, who gather and sell all-purpose “big data”; Amplifiers, who use AI to broadcast and seek conformity to any behavioral action or opinion that government or commercial parties want to fund; and Tuners, who ask whether, after all the Receivers’ and Amplifiers’ actions, the result benefits the ordinary man, woman, child or small business affected and manipulated by them. Bruce will explain this Algorithmists’ Dilemma and propose a Robin Hood Clinic to research, teach and clinically practice AI in service to humankind. Bruce dives deeper into…
1. Ethically Testing and Challenging AI – Medical schools use teaching hospitals, and law schools use law clinics, to check that the diagnostic and treatment options their students and faculty learn and practice actually help real human beings. Does AI need a Robin Hood AI Clinic to perform a similar pedagogical function, one that iteratively improves the likelihood that real human issues and opportunities are addressed holistically by AI?
2. Identity Ownership and Control – Ultimately, the “big data” that AI depends on, homogenizes, recombines, authenticates as reusable and shares references an individual or organization, or an asset owned or used by one. How should such data, and the rights to control, sell, correct and challenge all of its uses, be monitored through AI built for that purpose, as an augmenting regulatory technology?
3. Researchers’ Access to Big Data – Too often, academic and other research papers rely on datasets that are not shared and, where personally identifying information is involved, often cannot be shared. Yet through corporate arrangements with government and intergovernmental data-sharing arrangements, the data is available to a subset of researchers who build hypotheses, public policies and corporate products in reliance on such data, and in pursuit of pre-ordained outcomes. How can AI be made a tool for anonymized data access, and for tracking which research relied on which data?
Read more on this topic from Bruce’s Blog.