Dealing with Errors, Fairness, Explainability and Testing of Machine Learnt Models
Rama Akkiraju is a Director, Distinguished Engineer, and Master Inventor at IBM's Watson Division, where she leads the AI mission of enabling natural, personalized, and compassionate conversations between computers and humans. In her career, Rama has worked on agent-based decision support systems, electronic marketplaces, and semantic Web services, for which she led a World Wide Web Consortium (W3C) standard. Rama has co-authored 4 book chapters and over 50 technical papers. She holds over a dozen issued patents and has 20+ pending, and is the recipient of 3 best paper awards in AI and Operations Research. In this talk, Rama discusses these points:
1. AI models rarely get it right in the first iteration. Plan for continuous improvement.
2. Be diligent with error analysis. If you can't fix every error with the resources you have, fix the ones that matter most.
3. Feel free to declare the biases in your training data; that way, users know what they are getting.
4. Must you build opaque statistical deep-learning models and try to explain their predictions after the fact? Or should you build transparent models that may take more time and effort but are explainable from the get-go?
5. Humans are an integral part of building AI models. Their personalities and backgrounds all make their way into AI models via labeling, data selection, data source identification, feature selection, etc.
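Point 2 above can be made concrete with a small sketch. The talk does not prescribe a method, so the error categories, counts, and severity weights below are purely illustrative assumptions; the idea is simply to rank error categories by (frequency x severity) so that limited engineering effort goes to the errors that matter most.

```python
from collections import Counter

# Hypothetical error log from an error-analysis pass: each entry is the
# analyst-assigned category of one observed model mistake.
error_log = [
    "wrong_intent", "wrong_intent", "wrong_intent", "wrong_intent",
    "entity_miss", "entity_miss", "entity_miss",
    "tone_mismatch",
    "out_of_scope", "out_of_scope",
]

# Assumed per-category cost of leaving an error unfixed (illustrative weights).
severity = {"wrong_intent": 3, "entity_miss": 2, "tone_mismatch": 1, "out_of_scope": 1}

def triage(log, weights):
    """Rank error categories by frequency x severity, highest impact first."""
    counts = Counter(log)
    return sorted(counts, key=lambda c: counts[c] * weights.get(c, 1), reverse=True)

print(triage(error_log, severity))
# "wrong_intent" ranks first: 4 occurrences x weight 3 = impact 12.
```

In practice the categories come from manually inspecting a sample of failures, and the weights from product or user-harm considerations rather than fixed constants.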
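Point 4 contrasts post-hoc explanation with models that are transparent by construction. As a minimal sketch of the latter (the domain, rules, and thresholds are invented for illustration, not taken from the talk), a hand-auditable rule list can return both a prediction and the exact rule that produced it, so no after-the-fact explainer is needed:

```python
# A "transparent from the get-go" model: an ordered list of human-readable
# rules. The rules and thresholds below are hypothetical examples.
RULES = [
    ("income < 20000", lambda x: x["income"] < 20000, "deny"),
    ("debt_ratio > 0.6", lambda x: x["debt_ratio"] > 0.6, "deny"),
]

def predict(applicant):
    """Return (decision, reason): the reason names the rule that fired."""
    for name, condition, outcome in RULES:
        if condition(applicant):
            return outcome, f"rule fired: {name}"
    return "approve", "no deny rule fired"

decision, reason = predict({"income": 50000, "debt_ratio": 0.7})
print(decision, "-", reason)  # the decision carries its own explanation
```

The trade-off the talk raises is visible here: such models take deliberate design effort and may underperform opaque learned models, but every prediction is explainable by inspection.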