Everyone seems to be talking about responsible AI these days, but what does “responsible” actually mean, and how should AI/ML product teams incorporate ethics into the development lifecycle? This talk will focus on the organizational processes that support the development of responsible AI systems: the key features of responsible AI to evaluate at each stage of the development lifecycle, and how abstract concepts like fairness can be operationalized into concrete assessment plans. Ian Eisenberg will share best practices from the field and tactical approaches that you can begin using today. He will use “fairness” as an exemplar of responsible AI considerations and describe the process by which a team goes from contextualizing their AI system to assessing it.
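As a rough illustration of what "operationalizing fairness into a concrete assessment plan" can look like in practice, here is a minimal Python sketch that turns one fairness notion (demographic parity) into a measurable check. The column names, toy data, and 0.1 threshold are hypothetical choices for this example, not the specific process described in the talk.

```python
# Illustrative sketch: turning a fairness concept (demographic parity)
# into a concrete, measurable check. Column names and the 0.1 threshold
# are hypothetical placeholders.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           prediction_col: str = "approved",
                           group_col: str = "group") -> float:
    """Max difference in positive-prediction rates across groups."""
    rates = df.groupby(group_col)[prediction_col].mean()
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    # Toy data standing in for a model's decisions on two demographic groups.
    scores = pd.DataFrame({
        "group":    ["a", "a", "a", "b", "b", "b"],
        "approved": [1,   1,   0,   1,   0,   0],
    })
    gap = demographic_parity_gap(scores)
    print(f"Demographic parity gap: {gap:.2f}")
    # An assessment plan might flag the system when the gap exceeds an
    # agreed-upon threshold (0.1 here is a hypothetical example).
    print("Within threshold" if gap <= 0.1 else "Needs review")
```

A team would typically pair a metric like this with context-specific decisions, such as which groups, predictions, and thresholds matter for their system.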
Ian Eisenberg, Head of Data Science building responsible AI.