
Building Responsible AI: best practices across the product development lifecycle


Everyone seems to be talking about responsible AI these days—but what does “responsible” actually mean, and how should AI/ML product teams incorporate ethics into the development lifecycle? This talk focuses on the organizational processes that support the development of responsible AI systems. It covers the key features of responsible AI to evaluate at each stage of the development lifecycle, and how abstract concepts like fairness can be operationalized into concrete assessment plans. Ian Eisenberg will share best practices from the field and tactical approaches that you can begin using today. He will focus on fairness as an exemplar of responsible AI considerations, and describe the process by which a team can go from contextualizing their AI system to assessing it.
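As an illustration of what operationalizing fairness into a concrete assessment can look like, here is a minimal sketch of one common metric, demographic parity difference (the gap in positive-prediction rates between groups). This is an illustrative example only; it is not taken from the talk and does not represent Credo AI's Lens API.

```python
# Illustrative sketch: "fairness" made concrete as demographic parity
# difference -- the gap in positive-prediction rates between groups.

def selection_rate(preds, group, value):
    """Fraction of positive predictions within one group."""
    members = [p for p, g in zip(preds, group) if g == value]
    return sum(members) / len(members) if members else 0.0

def demographic_parity_difference(preds, group):
    """Max gap in selection rates across all groups (0.0 = perfect parity)."""
    rates = [selection_rate(preds, group, v) for v in set(group)]
    return max(rates) - min(rates)

# Toy example: binary model predictions for members of groups "a" and "b".
preds = [1, 0, 1, 1, 0, 0, 1, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, group))  # group "a" selected 75%, group "b" 25% -> 0.5
```

A metric like this only becomes an assessment once paired with context: which groups matter, what gap is acceptable, and at which lifecycle stage it is measured — exactly the kind of organizational work the talk addresses.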

Ian Eisenberg

Head of Data Science, Credo AI

Ian Eisenberg is Head of Data Science at Credo AI, where he leads the development of Lens, a Responsible AI Assessment Framework. Ian believes safe AI requires systems-level approaches that draw on technical, social, and regulatory advancements. His interest in AI began as a neuroscientist at Stanford and grew into a focus on responsible AI through his involvement with the Effective Altruism movement. Prior to joining Credo AI, Ian built recommender systems at Triplebyte and was a researcher at Stanford, the NIH, Columbia, and Brown University. He received his Ph.D. from Stanford University and his BS from Brown University.
