
Explore AI and Ethics to Deploy Trustworthy Generative AI

Agenda

Generative AI models and applications are being rapidly deployed across several industries, but their deployment raises ethical and social considerations that need to be addressed.

These concerns include lack of interpretability, bias, discrimination, privacy risks, lack of model robustness, fake and misleading content, copyright implications, plagiarism, and the environmental impact associated with training and inference of generative AI models.

In this talk, we first provide a brief overview of generative AI, motivate the need to adopt responsible AI principles when developing and deploying LLMs and other generative AI models, and outline a roadmap for thinking about responsible AI for generative AI in practice.

We’ll also focus on real-world LLM use cases, such as evaluating LLMs for robustness, security, bias, and more.

Through real-world generative AI use cases, lessons learned, and best practices, this talk will help practitioners build more reliable and trustworthy generative AI applications.

Krishnaram Kenthapadi

Chief AI Officer & Chief Scientist at Fiddler AI

Krishnaram Kenthapadi is the Chief AI Officer & Chief Scientist at Fiddler AI, an enterprise startup focused on responsible AI and ML monitoring. Formerly, he served as a Principal Scientist at Amazon AWS AI, leading initiatives in fairness, explainability, privacy, and model understanding. Prior to Amazon, he led similar efforts at LinkedIn AI and represented LinkedIn on Microsoft’s AI and Ethics in Engineering and Research (AETHER) Advisory Board. Krishnaram earned his Ph.D. in Computer Science from Stanford University in 2006.
