Generative AI models and applications are being rapidly deployed across many industries, but their use raises ethical and social considerations that need to be addressed.
These concerns include lack of interpretability, bias and discrimination, privacy risks, lack of model robustness, fake and misleading content, copyright implications, plagiarism, and the environmental impact of training and running inference on generative AI models.
In this talk, we first provide a brief overview of generative AI, motivate the need to adopt responsible AI principles when developing and deploying LLMs and other generative AI models, and lay out a roadmap for putting responsible AI for generative AI into practice.
We’ll also focus on real-world LLM use cases, such as evaluating LLMs for robustness, security, and bias.
By sharing real-world generative AI use cases, lessons learned, and best practices, this talk will help practitioners build more reliable and trustworthy generative AI applications.
Chief AI Officer & Chief Scientist at Fiddler AI