fbpx
until LLM Bootcamp: In-Person (Seattle) and Online Learn more

Monitoring LLMs in Production with Hugging Face & WhyLabs

Agenda

The ability to effectively monitor and manage Large Language Models (LLMs) like GPT from OpenAI has become essential in the rapidly advancing field of AI. WhyLabs has created a powerful new tool, LangKit, to ensure LLM applications are monitored continuously and operated responsibly.

Join our live event designed to equip you with the knowledge and skills to use LangKit with Hugging Face models. Guided by a team of experienced AI practitioners, you’ll learn how to evaluate, troubleshoot, and monitor large language models more effectively.

This live session will cover how to:
-Understand: Evaluate user interactions to monitor prompts, responses, and user interactions.
-Guardrail: Configure acceptable limits to indicate things like malicious prompts, toxic responses, hallucinations, and jailbreak attempts.
-Detect: Set up monitors and alerts to help prevent undesirable behavior.

Sage Elliott-WhyLabs-LLMs
Sage Elliott

Technical Evangelist-AI and MLOps at WhyLabs

Sage Elliott enjoys breaking down the barrier to AI observability, talking to amazing people in the Robust & Responsible AI community, and teaching workshops on machine learning. Sage has worked in hardware and software engineering roles at various startups for over a decade.

We are looking for passionate people willing to cultivate and inspire the next generation of leaders in tech, business, and data science. If you are one of them get in touch with us!

Resources