Hallucination-Free LLMs: Strategies for Monitoring and Mitigation

This talk covers why and how to monitor LLMs deployed to production in order to improve their performance and efficiency.

We will focus on the state-of-the-art solutions for detecting hallucinations, split into two types:

1. Uncertainty Quantification
2. LLM self-evaluation

In the Uncertainty Quantification part, we will discuss algorithms that leverage token probabilities to estimate the quality of model responses. This ranges from simple accuracy estimation to more advanced methods that estimate Semantic Uncertainty or arbitrary classification metrics.
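As a minimal sketch of the token-probability idea: the per-token log-probabilities returned by an LLM API can be averaged into a single confidence score for the whole response. The values below are hypothetical, standing in for what an API's `logprobs` field would return.

```python
import math

def sequence_confidence(token_logprobs):
    """Estimate response confidence from per-token log-probabilities.

    Returns the geometric mean of the token probabilities, a value in
    (0, 1]; higher means the model was more confident across the whole
    response. In practice the log-probs come from the LLM API; here
    they are plain floats for illustration.
    """
    if not token_logprobs:
        raise ValueError("need at least one token log-probability")
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# Hypothetical values: a confident answer vs. an uncertain one.
confident = sequence_confidence([-0.05, -0.10, -0.02])   # ≈ 0.94
uncertain = sequence_confidence([-1.50, -2.30, -0.90])   # ≈ 0.21
```

A low score alone does not prove a hallucination, but it is a cheap first-pass signal that a response deserves closer scrutiny.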

In the LLM self-evaluation part, we will cover using (potentially the same) LLM to quantify the quality of the answer. We will also cover state-of-the-art algorithms such as SelfCheckGPT and LLM-eval.
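The core intuition behind SelfCheckGPT is that when a model hallucinates, independently sampled answers to the same prompt tend to disagree with each other. The sketch below illustrates that intuition with a toy unigram-overlap agreement measure; the actual SelfCheckGPT algorithm scores sentences against sampled responses using NLI or BERTScore models, not word overlap.

```python
def consistency_score(answer, samples):
    """Toy SelfCheckGPT-style consistency check.

    Compares the main answer against independently sampled answers
    using unigram overlap (a deliberate simplification of the NLI /
    BERTScore comparisons used by the real algorithm). Returns a
    value in [0, 1]; lower agreement suggests hallucination.
    """
    answer_tokens = set(answer.lower().split())
    if not answer_tokens or not samples:
        return 0.0
    overlaps = []
    for sample in samples:
        sample_tokens = set(sample.lower().split())
        overlaps.append(len(answer_tokens & sample_tokens) / len(answer_tokens))
    return sum(overlaps) / len(overlaps)

samples = [
    "Paris is the capital of France",
    "The capital of France is Paris",
]
consistent = consistency_score("Paris is the capital of France", samples)
inconsistent = consistency_score("Lyon is the capital of France", samples)
```

Here the factually wrong answer scores lower because it disagrees with the resampled answers, which is exactly the signal self-evaluation methods exploit.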

You will build an intuitive understanding of LLM monitoring methods and their strengths and weaknesses, and learn how to easily set up an LLM monitoring system that helps keep your LLMs hallucination-free.

Wojtek Kuberski

CTO at NannyML

Wojtek Kuberski is an AI professional and entrepreneur with a master’s degree in AI from KU Leuven. He founded Prophecy Labs, a consultancy specializing in machine learning, before assuming his current role as co-founder and CTO of NannyML, an open-source library for ML monitoring and silent ML failure detection. At NannyML, he leads the research and product teams, contributing novel algorithms in model monitoring.

We are looking for passionate people willing to cultivate and inspire the next generation of leaders in tech, business, and data science. If you are one of them, get in touch with us!