
Validating ML Models: Avoid Common Data and Model Pitfalls

Agenda

Avoiding Common ML Pitfalls: Validating Models and Data

Building good and stable ML models is hard. The process involves a wide variety of challenges: biases introduced when building or splitting datasets, data leakage, data quality and integrity issues, drift, model performance stability, and more.

In this session we’ll explore these challenges, give real-life examples of such faults, and suggest a structure for building tests that catch these issues, so you can validate your models and data efficiently.

We’ll include a hands-on demonstration of running validation tests during the ML research phase, which you can follow along with by running it locally.
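
The session doesn’t prescribe a specific tool, but as an illustration, a validation run with the open-source deepchecks package (maintained by the speaker’s company) might look roughly like the sketch below. The CSV path, the “target” label column, and the model are placeholder assumptions, and exact API names can vary between versions.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import full_suite

# Load your data (hypothetical CSV path and "target" label column).
df = pd.read_csv("my_data.csv")
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)

# Wrap each split in a Dataset so the checks know which column is the label.
train_ds = Dataset(train_df, label="target")
test_ds = Dataset(test_df, label="target")

# Train any scikit-learn-compatible model on the training split.
model = RandomForestClassifier(random_state=42)
model.fit(train_df.drop(columns=["target"]), train_df["target"])

# Run the built-in suite of data-integrity, leakage, drift, and performance
# checks, then save an HTML report you can review locally.
result = full_suite().run(train_dataset=train_ds, test_dataset=test_ds, model=model)
result.save_as_html("validation_report.html")
```

Opening the resulting HTML report shows which checks passed, which failed, and why, which is the kind of structured validation workflow the session walks through.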

By the end of this session, you’ll know which issues to look out for in order to avoid critical problems, and you’ll have the tools to catch them efficiently.

Shir Chorev

Co-founder and CTO of Deepchecks

Shir is the co-founder and CTO of Deepchecks, an MLOps startup for continuous validation of ML models and data. Previously, Shir worked at the Prime Minister’s Office and at Unit 8200, conducting and leading research on various machine learning and cyber-related problems. Shir has a B.Sc. in Physics from the Hebrew University, which she obtained as part of the Talpiot excellence program, and an M.Sc. in Electrical Engineering from Tel Aviv University. Shir was selected as a featured honoree in the Forbes Europe 30 Under 30 class of 2021.


Resources