Building good and stable ML models is hard. The challenges span the whole process: biases introduced when building or splitting datasets, data leakage, data quality and integrity issues, drift, unstable model performance, and many more.
In this session we’ll explore these challenges, give real-life examples of such faults, and suggest a structure for building tests that catch these issues efficiently.
We’ll include a hands-on demonstration of running validation tests during the ML research phase (which you can follow along with by running it locally).
By the end of this session, you’ll know which issues to look out for to avoid critical problems, and have the tools to do so efficiently.
Co-founder and CTO of Deepchecks