Developing a high-performance model is challenging enough; keeping that performance in production is harder still, because models often decay as real-world environments change. Everyone wants a high-performing model, but what does that mean? What should you be measuring? How can you tell when your model is degrading because of data or concept drift? What are the root causes of these issues, and how can you iterate to improve your model? The lecture will cover conceptual foundations as well as a demonstration of the concepts with actual models.
This talk will cover:
– At what points in ML model development should you be testing?
– What types of performance testing should be done?
– What types of drift should you test for?
– How can you systematically test, debug, and monitor your models?
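As a small illustration of the drift testing mentioned above, one common approach is to compare a feature's distribution in a reference (training) window against a recent production window with a two-sample statistical test. The sketch below uses SciPy's Kolmogorov-Smirnov test; the synthetic data, window sizes, and significance threshold are all illustrative assumptions, not part of the talk itself.

```python
# Minimal sketch: detecting data drift in one feature with a two-sample
# Kolmogorov-Smirnov test. The data and the 0.01 threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # reference window
prod_feature = rng.normal(loc=0.5, scale=1.0, size=5000)   # shifted production window

stat, p_value = ks_2samp(train_feature, prod_feature)
if p_value < 0.01:  # illustrative significance threshold
    print(f"Possible data drift (KS statistic={stat:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")
```

In practice a check like this would run per feature on a schedule, and a low p-value would trigger investigation rather than automatic retraining.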