Hands-on Introduction to Interpreting Machine Learning Models
Interpretable machine learning is needed because a machine learning model on its own is an incomplete solution. The complex problems we tackle with machine learning can't be solved with traditional software development precisely because we don't fully understand the problem space. By explaining a model's decisions, we can close gaps in our understanding of the problem and increase trust in the products we build with AI. In this session, we will cover why model interpretation matters and survey various methods and their classifications, including feature importance, feature summaries, and local explanations.
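As a small taste of one of the methods mentioned above, here is a minimal sketch of permutation feature importance using scikit-learn. The synthetic dataset and random-forest model are illustrative choices for this example, not materials from the session itself.

```python
# Sketch: permutation feature importance with scikit-learn.
# The dataset and model below are hypothetical, chosen only for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# A small synthetic classification problem with 5 features,
# only 3 of which actually carry signal.
X, y = make_classification(n_samples=300, n_features=5,
                           n_informative=3, random_state=0)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Shuffle each feature column in turn and measure the drop in score;
# larger drops indicate features the model relied on more heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {imp:.3f}")
```

This is a global, model-agnostic importance measure; local explanation methods (e.g. per-prediction attributions) answer a different question, namely why the model made one specific decision.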
Data Scientist and Author