At Firefly.ai we use AutoML to build models for users from various industries. These models can achieve high scores, but this usually comes at the price of complex models that are hard to interpret. Understanding how our models make their predictions is important for quality control, for revealing patterns in the data, and for building trust in the results.
Interpretability of ML models is a relatively new field that is being studied from a variety of angles. In this session I will review how we applied techniques such as sensitivity analysis, PDP & ICE, and LIME to explain the predictions of generic, black-box models at Firefly.ai. Using two real-world use cases from the banking and real-estate industries, I will describe the pros and cons of each method, the differences between them, and production considerations.
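To give a flavor of one of the techniques mentioned above: a partial dependence plot (PDP) shows how a model's average prediction changes as one feature is varied while all others are left at their observed values. The sketch below is a minimal, model-agnostic implementation of that idea; the dataset, the gradient-boosting model, and the helper name `partial_dependence_1d` are illustrative assumptions, not the setup used at Firefly.ai.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

def partial_dependence_1d(model, X, feature, grid_points=20):
    """Model-agnostic 1-D PDP: for each grid value v of the chosen
    feature, overwrite that feature with v across the whole dataset
    and record the mean prediction. (Illustrative helper name.)"""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_points)
    pdp = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v          # fix the feature of interest
        pdp.append(model.predict(X_mod).mean())
    return grid, np.array(pdp)

# Synthetic example: any fitted estimator with .predict() works,
# which is what makes the technique suitable for black-box models.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)
grid, pdp = partial_dependence_1d(model, X, feature=0)
```

Keeping the per-row predictions instead of averaging them yields ICE (individual conditional expectation) curves, one per sample, which is the companion technique covered in the session.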