“Performance > Interpretability”

Christoph Molnar has a good thread refuting the most common arguments against implementing interpretable machine learning. I recommend reading the whole thing.

In this post we will just focus on one aspect of his thread:

[Screenshot of the tweet from Christoph Molnar’s thread]

Christoph suggests several circumstances where it is a mistake to sacrifice interpretability to improve model performance metrics. Of course, there is only so much you can fit in a tweet, so his list is incomplete. However, it is a good place to start; let’s consider each of his points:

Can’t capture 100% of your problem’s definition in a single loss function

Having a calculable loss function is important for making machine learning algorithms practical. Minimizing the loss function is a common model performance metric, and tuning a model to achieve this minimization is a standard part of development.

However, in general it doesn’t make sense to sacrifice interpretability to achieve loss function optimization, because the loss function does not fully capture the upside value creation and downside risks of applying the model. In most use cases, explainability can contribute to value creation and minimize the risks in ways that are not reflected in the loss function. This is particularly true when dealing with messy human systems, such as business and healthcare use cases.
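To make this concrete, here is a minimal sketch; the synthetic dataset and the two model choices are arbitrary stand-ins, not a prescription. Both models are judged on the same proxy loss, but only the interpretable one also tells us which features drive its predictions, and that information never shows up in the loss comparison.

```python
# A sketch only: make_classification stands in for real data, and the two models
# are arbitrary examples of a black-box and an interpretable alternative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=10, n_informative=4, random_state=0)

models = {
    "gradient boosting (black box)": GradientBoostingClassifier(random_state=0),
    "logistic regression (interpretable)": LogisticRegression(max_iter=1000),
}

# Both models are scored on the same proxy loss (log loss via cross-validation).
for name, model in models.items():
    loss = -cross_val_score(model, X, y, scoring="neg_log_loss", cv=5).mean()
    print(f"{name}: mean log loss = {loss:.3f}")

# The interpretable model's coefficients show which features drive its predictions --
# information the loss comparison above does not capture.
coef = models["logistic regression (interpretable)"].fit(X, y).coef_
print("logistic regression coefficients:", coef.round(2))
```

Even if the black-box model posts a slightly lower loss, the coefficients are the part of the output a stakeholder can act on, which is exactly the value the loss function ignores.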

Keep in mind that an ML model is typically only one subsystem in a bigger process, which might include other models, traditional software components and human processes. Optimizing that one subsystem’s loss function is not equivalent to optimizing the result for the entire system.

Also keep in mind that our loss function is typically only an approximation and simplification of the underlying reality we are modeling.

We need to think beyond model metrics and remember that enterprise utility outweighs model performance.

The training data is imperfect

There are multiple ways your training data can fall short. It might not accurately reflect the full range of variability we will see when applying the model. It may contain leakage from the target variable. It may be missing the features needed to maximize generalizability. And so on.

In almost all of these cases, explainability helps us recognize and correct the limitations of the training data.
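As an illustration, here is a minimal sketch using synthetic data in which we deliberately inject a “leaky” column (a noisy copy of the label) and then use permutation importance to see how an explanation method makes the leakage obvious; the specific model and importance method are assumptions, not the only way to do this.

```python
# A sketch only: feature 5 is a deliberately leaked copy of the target, and the
# explanation step (permutation importance) is what exposes it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
leaky = y + np.random.default_rng(0).normal(scale=0.1, size=y.shape)
X = np.column_stack([X, leaky])  # feature 5 is the leak

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# A suspiciously high score plus one feature dominating the importances is the
# classic signature of target leakage, worth investigating before deployment.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: permutation importance = {importance:.3f}")
```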

Care to learn something about the problem

Explainability helps us make connections between the patterns found by machine learning and our causal insights into the real-world systems we are modeling. Making connections between models and insights is a powerful way to generate value from our projects.
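One concrete way to make that connection is to ask how the model’s predictions respond to a feature we have a causal hypothesis about. The sketch below uses a synthetic dataset and an arbitrary model, and hand-rolls a simple partial-dependence check for that purpose; it is an illustration of the idea, not a complete analysis.

```python
# A sketch only: make_friedman1 stands in for a real dataset, and the loop below is
# a hand-rolled partial-dependence check -- one common way to compare the model's
# learned pattern against what we believe about the real system.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_friedman1(n_samples=500, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Vary feature 0 over a grid while holding everything else fixed, and average the
# model's predictions: does the resulting shape match our domain intuition?
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 10)
for value in grid:
    X_mod = X.copy()
    X_mod[:, 0] = value
    print(f"feature 0 = {value:.2f} -> average prediction = {model.predict(X_mod).mean():.2f}")
```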
