The current practice of machine learning is much like being the blind button sorter: a surprising accomplishment, performed with impressive skill and speed, but without context it can be meaningless. Why were the buttons mixed up to start? How will they be used after they are sorted? Are we sorting them by shape when the real need is to sort them by material or color or wear or … ?
Machine learning is impressive at pattern recognition, but pattern recognition is only one part of creating real value.
In a machine learning project we are creating models of our underlying reality, and we need to be constantly making connections between those models and that reality. We should be asking ourselves: How does our knowledge of those real-world processes improve our ability to build useful models? How does our experience building those models give us new insights into how the real world works and how we can change it?
With black-box models it is much harder to create those connections; with explainable models it is much easier.
Here are three examples of where insights matter: