It is easy to imagine that the only benefit of explainability is to placate users – to quiet down the squawking geese who are along for the ride.
That is a tempting idea, but it is also wrong.
Yes, explainability reduces the squawks by addressing both softer human concerns (fears, cognitive biases, established practices, etc.) and hard constraints on adoption (regulatory, legal, etc.).
But explainability can also help you build models that generalize more broadly, are more robust, and are easier to troubleshoot. These are real benefits that go beyond quieting the geese.
The team behind LIME (“local interpretable model-agnostic explanations”) gave an example of one of these benefits. Their paper describes an experiment demonstrating how explanations enabled non-data scientists to select the more generalizable of two candidate models. In this case, the more generalizable model had lower performance as measured by the typical metrics, and so would have been rejected had no explanation been provided. This small experiment was more illustrative than definitive, but we believe it points us down a productive path.
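To make LIME's core idea concrete, here is a minimal sketch (not the authors' implementation) of a local surrogate explanation: perturb the input near a point of interest, query the black-box model, weight the perturbed samples by proximity, and fit a weighted linear model whose coefficients serve as the explanation. The `black_box` function and all parameter values below are illustrative assumptions, not from the paper.

```python
import math
import random

# Hypothetical black-box model: a logistic score over two features.
# In practice this would be any opaque classifier's prediction function.
def black_box(x1, x2):
    return 1.0 / (1.0 + math.exp(-(3.0 * x1 - 2.0 * x2)))

def local_surrogate(f, x0, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a weighted linear surrogate around x0 (LIME's core idea, simplified).

    Returns [intercept, coef_x1, coef_x2]; the coefficients approximate the
    black box's local behavior and can be read as an explanation.
    """
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(n_samples):
        # Perturb the instance of interest with Gaussian noise.
        z1 = x0[0] + rng.gauss(0, 1)
        z2 = x0[1] + rng.gauss(0, 1)
        d2 = (z1 - x0[0]) ** 2 + (z2 - x0[1]) ** 2
        X.append([1.0, z1, z2])                     # intercept + features
        y.append(f(z1, z2))                         # black-box prediction
        w.append(math.exp(-d2 / kernel_width ** 2))  # proximity kernel weight
    # Solve weighted least squares (X^T W X) b = X^T W y by Gauss-Jordan.
    n = 3
    A = [[sum(w[k] * X[k][i] * X[k][j] for k in range(n_samples))
          for j in range(n)] for i in range(n)]
    b = [sum(w[k] * X[k][i] * y[k] for k in range(n_samples)) for i in range(n)]
    for i in range(n):
        piv = A[i][i]
        for j in range(i, n):
            A[i][j] /= piv
        b[i] /= piv
        for r in range(n):
            if r != i:
                factor = A[r][i]
                for j in range(i, n):
                    A[r][j] -= factor * A[i][j]
                b[r] -= factor * b[i]
    return b

coefs = local_surrogate(black_box, (0.0, 0.0))
# Near (0, 0) the logistic is roughly linear, so the surrogate should recover
# the signs of the true effects: positive for x1, negative for x2.
```

A human inspecting `coefs` sees which features drive the prediction locally, which is what let non-experts in the experiment spot a model relying on spurious features despite its better headline metrics.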