Why XAI: pattern vs. action

Cato liked to point out that there is often “a slip between mouth and morsel”.  

In machine learning we might say that we are often “confounded between pattern and action.” Machine learning is amazing at recognizing patterns and often horrible at anticipating the consequences of taking action based on those patterns. It does little to help you identify causality, feedback loops, and side effects.

Yet we don’t create machine learning systems just for the aesthetic satisfaction of recognizing previously unseen patterns. We build them to gain real-world benefits by using them to change our course of action. That step, translating pattern recognition into action recommendations, is therefore critical.

Consider this high-stakes example: a system built to predict the likelihood of pneumonia complications, in the hope that it could be used to decide which patients could be sent home and which needed hospitalization.

The neural networks were right more often than any of the other methods. But when the researchers and doctors took a look at the human-readable rules, they noticed something disturbing: One of the rules instructed doctors to send home pneumonia patients who already had asthma, despite the fact that asthma sufferers are known to be extremely vulnerable to complications.

The model did what it was told to do: Discover a true pattern in the data. The poor advice it produced was the result of a quirk in that data. It was hospital policy to send asthma sufferers with pneumonia to intensive care, and this policy worked so well that asthma sufferers almost never developed severe complications. Without the extra care that had shaped the hospital’s patient records, outcomes could have been dramatically different.

The hospital anecdote makes clear the practical value of interpretability. “If the rule-based system had learned that asthma lowers risk, certainly the neural nets had learned it, too,” wrote Caruana and colleagues—but the neural net wasn’t human-interpretable, and its bizarre conclusions about asthma patients might have been difficult to diagnose. If there hadn’t been an interpretable model, Malioutov cautions, “you could accidentally kill people.”

— from “Is Artificial Intelligence Permanently Inscrutable?”, Nautilus

The neural network described above was not wrong; the potential failure lay in the leap from the recognized pattern to the proposed action. As long as the hospital did not change its behavior, the network’s predictions would have remained accurate. But had doctors acted on those results, sending asthma patients home, the resulting feedback loop and side effects would have had horrible consequences. Once we make the leap from recognizing a pattern to guiding an action, we open a Pandora’s box of risks that the AI alone is not designed to address. We need human domain experts to assess how to leverage the patterns the AI finds. Those experts will do a far better job of this if the system delivers an explanation with each result, and they will struggle mightily if they are dealing with a black box.
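
To make the confound concrete, here is a minimal sketch in Python using entirely synthetic data with made-up rates; it illustrates the dynamic, not the actual study’s data or model. An interpretable model trained on the hospital’s records “discovers” that asthma is protective, because the policy of sending asthma sufferers to intensive care hides the underlying risk. Acting on that pattern, by withholding the intensive care, destroys it:

```python
# Synthetic illustration of the asthma/pneumonia confound.
# All probabilities below are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

# Hidden truth: asthma RAISES the risk of pneumonia complications...
asthma = rng.random(n) < 0.15
base_risk = np.where(asthma, 0.40, 0.10)

# ...but hospital policy sends asthma patients to intensive care,
# and intensive care sharply reduces that risk.
intensive_care = asthma.copy()
treated_risk = np.where(intensive_care, base_risk * 0.2, base_risk)
complication = rng.random(n) < treated_risk

# An interpretable model trained on the recorded outcomes "learns"
# that asthma is protective: its coefficient comes out negative.
X = asthma.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, complication)
print(f"asthma coefficient: {model.coef_[0][0]:+.2f}")  # negative

# The observed pattern is real: asthma patients DO have fewer
# complications in the data, because of the extra care they received.
print(f"complication rate | asthma:    {complication[asthma].mean():.1%}")
print(f"complication rate | no asthma: {complication[~asthma].mean():.1%}")

# Now act on the pattern: send asthma patients home instead.
# Withdrawing the care removes the cause of the pattern itself.
acted_risk = np.where(asthma, base_risk, treated_risk)
acted_complication = rng.random(n) < acted_risk
print(f"after acting | asthma: {acted_complication[asthma].mean():.1%}")
```

The model is not at fault here; the recorded pattern is genuine. The danger appears only when the recommended action changes the very process that generated the data.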
