XAI flags fragile ML models before patients suffer


XAI can help us see when models are finding the right answer for the wrong reasons, which suggests the model won’t generalize or stay robust as it is applied to a greater variety of new situations. XAI transparency can make it obvious, even to non-data-scientists, what corrective action is required.

The Register’s overview of explainable AI references a good example:

In addition, an AI system trained on sample data may appear to be coming up with the right answers, but not always for the right reasons. A recent article in New Scientist highlighted the case of a machine learning tool developed at Vanderbilt University in Tennessee to identify cases of colon cancer from patients’ electronic records. Although it performed well at first, the developer eventually discovered it was picking up on the fact that patients with confirmed cases were sent to a particular clinic rather than clues from their health records.

Having an explainable AI system should also enable such issues to be spotted and fixed during the development phase, rather than only becoming apparent after the system has been in use for a period of time.

Obviously the particular correlation this AI learned (referral clinic with colon cancer diagnosis) is not going to generalize when the model is applied to other geographies. Even though the model appears high-performing on the lab bench, it has negative production utility: it would endanger patients in broad use. A black-box approach can hide these performance-utility disconnects, while an XAI approach can make them apparent.
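To make the failure mode concrete, here is a minimal sketch in Python using a synthetic dataset (all feature names are hypothetical, not from the Vanderbilt study). A `referral_clinic` feature leaks the label, standing in for "confirmed cases were sent to a particular clinic." Permutation importance, one basic XAI technique, surfaces the leak before the model ever reaches production:

```python
# Hypothetical sketch: a leaky feature exposed by permutation importance.
# Feature names are illustrative, not from the actual Vanderbilt system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000

# Two plausible clinical signals: one irrelevant, one noisy but real.
age = rng.normal(60, 10, n)
biomarker = rng.normal(0, 1, n)
label = (biomarker + rng.normal(0, 2, n) > 0).astype(int)

# The leak: confirmed cases were all routed to one clinic,
# so this feature perfectly mirrors the label.
referral_clinic = label.astype(float)

X = np.column_stack([age, biomarker, referral_clinic])
names = ["age", "biomarker", "referral_clinic"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, label)
imp = permutation_importance(model, X, label, n_repeats=10, random_state=0)

for name, score in sorted(zip(names, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:16s} {score:.3f}")
# referral_clinic dominates the ranking — a red flag that the model is
# keying on patient routing rather than anything in the health record.
```

A reviewer who sees `referral_clinic` towering over every genuine clinical feature knows immediately that the model has learned an artifact of the data pipeline, which is exactly the kind of lab-bench illusion the paragraph above describes.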
