Why XAI: Outlier vs. Error

… even when the model fails to predict what really happened, its ability to explain the process in an intelligible way is still crucial …
Analytics Lessons Learned from Winston Churchill

For any complicated system, it is critical to have effective troubleshooting mechanisms.  Things will go wrong, and when they do, we need a plan for how we will understand and resolve those issues.

Machine learning systems are inherently probabilistic.  There is a distribution of results, and even a correctly functioning system will produce occasional outlier results that a human perceives as wrong, even though such outliers are to be expected.  The challenge is that sometimes the result actually is wrong: there was an error in the data collection, a bug in the code, or an unexpected subtle interaction between subsystems.
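
To make that concrete, here is a minimal sketch with assumed, hypothetical numbers: a well-calibrated model with Gaussian noise will see roughly 5% of its results fall outside its own 95% interval, by construction, so any one surprising result is not evidence of a bug.

```python
# A minimal sketch with hypothetical numbers: even an unbiased model with
# known Gaussian noise sees ~5% of results fall outside the 95% interval.
# Those are expected outliers, not errors.
import random

random.seed(0)
N = 10_000
SIGMA = 1.0  # assumed noise level of the correctly functioning model

outliers = 0
for _ in range(N):
    truth = random.uniform(0, 100)
    prediction = truth + random.gauss(0, SIGMA)  # model is unbiased
    if abs(prediction - truth) > 1.96 * SIGMA:   # outside the 95% interval
        outliers += 1

print(f"{outliers / N:.1%} of results look 'wrong' by design")
```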

So how do stakeholders distinguish between expected outliers and true errors?  With a black box system, that is tough to do.  With an explainable system, our stakeholders can look at the explanation that comes with a result and confirm its reasonableness.  Often, looking at the explanation directly guides us where to go next with our troubleshooting:

  • is there faulty data?
  • is there correct data that was not represented in the original training set?
  • is there a fault in the logic?
  • are we making too big a leap from the recognized pattern to the proposed action?
  • and so on.

Recognizing these issues is much tougher when dealing with a black box.
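
As an illustration of that triage process, here is a hedged sketch, not any particular XAI library's API: the names `ExplainedResult` and `triage` are hypothetical, and the explanation is a simple set of per-feature contributions.  The point is that a reviewer can route a surprising result to one of the questions above just by reading the explanation.

```python
# A hypothetical sketch of explanation-guided triage: every result carries
# per-feature contributions, and a reviewer maps suspicious patterns in the
# explanation to a troubleshooting question from the list above.
from dataclasses import dataclass

@dataclass
class ExplainedResult:
    prediction: float
    contributions: dict[str, float]  # feature -> contribution (assumed form)

def triage(result: ExplainedResult, known_features: set[str]) -> str:
    # Find the feature with the largest absolute contribution.
    top_feature, top_weight = max(
        result.contributions.items(), key=lambda kv: abs(kv[1])
    )
    if top_feature not in known_features:
        return f"'{top_feature}' was not represented in the original training set"
    if abs(top_weight) > 0.9 * abs(result.prediction):
        return f"one feature ('{top_feature}') dominates -- check for faulty data"
    return "explanation looks reasonable -- likely an expected outlier"

result = ExplainedResult(
    prediction=42.0,
    contributions={"age": 3.0, "zip_code": 31.5, "login_count": 0.5},
)
print(triage(result, known_features={"age", "login_count"}))
```

Running this on the sample result flags `zip_code` as a feature the training set never covered, which is exactly the kind of lead a black box cannot surface.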
