Why XAI?

The DARPA team articulated the goals of XAI: “enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners”.

Many of today’s machine learning algorithms fail to achieve these goals because they suffer from the “black box problem”: they are “able to make statistically sound decisions, but they can’t easily explain how they made them”.

Explainability improves machine learning along three dimensions: adoption, robustness, and insight.


Adoption

The finest machine learning system in the world is of only academic value if users ignore its results.  The quality of the explanations delivered has a huge impact on real-world adoption.


Robustness

Black boxes are brittle.  It is hard to make an opaque system robust and adaptable.  If you don’t understand how it works in situation X, how can you possibly project how it will work in situation Y?


Insight

Machine learning algorithms “model” real-world behavior.  Insights into how that reality operates enable us to build better models, and building useful models generates new insights.  Maintaining this positive exchange between model and insight is only possible when a model’s results come with explanations that are meaningful to a domain expert.

