Why XAI: Foreseeing failure

Given the right encouragement, we can be surprisingly good at foreseeing potential points of failure.  That has led to the common practice of doing a premortem – a pre-launch risk assessment based on prospective hindsight.

As we apply machine learning to more consequential decisions, ask humans to take responsibility for those decisions, and continue to see significant psychological barriers to the adoption of AI, doing premortems on our projects becomes more valuable.  It both reduces the risk of failure and increases the team’s confidence.

XAI approaches lend themselves to projecting possible failures far more readily than black-box systems ever will.

The core issue with black-box systems: “Nobody knows quite how they work. And that means no one can predict when they might fail.”

… “What machines are picking up on are not facts about the world,” Batra says. “They’re facts about the dataset.” That the machines are so tightly tuned to the data they are fed makes it difficult to extract general rules about how they work. More importantly, he cautions, if you don’t know how it works, you don’t know how it will fail. And when they do fail, in Batra’s experience, “they fail spectacularly disgracefully.”

From “Is Artificial Intelligence Permanently Inscrutable?” (Nautilus)
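
To make Batra’s point concrete, here is a minimal, hypothetical sketch (the feature names, numbers, and code are an illustration, not taken from the article): an interpretable model is trained on data containing a spurious artifact, and inspecting its coefficients during a premortem shows it has learned a fact about the dataset rather than the world, which predicts exactly how it will fail once that artifact disappears at deployment.

```python
# Hypothetical premortem sketch: an interpretable model exposes a dataset
# artifact before deployment. Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# A "real" signal: weakly predictive, and it still holds in the world.
real = rng.normal(size=n)
y = (real + rng.normal(scale=2.0, size=n) > 0).astype(int)

# A dataset artifact: a feature that almost perfectly tracks the label in the
# training data only (think scanner ID, watermark, or collection date).
artifact_train = y + rng.normal(scale=0.1, size=n)

X_train = np.column_stack([real, artifact_train])
model = LogisticRegression(max_iter=1000).fit(X_train, y)

# Premortem step: inspect the coefficients. The artifact dominates, a fact
# about the dataset, not the world.
print(dict(zip(["real_signal", "artifact"], model.coef_[0].round(2))))

# At "deployment" the artifact no longer tracks the label, and accuracy
# collapses, exactly the failure the coefficient inspection warned about.
artifact_deploy = rng.normal(size=n)
X_deploy = np.column_stack([real, artifact_deploy])
print("train accuracy:", round(model.score(X_train, y), 2))
print("deploy accuracy:", round(model.score(X_deploy, y), 2))
```

With a black box we would only see the near-perfect training accuracy; with the interpretable model, the oversized artifact coefficient is exactly the kind of failure point a premortem is meant to surface.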

 
