NY Times OpEd makes case for explainable AI

Vijay Pande’s article at first focuses on the limits of our own decision making, which makes one suspect he is headed for the “humans can’t explain themselves, so why should AIs?” fallacy.

However, in the end he makes a more productive argument: that AIs can and should be interrogated and interpreted. This is exactly the argument for applying explainable AI techniques.

The irony is that compared with human intelligence, A.I. is actually the more transparent of intelligences.  Unlike the human mind, A.I. can — and should — be interrogated and interpreted.  Like the ability to audit and refine models and expose knowledge gaps in deep neural nets and the debugging tools that will inevitably be built … there are many technologies that could help interpret artificial intelligence in a way we can’t interpret the human brain
Artificial Intelligence’s ‘Black Box’ Is Nothing to Fear

Vijay goes on to point out the need for a new form of cooperation between AI systems and humans:

… we will soon see the creation of a category of human professionals who don’t have to make the moment-to-moment decisions themselves but instead manage a team of A.I. workers — just like commercial airplane pilots who engage autopilots to land in poor weather conditions. Doctors will no longer “drive” the primary diagnosis; instead, they’ll ensure that the diagnosis is relevant and meaningful for a patient and oversee when and how to offer more clarification and more narrative explanation …
Artificial Intelligence’s ‘Black Box’ Is Nothing to Fear

Inherently, this kind of cooperation requires the interpretability and interrogation capabilities that will come with XAI.
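
To make the idea of interrogating a model a little more concrete, here is a minimal sketch of one common explainability technique, permutation feature importance, using scikit-learn. The dataset, model, and feature names below are illustrative assumptions for the sketch; they are not taken from Pande's article.

# A minimal sketch of "interrogating" a model: permutation feature importance
# with scikit-learn. The data and model here are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real decision-support dataset.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Ask the model which inputs its predictions actually depend on by shuffling
# each feature in turn and measuring how much held-out accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature_{i}: importance {mean:.3f} +/- {std:.3f}")

The point of the sketch is the workflow, not the particular method: a human overseer can ask the system which factors drove a decision and get a quantitative answer, which is exactly the kind of interrogation you cannot perform on another person's intuition.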
