Why XAI: Responsibility

Machine learning systems recognize patterns, but it is up to humans to decide whether to act on those patterns and to take responsibility for the consequences. The humans holding that responsibility are not the data scientists; they are the domain experts: the doctors, the judges, the business executives, the legislators. Wide adoption of AI requires effective communication with those stakeholders, which in turn requires explainability.

“Even if a machine made perfect decisions, a human would still have to take responsibility for them — and if the machine’s rationale was beyond reckoning, that could never happen.” – Can A.I. Be Taught to Explain Itself?
