Judge AIs as aides not alternatives


As portrayed in this article, Peter Norvig gets many things right.  For example, the “output of machine learning systems [can be] a more useful probe for fairness” than recapitulating a model’s internal mechanics.
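For concreteness, here is a minimal sketch (mine, not Norvig’s) of what probing a model’s outputs for fairness can look like: treat the model as a black box and compare how often it predicts the positive class for each value of a sensitive attribute.  The names `model`, `X` and `group` are hypothetical.

```python
import numpy as np

def positive_rate_by_group(model, X, group):
    """Probe a black-box model's behaviour: the share of positive
    predictions it makes for each value of a sensitive attribute."""
    preds = model.predict(X)            # only the outputs are inspected
    rates = {}
    for g in np.unique(group):
        mask = (group == g)
        rates[g] = preds[mask].mean()   # positive-prediction rate for group g
    return rates

# Usage (hypothetical): a large gap between groups flags a fairness
# concern without ever opening up the model's internals.
# print(positive_rate_by_group(model, X_test, sensitive_attr))
```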

However, he unfortunately perpetuates one harmful notion: the idea that you should evaluate an AI as an alternative to a human rather than as an aide to that human.

But efforts to crack open the black box hit a snag yesterday, as the research director of arguably the world’s biggest AI powerhouse, Google, cast doubt on the value of explainable AI.

After all, Peter Norvig suggested, humans aren’t very good at explaining their decision-making either.
– Google’s research chief questions value of ‘Explainable AI’

If you think of an AI as a replacement for a human, then it is tempting to say: “humans can’t explain their decision-making effectively, and neither can neural networks, so why hold the AI to a higher standard?”

For certain moonshot projects (such as autonomous vehicles) the AI is acting as a replacement for a human.  However, the vast majority of AI projects will not be these moonshots.  They will be smaller, targeted machine learning efforts that augment and aid humans in their decision making: helping to decide how to optimize a production line, or when to allow a pneumonia patient to go home.  For these projects explainability is a requirement.  It is the human who has ultimate responsibility for the final decision, it is a human who has to deal with the consequences, it is a human’s cognitive biases that need to be overcome, and it is a human who might wind up in court or in front of their regulator.
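As a sketch of what “AI as an aide” can mean in practice (all names and data below are invented, and a simple linear model stands in for whatever the real system would be): the model’s risk estimate is handed to the clinician together with a per-feature breakdown, so the human who owns the discharge decision can sanity-check it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical discharge-risk aide for pneumonia patients.
feature_names = ["age", "oxygen_saturation", "respiratory_rate", "has_asthma"]

# Stand-in historical data; in reality these would be patient records and outcomes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 2] - X[:, 1] + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(patient):
    """Return the model's risk estimate plus each feature's signed contribution."""
    risk = model.predict_proba(patient.reshape(1, -1))[0, 1]
    # For a linear model, contribution = weight * feature value.
    contributions = model.coef_[0] * patient
    return risk, dict(zip(feature_names, contributions.round(2)))

risk, drivers = explain(X[0])
print(f"Estimated risk: {risk:.2f}")
print("Drivers:", drivers)   # the human still makes, and answers for, the final call
```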
