“Your watch knows you’ll get sick, but it doesn’t know why”


The Hustle news summary included this today: “Your watch knows you’ll get sick, but it doesn’t know why.”

As the use of AI spreads, we will see more and more articles like this: written by the general-interest media for a broad audience and focused on the practical realities of a specific application of machine learning.

If we read this article from an XAI (explainable AI) perspective, here are the takeaways:

Laymen are recognizing the black box problem:

As the general press and public become more aware of specific applications of AI, they are recognizing the black box problem even when they aren’t familiar with the terms “black box” and “XAI”.  The headline here makes this point well.

Pace of commercialization is accelerating:

Innovators are aggressively moving to take machine learning applications into production use.  This is true even when the stakes are high, as with diabetes diagnosis.  Cardiogram is promoting this use of AI even as some in the scientific community are suggesting caution:

“This combines features of the black box of algorithms and the black box of biology … It’s unconvincing and shaky. At best it would be considered hypothesis-generating.”
— Eric Topol quoted in With AI, Your Apple Watch Could Flag Signs of Diabetes

Need to distinguish pattern recognition from action recommendation:

The Cardiogram co-founder makes a fair point:

Ballinger is quick to counter these kinds of criticisms. If your wearable tells you you’re at increased risk for diabetes, and you go to the doctor and get diagnosed by traditional means, then you’re still getting the standard quality of care, he says. So what if it’s a black box that gets you in the door?
— With AI, Your Apple Watch Could Flag Signs of Diabetes

However, this argument assumes that the naive end user understands what is being communicated to them.  Remember: what is said is not always what is heard.

If the message delivered by the AI is “go see your doctor to explore whether you are at risk of diabetes,” that could be helpful and appropriate.

However, if the message is “there is an 85% chance you have diabetes,” the typical patient will translate that to “I have diabetes,” and if the patient then acts on that assumption, the result could be harmful.

There is a lot of opportunity for things to go wrong between a machine learning model correctly identifying a pattern and a human taking appropriate action.  Getting value from AI is often confounded as you go from pattern to action.

You only need to look at the headline “The Apple Watch can detect diabetes with an 85% accuracy, Cardiogram study says” to see how the typical reaction to this AI’s results might go straight to “I am diabetic” while skipping the key step of “without assuming anything yet, go consult your doctor.”
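
To see how far apart “model accuracy” and “chance I have diabetes” can be, here is a back-of-the-envelope Bayes calculation in Python.  The numbers are illustrative assumptions only, not Cardiogram’s published figures: we treat the reported 85% as both sensitivity and specificity, and assume a base rate of diabetes of roughly 9% among adults.

# A minimal sketch of why "85% accuracy" does not mean "an 85% chance
# I have diabetes". All numbers below are illustrative assumptions,
# not Cardiogram's actual figures.

sensitivity = 0.85  # assumed: P(flagged | has diabetes)
specificity = 0.85  # assumed: P(not flagged | no diabetes)
prevalence  = 0.09  # assumed base rate of diabetes in the screened population

# Bayes' theorem: P(diabetes | flagged)
#   = P(flagged | diabetes) * P(diabetes) / P(flagged)
p_flagged = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_flagged

print(f"P(diabetes | flagged) = {ppv:.0%}")  # roughly 36%, not 85%

Under those assumptions, a positive flag corresponds to roughly a one-in-three chance of diabetes, which is exactly why “without assuming anything yet, go consult your doctor” is the right message and “I am diabetic” is not.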

An explanation would be very helpful:

We agree that even a black box system can be valuable if the right message is successfully delivered.  Obviously, it would be even more valuable to have an explanation of what is going on inside the black box.

With an explanation, Cardiogram could tell users why the model flagged them, making the “go see your doctor” message both more credible and harder to mishear.
