Serendipity, heatmap explanations and medical insights


By looking at the human eye, Google’s algorithms were able to predict whether someone had high blood pressure or was at risk of a heart attack or stroke
—  Washington Post

One of the most powerful things about machine learning is its ability to see patterns and make connections that are not at all obvious to humans.  For example, it is not obvious that we should look at the retina to assess heart disease risk.  Google's team found this connection as a side effect of a project aimed at predicting eye diseases.  Now we have the potential for a lower-cost, less invasive technique for diagnosing heart disease.

Sometimes the surprising connections that pop out of machine learning models are misleading, coincidental correlations; sometimes they are genuine new insights.  Interpretable ML techniques allow us to distinguish between the two and build value from the genuine insights.  In this case, for example:

Google’s technique generated a “heatmap” or graphical representation of data which revealed which pixels in an image were the most important for predicting a specific risk factor. For example, Google’s algorithm paid more attention to blood vessels for making predictions about blood pressure.
—  USA Today

Google then used these heatmaps to gather feedback from human domain experts.
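The articles don’t spell out exactly how the heatmaps were computed (the paper, quoted further down, says the team used soft attention).  As a simpler illustration of the general idea — asking a trained model which pixels mattered most for a prediction — here is a minimal gradient-saliency sketch.  The model, the regression head and the input image are all placeholders, not Google’s actual network or data.

# A minimal sketch (not Google's implementation) of a gradient-based
# saliency heatmap: the gradient of the predicted risk factor with
# respect to the input pixels shows which pixels the model relies on.
# The model and the "fundus" image below are stand-ins.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)                  # stand-in for a trained risk-factor model
model.fc = torch.nn.Linear(model.fc.in_features, 1)    # single regression output, e.g. blood pressure
model.eval()

fundus = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder retinal fundus image

prediction = model(fundus)          # predicted risk factor
prediction.backward()               # d(prediction) / d(pixels)

# Collapse the colour channels and normalise to [0, 1] to get the heatmap.
heatmap = fundus.grad.abs().max(dim=1).values.squeeze(0)
heatmap = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
print(heatmap.shape)                # (224, 224) per-pixel importance map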

It was good to see that the team invested in providing and validating explanations as part of their project.  We expect the trend towards providing explanations to continue to accelerate.

For those interested in the details behind the technique, but who don’t have a paid subscription to “Nature Biomedical Engineering”, the study PDF is also available from this link:

To better understand how the neural network models arrived at the predictions, we used a deep learning technique called soft attention [30–32] with a different neural network model with fewer parameters compared to Inception-v3. These small models are less powerful than Inception-v3, and were used only for generating attention heatmaps and not for the best performance results observed with Inception-v3. For each prediction shown in Figure 2, a separate model with identical architecture was trained. The models were trained on the same training data as the Inception-v3 network described above, and the same early stopping criteria were used.
—  Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning
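The quote names soft attention as the explanation technique: a small network learns an attention map over spatial locations, and its prediction is made from the attention-weighted features, so the map itself doubles as the heatmap.  The sketch below illustrates that idea generically; the layer sizes and architecture are assumptions for illustration, not the authors’ models.

# A rough sketch of the soft-attention idea described in the paper: a
# small network produces a per-location attention map over image
# features, and the prediction is made from the attention-weighted
# features. Generic illustration only; all layer sizes are assumptions.
import torch
import torch.nn as nn

class SoftAttentionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                  # small feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = nn.Conv2d(64, 1, kernel_size=1)     # one attention score per location
        self.head = nn.Linear(64, 1)                    # risk-factor regression head

    def forward(self, x):
        f = self.features(x)                             # (B, 64, H, W)
        scores = self.attn(f).flatten(2)                 # (B, 1, H*W)
        attn = torch.softmax(scores, dim=-1)             # soft attention over locations
        pooled = (f.flatten(2) * attn).sum(dim=-1)       # attention-weighted feature vector
        heatmap = attn.view(x.size(0), 1, *f.shape[2:])  # (B, 1, H, W) -- the heatmap
        return self.head(pooled), heatmap

model = SoftAttentionNet()
pred, heatmap = model(torch.rand(1, 3, 224, 224))        # placeholder fundus image
print(pred.shape, heatmap.shape)                         # [1, 1] and [1, 1, 56, 56]

Because the attention weights sum to one and directly gate which locations contribute to the prediction, the resulting map can be shown to domain experts as-is, which is what made it useful for gathering clinical feedback.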
