“Think like me, embrace the black box”

As machine learning practitioners, we must adapt our systems to fit the expectations of the world, rather than expecting the world to adapt to ours. Delivering explainable AI is part of meeting those expectations.

When you have just finished overcoming a big challenge, you don’t want to hear, “Thanks very much, your excellent work now makes it urgent to solve this next big challenge.” And you certainly don’t want to find out, “By the way, this next challenge, which will take a ton of work to resolve, will seem pointless from your perspective at the lab bench.”

This is what is happening as machine learning moves from the lab bench to the real world. As an industry we have climbed a huge hill to provide the big data, scalable computing power, and clever new algorithms that allow us to demonstrate the value of AI. Now we are being told it is not enough that the models work; they have to work in a way that is interpretable, generalizable, and robust, which means they can’t continue to be black boxes.

Making the black boxes deliver results was a big challenge; making them not be black boxes will be another.

A natural response from a data scientist who has sweated over tuning their black box is, “Sorry, non-data scientist, you just don’t get it. You only want transparency because of your own insecurities; if you thought more like me, you would be free of this irrational doubt and accept the black box.” Unfortunately, that response just doesn’t cut it.

If we want machine learning to be adopted broadly, then we need to evolve it to address the world as it is, rather than expecting the world to change to suit us. Only when we open the black box, so we can communicate clearly, demonstrate robustness, and generate insights, will we have systems that fully live up to their potential.
