Strategic advisors (Bain, Accenture, etc.) are sending two messages to the C-suite:
- Leveraging your data via AI must be part of your strategy
- To future-proof against regulatory and social backlash, that AI must be responsible and trustworthy
As Accenture puts it:
… we believe that companies will be at a competitive advantage if they embrace Explainable AI in order to future-proof their AI systems from a regulatory point of view … the reality is that consumers do not want to be faced with an indifferent “AI shrug” and will demand explanations, seek recourse or vote with their feet …
PwC makes a similar point in its 2018 AI roadmap:
Many black boxes will open … We expect organizations to face growing pressure from end users and regulators to deploy AI that is explainable, transparent, and provable. That may require vendors to share some secrets. It may also require users of deep learning and other advanced AI to deploy new techniques that can explain previously incomprehensible AI.
Big organizations are listening to this advice. We are seeing firms such as Capital One commit to XAI initiatives. The big vendors that obsessively cater to the C-suite are following suit; Oracle's growing investment in XAI is one example.
Management is starting to require this level of transparency even if it makes maximizing AUC harder: delivering business utility outweighs squeezing out the last bit of model performance.