It is currently unclear how far regulatory regimes will go in requiring explanations for machine-learning-driven decisions. However, just as the “right to be forgotten” went from being scoffed at to being a real-world requirement, we can expect the “right to an explanation” to become a need we have to meet. The more we preemptively address this need, the less likely it is that detailed and inflexible regulatory rules will be imposed to force compliance.
Real regulation is likely to appear first in the EU, though the details of how the existing law will be translated into specific, enforceable rules remain hazy.
Even governments are starting to show concern about the increasing influence of inscrutable neural-network oracles. The European Union recently proposed to establish a “right to explanation,” which allows citizens to demand transparency for algorithmic decisions. The legislation may be difficult to implement, however, because the legislators didn’t specify exactly what “transparency” means. It’s unclear whether this omission stemmed from ignorance of the problem, or an appreciation of its complexity.
– “Is Artificial Intelligence Permanently Inscrutable?”
… summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach …
– “The Dark Secret at the Heart of AI”
… provide the data subject with the following further information necessary to ensure fair and transparent processing … the existence of automated decision-making, including profiling … meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject …
– EU General Data Protection Regulation (GDPR)