AI explanations that will stand up in Court


What kind of explanation does an AI need to provide to meet the expectations of our legal system?  The authors of “Accountability of AI Under the Law: The Role of Explanation” provide a framework to think about this question.

Focusing on legally operative explanations of specific results, they propose the following framework.

When an explanation is required:

  • Impact on others:  “The decision must have been acted on in a way that has an impact on a person other than the decision maker.”
  • Recourse available: “There must be value to knowing if the decision was made erroneously … if there is no recourse for the harm caused, then there is no justification for the cost of generating an explanation”
  • Reasonable suspicion of error: “There must be some reason to believe that an error has occurred (or will occur) in the decision-making process.”

Reasons to suspect an error has occurred:

  • Inadequate inputs: If the inputs are judged potentially incomplete or untrustworthy, based on the commonly accepted understanding of causality in the subject matter and the trustworthiness of the input sources.
  • Inexplicable outcomes:  Such as when there are different decisions for two apparently identical subjects (a simple check for this is sketched after the list).
  • Distrust in integrity:  Such as when there are pre-existing, potentially consequential conflicts of interest.
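
To make the “inexplicable outcomes” trigger concrete, here is a minimal sketch (ours, not the paper’s) of how one might scan a batch of decisions for near-identical subjects who received different outcomes. The function name, tolerance, and toy data are hypothetical.

```python
import numpy as np

def find_inconsistent_pairs(X, decisions, tol=1e-6):
    """Flag pairs of near-identical subjects that received different decisions.

    X         : (n, d) array of the inputs the decision maker saw
    decisions : length-n array of 0/1 outcomes
    tol       : maximum per-feature difference to treat two subjects as "identical"
    """
    flagged = []
    n = len(decisions)
    for i in range(n):
        for j in range(i + 1, n):
            if np.all(np.abs(X[i] - X[j]) <= tol) and decisions[i] != decisions[j]:
                flagged.append((i, j))
    return flagged

# Toy example: subject 2 looks identical to subjects 0 and 1 but got the opposite decision.
X = np.array([[35, 52000.0], [35, 52000.0], [35, 52000.0], [60, 91000.0]])
decisions = np.array([1, 1, 0, 1])
print(find_inconsistent_pairs(X, decisions))   # [(0, 2), (1, 2)]
```

A flagged pair is only a reason to suspect an error, in the paper’s sense; it does not by itself show that either decision was wrong.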

A satisfactory explanation should be able to answer at least one of the following questions; which ones are required depends on the nature of the subject domain and the decision:

  • Factors considered:  “What were the main factors in a decision? … ideally ordered by significance”
  • Determinative factors: “Would changing a certain factor have changed the decision? … not whether a factor was taken into account at all, but whether it was determinative.”  (A toy sketch of this and the previous check follows the list.)
  • Discriminative logic: “Why did two similar-looking cases get different decisions, or vice versa?”  Provide sufficient insight to assess the consistency and predictability of the decision making.
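
To illustrate the first two questions, here is a minimal sketch (ours, not the paper’s) of a local explanation for one decision from a simple linear scoring model. The loan-approval setting, feature names, weights, and reference values are all hypothetical.

```python
import numpy as np

# Hypothetical linear credit-scoring model: approve if score >= 0.
feature_names = ["income", "debt_ratio", "late_payments"]
weights = np.array([0.8, -1.5, -2.0])
bias = 0.3

def decide(x):
    return weights @ x + bias >= 0

# One applicant (standardized feature values) who was denied.
x = np.array([0.5, 0.2, 0.8])
print("approved:", decide(x))          # False

# "What were the main factors?" -- per-feature contributions, ordered by magnitude.
contributions = weights * x
for i in np.argsort(-np.abs(contributions)):
    print(f"{feature_names[i]:>14}: {contributions[i]:+.2f}")

# "Would changing a certain factor have changed the decision?" -- a counterfactual
# flip test: rerun the decision with one factor reset to a reference value.
for i, name in enumerate(feature_names):
    x_cf = x.copy()
    x_cf[i] = 0.0                      # reference: an "average" applicant on this factor
    if decide(x_cf) != decide(x):
        print(f"{name} was determinative: changing it flips the decision")
```

In this toy case the late-payments factor is determinative: resetting it flips the denial to an approval, whereas resetting income or debt ratio does not, even though all three were taken into account.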

The authors argue that for most legal purposes we only need to address “local interpretability for a single result” and not “global model interpretability” or “algorithm transparency”.  So their framework focuses on that case.  See Explaining Explanations for more on different types of XAI explanations.

For regulatory rather than litigation purposes, the need may differ, and global model interpretability may be more important.  For example, regulations against systemic bias might care about global methods and results, whereas a specific litigant would focus on a specific result.  This regulatory scenario is beyond the scope of their paper.
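
As a rough illustration of that distinction (ours, not the paper’s), a regulator checking for systemic bias might look at aggregate outcomes across groups rather than at any single case. The group labels, toy data, and disparity measure below are hypothetical.

```python
import numpy as np

def approval_rate_by_group(decisions, group):
    """Aggregate, model-level view: approval rate per protected group."""
    rates = {}
    for g in np.unique(group):
        mask = group == g
        rates[str(g)] = float(decisions[mask].mean())
    return rates

# Toy data: 0/1 decisions and a protected attribute per subject.
decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates = approval_rate_by_group(decisions, group)
print(rates)                                                    # {'A': 0.75, 'B': 0.25}
print("disparity:", max(rates.values()) - min(rates.values()))  # 0.5
```

None of this tells any individual litigant why their own application was denied; that is the local, single-result explanation the authors focus on.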

The paper goes on to discuss the applicability of generating post-hoc, model-agnostic explanations.  We will comment on that section of their paper in a subsequent post.
