Match Explanation and Audience

Data scientists tend to think in terms of rigorous ideals. They like clear-cut goals and provably correct answers, so it is natural to assume that a given model or a specific result has a single definitive explanation.

However, this fails to take into account the range of stakeholders for the broader system that incorporates the model. Consider a system that approves mortgage loan applications; its stakeholders include:

  • The original data scientist who selected and tuned the model
  • The maintaining data scientist who monitors and re-tunes the model
  • The devops engineer or system engineer who troubleshoots specific problem reports
  • The product line manager responsible for the mortgage product line's P&L
  • The regulator who wants to ensure there is no bias in the system
  • The analyst in the CDO office who is deciding whether it is worthwhile to buy a supplemental data source to enhance the training inputs
  • The customer support person who deals with the applicant on the phone
  • The applicant themselves

They want different explanations that support different goals:

  • The data scientist wants to validate that her model is generalizable and robust.
  • The support engineer wants clues as to whether a specific erroneous result is a normal outlier or a failure that indicates new bad data flowing in.
  • The regulator does not care about the details of the internals but cares deeply if the system will produce biased outcomes.
  • The customer support person wants details about why a specific application was denied.
  • The denied applicant wants a simplified explanation of what to do differently to get approved next time.
  • And so on, for each of the remaining stakeholders.
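To make the idea concrete, here is a minimal sketch of one model serving two of these audiences. The model, its coefficients, and the applicant data are all hypothetical (a toy logistic model, not any real lender's scorecard): the data scientist gets the full per-feature contribution breakdown, while the applicant gets a single actionable sentence derived from the same numbers.

```python
import math

# Hypothetical linear credit model -- coefficients are illustrative only.
COEFFS = {
    "credit_score": 0.8,
    "debt_to_income": -1.2,
    "years_employed": 0.4,
}
INTERCEPT = -0.5

def score(features):
    """Logistic score: the toy model's approval probability."""
    z = INTERCEPT + sum(COEFFS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def explain_for_data_scientist(features):
    """Full per-feature contributions (coef * value), sorted by magnitude --
    the detailed view used to judge robustness and debug inputs."""
    contribs = {k: COEFFS[k] * v for k, v in features.items()}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

def explain_for_applicant(features):
    """One actionable sentence: the feature hurting the score most."""
    contribs = {k: COEFFS[k] * v for k, v in features.items()}
    worst = min(contribs, key=contribs.get)
    return f"Your application was most hurt by '{worst}'; improving it would help most."

applicant = {"credit_score": 0.2, "debt_to_income": 1.5, "years_employed": 0.5}
print(score(applicant))                      # one underlying score...
print(explain_for_data_scientist(applicant))  # ...full detail for the modeler
print(explain_for_applicant(applicant))       # ...plain language for the applicant
```

The point of the sketch is that both explanations are projections of the same underlying attribution; what changes is the vocabulary, the level of detail, and the action the audience can take.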

FICO’s internal XAI framework is an example of this approach: it provides one type of explanation to its data scientists via their analytics workbench, while providing a distinct, simpler explanation to customer service personnel via a custom interface.
