AI decision support systems represent a distinct style of AI implementation, with a different design approach from AIs aimed at delivering a single definitive recommendation.
Stand-alone AI systems that autonomously make final decisions are impressive and, from a data scientist’s perspective, more satisfying to build. Humans and complex multi-step systems can be messy to deal with.
However, for consequential decisions you often want a hybrid approach, with AIs supporting human decision makers. Such systems can leverage the strengths of both and better meet the broader goals.
“AI decision support system” is a phrase we will be hearing more and more. These are not systems that make specific decisions; rather, they are systems that help humans make better decisions as part of a hybrid human-computer process.
For example, consider a clinical setting with doctors making decisions about treatment based on their best hypothesis about the diagnosis.
Machine-learning-driven AIs can review a far greater number of historical cases than any human can and recognize which are most similar to the current case. Such an AI has no trouble evaluating a broad range of potential diagnoses, including seldom-seen syndromes that might not readily come to mind for the typical clinician.
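To make the idea concrete, here is a minimal sketch of that kind of case-similarity retrieval, assuming each historical case has been reduced to a simple numeric feature vector. The feature values, case IDs, and diagnoses are purely illustrative, not real clinical data or any particular system’s design.

```python
from math import sqrt

def euclidean(a, b):
    """Distance between two feature vectors of equal length."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_similar_cases(current, history, k=3):
    """Return the k historical (case_id, diagnosis) pairs closest to `current`."""
    ranked = sorted(history, key=lambda case: euclidean(current, case["features"]))
    return [(c["case_id"], c["diagnosis"]) for c in ranked[:k]]

# Illustrative historical cases, each with a hypothetical feature vector.
history = [
    {"case_id": 101, "features": [0.90, 0.10, 0.40], "diagnosis": "syndrome A"},
    {"case_id": 102, "features": [0.20, 0.80, 0.50], "diagnosis": "syndrome B"},
    {"case_id": 103, "features": [0.85, 0.15, 0.35], "diagnosis": "syndrome A"},
]

# The two closest matches to the current patient's features.
print(most_similar_cases([0.88, 0.12, 0.38], history, k=2))
# → [(101, 'syndrome A'), (103, 'syndrome A')]
```

A production system would use far richer case representations and a learned similarity measure, but the shape of the computation, ranking a large case history by closeness to the patient at hand, is the same.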
On the other hand, human doctors can pick up subtle signals that never make it into the medical information system. Perhaps a patient is embarrassed to admit to a particular symptom, or a nurse notices a patient over-reporting their pain levels. The AI would be blind to all such clues that don’t appear in the online chart.
A “decision support” system isn’t just a “decision making” system that gives a human the chance to approve the AI’s decision before it is put into practice. In designing a decision support system we often want to take a fundamentally different approach to what results we deliver.
For example, in addition to the result itself we may want to deliver an explanation of that result, so the responsible human decision maker has some basis for testing and validating the recommendation.
Alternatively, we may have the AI deliver no single recommendation at all, but rather the useful constituent elements that would feed the human’s decision making. Perhaps suggesting a ranked list of alternatives with an explanation of why each received the rank it did. Or perhaps identifying relevant factors to be considered in a decision without making any specific recommendation about what the decision should be. Or perhaps recommending information-gathering steps the human should take, a sort of Socratic method that does not presume the answer.
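The elements above can be sketched as a data structure: instead of a single answer, the system hands the human a ranked list of alternatives, each with an explanation, alongside relevant factors and suggested next steps. Every name, field, and placeholder value here is a hypothetical illustration of one possible payload, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Alternative:
    label: str          # the candidate answer, e.g. a candidate diagnosis
    score: float        # e.g. similarity to historical cases, 0..1
    explanation: str    # why this alternative received its rank

@dataclass
class DecisionSupport:
    alternatives: list      # ranked best-first, each with an explanation
    relevant_factors: list  # factors the human should weigh
    next_steps: list        # suggested information-gathering actions

def build_support(alternatives):
    """Rank alternatives best-first and package them with supporting elements."""
    ranked = sorted(alternatives, key=lambda a: a.score, reverse=True)
    return DecisionSupport(
        alternatives=ranked,
        relevant_factors=["factor X", "factor Y"],           # placeholders
        next_steps=["order test Z", "ask about symptom Q"],  # placeholders
    )

support = build_support([
    Alternative("syndrome B", 0.4, "partial match on two factors"),
    Alternative("syndrome A", 0.9, "close match to 14 historical cases"),
])
print([a.label for a in support.alternatives])
# → ['syndrome A', 'syndrome B']
```

The point of the structure is that the human, not the system, performs the final synthesis: the payload carries everything needed to reason about the decision without presuming what the decision should be.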