XAI lesson from Google Clips


Josh Lovejoy shared an insightful post describing what the team learned while building Google Clips, a machine-learning-powered consumer camera. Viewed through an XAI lens, it offers an example of a contrastive explanation that communicates boundary conditions.

Google Clips is an intelligent camera designed to capture candid moments of familiar people and pets. It uses completely on-device machine intelligence to learn to only focus on the people you spend time with, as well as to understand what makes for a beautiful and memorable photograph.
—  The UX of AI

This product was unconventional, and it was certain to be apparent to users that it was powered by adaptive predictive logic.

Rather than making the system as much of a black box as possible to simplify the user experience, the team found that exposing more of its 'workings' than they had originally assumed would be optimal helped users understand and accept the product.

We made sure that the user had the final say in curation; from the best still frame within a clip to its ideal duration. And we showed users more moments than what we necessarily thought was just right, because by allowing them to look a bit below the ‘water line’ and delete stuff they didn’t want, they actually developed a better understanding of what the camera was looking for, as well as what they could confidently expect it to capture in the future.
—  The UX of AI

By showing the user captured 'moments' that the system expects will be rejected and deleted, the product provides a contrastive explanation: examples from both sides of the predicted decision boundary illustrate the workings inside the black box.
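This 'water line' idea can be sketched in code. The following is a minimal, hypothetical illustration (the function and clip names are invented, not the actual Clips implementation): alongside the clips the model is confident about, a few clips just below the confidence threshold are also surfaced, so the user sees contrastive examples from both sides of the boundary.

```python
# Hypothetical sketch of contrastive surfacing: show above-threshold clips
# plus a few just below the threshold, so users can infer what the
# camera is "looking for" by seeing both kept and near-miss moments.

def surface_moments(scored_clips, threshold=0.8, below_count=2):
    """Return clips to show the user: all clips scoring at or above the
    threshold, plus the top few that fall just below it."""
    ranked = sorted(scored_clips, key=lambda c: c[1], reverse=True)
    above = [c for c in ranked if c[1] >= threshold]
    below = [c for c in ranked if c[1] < threshold][:below_count]
    # The below-threshold items act as contrastive examples the user
    # can review and delete, revealing where the boundary sits.
    return above + below

clips = [("dog_jump", 0.95), ("blurry_wall", 0.10),
         ("kid_smile", 0.88), ("half_face", 0.72), ("floor", 0.35)]
print(surface_moments(clips))
```

Deleting the near-miss items then doubles as feedback: the user both curates the results and learns what the camera can confidently be expected to capture.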
