Generation can help validate AI systems

A growing number of AIs can generate data, such as this Microsoft AI that draws birds from a simple text description.

What’s especially interesting is how the bot can fill in the blanks when specific details aren’t mentioned – basically, it has a little common sense and imagination of its own, thanks to its training data. In the bird example, the bot will usually draw a bird sitting on a tree branch even if that’s not stated in the text, because the images originally fed to it often showed something similar.

What is intriguing from a validation and explainability point of view is that through data generation (creating new images) we learn insights about the training data (it usually shows birds on tree branches). This raises questions such as: will any AI trained on this same data set generalize to handle birds that are not in trees?
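This probing idea can be sketched in miniature. The toy model below is purely hypothetical (the context labels and their frequencies are invented for illustration): it "trains" by memorizing the empirical distribution of background contexts in a set of training images, and sampling from it then surfaces the dominant bias, much as the bird bot keeps drawing tree branches.

```python
import random
from collections import Counter

# Hypothetical training data: the context of each training image.
# The 70/20/10 split is an invented stand-in for a real data set's bias.
TRAINING_CONTEXTS = (
    ["on a tree branch"] * 70
    + ["on the ground"] * 20
    + ["in flight"] * 10
)

def fit(contexts):
    """'Train' by recording empirical context frequencies."""
    counts = Counter(contexts)
    total = sum(counts.values())
    return {ctx: n / total for ctx, n in counts.items()}

def generate(model, n, seed=0):
    """Sample the contexts of n generated images from the model."""
    rng = random.Random(seed)
    contexts, weights = zip(*model.items())
    return rng.choices(contexts, weights=weights, k=n)

model = fit(TRAINING_CONTEXTS)
samples = generate(model, 1000)
# Inspecting many generated samples reveals the training-data bias:
print(Counter(samples).most_common(1)[0][0])  # → "on a tree branch"
```

A real validation workflow would replace the toy sampler with the actual generative model and an attribute detector, but the principle is the same: generate many outputs, tally their properties, and treat the skew as evidence about what the training set emphasized.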

In some sense the examples generated by this AI are revealing the essence of what is communicated by the training data set, which is a type of explanation.
