New research brings AI algorithms closer to being able to explain themselves

Researchers at the University of California, Berkeley, and the Max Planck Institute for Informatics have taken a step toward explainable artificial intelligence (AI) by designing a ‘pointing and justification’ system that enables an algorithm to point to the data it used to make a decision and justify why it used that data. As Quartz explains, the system ‘picks an idea from the mind of a machine and translates it for humans. Rather than displaying a decision as a series of mathematical equations, the machine can again do the heavy lifting to interpret its results.’ Although the proposed system works only in a specific scenario (recognizing human actions in pictures), it opens the door to future general AI algorithms that can explain their actions in a clear, easy-to-understand manner.
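To make the idea of ‘pointing and justification’ concrete, here is a deliberately simplified toy sketch (not the researchers’ actual system): attention weights are computed over a handful of labeled image regions, and the region with the highest weight becomes the evidence the model ‘points’ to, alongside a plain-language justification. The region labels, scores, and action name below are all hypothetical.

```python
import math

def softmax(scores):
    """Convert raw relevance scores into attention weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def point_and_justify(region_scores, region_labels, action):
    """Return the index of the most-attended region and a textual justification."""
    weights = softmax(region_scores)
    best = max(range(len(weights)), key=lambda i: weights[i])
    justification = (
        f"Predicted '{action}' because the model attended most to the "
        f"'{region_labels[best]}' region (weight {weights[best]:.2f})."
    )
    return best, justification

# Hypothetical relevance scores for three regions of an image
idx, text = point_and_justify(
    [0.2, 2.5, 0.7],
    ["background", "tennis racket", "crowd"],
    "playing tennis",
)
print(idx)   # index of the region the model points to
print(text)  # the human-readable justification
```

A real system of this kind would learn the region scores from data and generate the justification with a language model, but the core loop is the same: decide, point to the evidence, then explain in words.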