Can Optical Illusions fool Artificial Intelligence too?
Deep neural networks (DNNs) have made the news recently for their use in creating convincing fake videos of politicians and celebrities. You may not be aware, however, that you have most likely already used the same technology in a far less controversial form: the translation app on your smartphone.
These examples are just the tip of the iceberg. In the coming years, this exciting technology promises to add color automatically to black-and-white photographs, translate a photograph of a menu into the language of your choice and, perhaps most impressively of all, transform medical image analysis, enabling faster and more accurate diagnosis of injuries and disease and, very possibly, treatment too.
It all sounds impressive, but what if this technology, based loosely on the functioning of neural networks in the human brain, is also vulnerable to some of the brain's limitations: misinterpretation and 'human' error? Can this technology be 'deceived' into making false predictions? And, if so, what are the implications both for DNNs and for research into the functioning of biological neural networks in the brain itself?
These are the fascinating questions that a team of leading scientists, including Ritsumeikan's own Professor Akiyoshi Kitaoka of the College of Comprehensive Psychology, College of Letters, recently set out to answer.
The team used PredNet, a DNN-based next-frame video prediction program, as the basis of their research. Capable of unsupervised learning, PredNet was first shown videos of a rotating propeller, then tested on whether, after this period of learning, it could accurately predict the next frame of a previously unseen video that had been paused. It was also shown various still control images to check that it would not falsely predict movement in cases where human observers would expect no change.
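The logic of this test can be illustrated with a toy sketch. The stand-in predictor below is a simple pixel-wise extrapolation rule, not PredNet itself (which is a deep recurrent network trained on real video); it serves only to show the idea of scoring "predicted motion" on a moving stimulus versus a static control:

```python
import numpy as np

def extrapolate(f_prev, f_curr):
    """Pixel-wise linear extrapolation of the next frame.

    A toy stand-in for a learned next-frame predictor such as PredNet
    (which is a deep network, not this simple rule).
    """
    return 2.0 * f_curr - f_prev

def motion_score(f_curr, f_pred):
    # Mean absolute predicted change; 0 means "no motion expected".
    return float(np.abs(f_pred - f_curr).mean())

# Moving stimulus: a diagonal pattern shifting one pixel per frame.
frames = [np.roll(np.eye(8), shift=k, axis=1) for k in range(3)]
pred_moving = extrapolate(frames[0], frames[1])

# Static control: the same frame repeated, like the paused test video.
static = frames[0]
pred_static = extrapolate(static, static)

print(motion_score(frames[1], pred_moving))  # > 0: motion predicted
print(motion_score(static, pred_static))     # 0.0: no motion predicted
```

The interesting question the researchers asked is what such a predictor does when shown a still image that merely *looks* like it is moving to human eyes.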
Once satisfied with the software's predictive abilities, the team presented it with Professor Kitaoka's renowned 'Rotating Snakes' image to test whether it, like a human observer, would be 'fooled' into predicting movement.
The results, published in Frontiers in Psychology (March 15, 2018), were astonishing.
Not only did PredNet classify Professor Kitaoka's Rotating Snakes image as rotating, but, when presented with other versions of the illusion, whose altered color schemes elicit rotation in different directions, the software's predictions again matched human perception.
The results amount, in effect, to the discovery of sensory illusion in the machine, with profound implications for future research – supporting, in the words of the team, ‘the exciting idea that the mechanism assumed by the predictive coding theory is one of basis of motion illusion generation.’
So, if you thought that two-dimensional optical illusions were the exclusive domain of childhood, of hours spent staring at a page wondering whether the illusion could be made to appear or disappear, think again. By helping researchers and programmers anticipate predictive error, such illusions may in fact contribute significantly to a safer, healthier world for all.
Aside from their more serious applications and extensive contributions to cross-disciplinary research spanning various academic domains, Professor Kitaoka's images have also been adopted by pop and indie music culture in the form of
You may be familiar with Professor Kitaoka’s strawberry illusion too:
More about Professor Kitaoka:
Research Database page:
Professor Kitaoka's English 'About' page:
Professor Kitaoka's Japanese page:
(the latter includes a wealth of other images to challenge your sense of static!)