In an effort to make artificial intelligence more intelligent, scientists have taken a step toward training machines to decipher optical illusions. The advance could help vision systems become more observant, much as humans are, and thereby broaden their abilities.

Because an artificial intelligence must be programmed in a specific way to perceive optical illusions, researchers first need to understand how the human brain processes them. That research is still ongoing, according to a scientist at Brown University in the US. Illusions come in many types; one is the contextual phenomenon, or contextual illusion, in which processing and perception depend on context alone. That is, the information presented alongside the stimulus itself creates the illusion, whether it concerns the perception of colour, size, or shape.

For example, a white circle on a black background will appear to differ in size from a black circle on a white background, even when the two have the same radius. "There's growing consensus that optical illusions are not a bug but a feature. They may represent edge cases for our visual system, but our vision is so powerful in day-to-day life and in recognising objects," said Thomas Serre, an associate professor at Brown University. For the research, the team began with a computational model of the human brain that shares anatomical and physiological similarities with it. The model's processing mirrored that of the human visual cortex and was intended to reveal how cortical neurons pass information to one another while handling stimuli such as contextual illusions.
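The two-circle stimulus described above is easy to reproduce. Below is a minimal sketch (not code from the study; the function name and sizes are illustrative) that builds both images with numpy and confirms the circles cover exactly the same pixels despite the opposite contrast:

```python
import numpy as np

def circle_stimulus(size=64, radius=16, fg=1.0, bg=0.0):
    """Square image with a centred circle of value `fg` on a `bg` field."""
    yy, xx = np.mgrid[:size, :size]
    centre = (size - 1) / 2
    mask = (yy - centre) ** 2 + (xx - centre) ** 2 <= radius ** 2
    img = np.full((size, size), bg)
    img[mask] = fg
    return img

# Identical radius, opposite contrast: white-on-black vs black-on-white.
white_on_black = circle_stimulus(fg=1.0, bg=0.0)
black_on_white = circle_stimulus(fg=0.0, bg=1.0)

# The discs occupy exactly the same pixels, yet human observers tend to
# judge the white disc as the larger of the two.
assert (white_on_black == 1.0).sum() == (black_on_white == 0.0).sum()
```

The illusion lives entirely in the observer: the images are pixel-for-pixel complements, so any perceived size difference must come from how the visual system processes context.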

One result suggested that a feedback mechanism operates between neurons, with the hypothesised information passed across synapses to neighbouring neurons. These feedback connections can modify the response of a central neuron, Serre said. They are also absent from standard deep-learning algorithms, which have no way to feed such hypothesised information back into the system: they rely on feedforward architectures, however complex, to identify images and speech. After constructing the model, the team exposed it to various contextual optical illusions and adjusted it until it responded the way a neurophysiological model of a human would. Eventually, the model was able to perceive contextual optical illusions much as humans do.
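The architectural difference can be sketched in a few lines. The toy dynamics below are purely illustrative (the weights, update rule, and names are assumptions, not the study's model): a feedforward unit responds only to its own input, while a unit with recurrent surround feedback is suppressed by its neighbours' activity, so the same input yields different responses in different contexts.

```python
import numpy as np

def feedforward_response(drive):
    # Pure feedforward: each unit depends only on its own input,
    # so context cannot alter its response.
    return np.maximum(drive, 0.0)

def recurrent_response(drive, w_context=0.2, steps=20):
    # Toy recurrent dynamics: each unit is suppressed by the summed
    # activity of its neighbours -- the contextual ingredient that
    # feedforward deep nets lack.
    r = np.maximum(drive, 0.0)
    for _ in range(steps):
        context = w_context * (r.sum() - r)   # neighbour activity
        r = np.maximum(drive - context, 0.0)  # surround suppression
    return r

centre_alone = recurrent_response(np.array([1.0]))
centre_in_context = recurrent_response(np.array([1.0, 1.0, 1.0]))
# Same drive to the central unit, but a weaker response once its
# neighbours are active -- a context effect the feedforward version
# can never show.
```

Here `centre_alone[0]` stays at 1.0 while `centre_in_context[0]` settles below it, whereas `feedforward_response` returns identical values for the central unit in both cases.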

To check whether the model contained extra complications or unnecessary parts, the researchers removed some of its components and tested it again. This time its perception no longer matched that of a human, suggesting the model carried no superfluous machinery and was constructed as simply as possible.
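The logic of that lesioning test can be sketched as follows. This is a hedged, self-contained illustration (the model, weights, and flag are assumptions, not the study's code): ablating the feedback component removes the contextual effect, so the ablated model's output diverges from the context-sensitive, human-like one.

```python
import numpy as np

def model_response(drive, feedback_on=True, w=0.2, steps=20):
    # Toy model: optional surround feedback suppresses each unit
    # according to its neighbours' activity.
    r = np.maximum(drive, 0.0)
    if not feedback_on:
        return r  # lesioned model: context is ignored
    for _ in range(steps):
        context = w * (r.sum() - r)
        r = np.maximum(drive - context, 0.0)
    return r

stimulus = np.array([1.0, 1.0, 1.0])

full = model_response(stimulus, feedback_on=True)      # context-sensitive
ablated = model_response(stimulus, feedback_on=False)  # context-blind

# With the feedback component removed, the central unit no longer
# responds to its surround, so the human-like contextual effect vanishes.
assert ablated[0] > full[0]
```

If removing a part left the behaviour unchanged, that part would be redundant; here every removal changes the output, which is the sense in which the model is minimal.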
