When AI Falls for Optical Illusions, It Teaches Us How the Human Brain Sees

Tokyo, Japan, 12 January 2026 – Our eyes do not always tell us the full truth. A clear example is the Moon illusion, where the Moon appears much larger near the horizon than when it is high in the sky, even though its actual size does not change and its distance from Earth barely does. Optical illusions like this remind us that human vision is not a perfect mirror of reality. Instead, it is a smart system that uses shortcuts to make sense of the world quickly.

For a long time, illusions were seen as simple mistakes made by the brain. Today, scientists view them differently. Illusions reveal how the brain filters information, focusing only on what matters most. Processing every detail around us would be overwhelming, so our brain takes in only a small portion and fills in the gaps.

Now, artificial intelligence is helping researchers understand these mental shortcuts in a new way. Surprisingly, some AI systems fall for the same optical illusions as humans. This discovery is changing how scientists think about both machine vision and the human brain.

Modern AI systems rely on deep neural networks, which are inspired by how neurons work in the brain. These systems are excellent at spotting patterns and tiny details that humans often miss. That is why AI has become so useful in areas like medical imaging and early disease detection. But when shown certain optical illusions, some of these systems behave more like humans than expected.

According to Eiji Watanabe, an associate professor of neurophysiology at the National Institute for Basic Biology, studying illusions with AI has a major advantage. Researchers cannot freely experiment on the human brain because of ethical limits, but artificial models can be tested and modified without such restrictions. This lets scientists explore how perception works in far greater detail.

Not all illusions are processed the same way by humans. Studies of people who regained sight later in life show that motion-based illusions are easier to perceive than shape-based ones. This suggests that our brains may learn to process movement earlier or more strongly than shapes. Brain scans using functional magnetic resonance imaging have also shown that different regions of the brain activate when we view different types of illusions.

One challenge in illusion research is that perception is subjective. A famous example is the viral dress photo from 2015, where people strongly disagreed on whether it was blue and black or white and gold. Because scientists often rely on people describing what they see, studying illusions objectively can be difficult.

AI offers a new solution. Many systems, including chatbots like ChatGPT, are built using deep neural networks. In recent experiments, Watanabe and his team tested whether an AI system could experience motion illusions in a way similar to humans.

They used a model called PredNet, which is based on a theory known as predictive coding. This theory suggests that the brain does not just react to visual input but first predicts what it expects to see, then corrects itself when the input differs. PredNet works in a similar way by predicting future video frames based on past ones.
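
To make the idea concrete, here is a minimal sketch of a predictive-coding loop in Python. It is not PredNet itself, only an illustration of the principle the theory describes: the system holds an internal estimate of the scene, predicts the incoming signal, and corrects itself by the prediction error. The function name, the toy "world", and the learning rate are all illustrative.

```python
import numpy as np

def predictive_coding_step(estimate, observation, learning_rate=0.1):
    """One predictive-coding update: predict, compare, correct.

    estimate    -- the model's current internal guess about the scene
    observation -- the actual incoming frame (as a flat array)
    """
    prediction = estimate               # simplest generative model: predict "more of the same"
    error = observation - prediction    # prediction error: what the model got wrong
    estimate = estimate + learning_rate * error  # nudge the internal model toward the input
    return estimate, error

# Toy demo: the "world" is a fixed frame; the model converges until errors shrink.
rng = np.random.default_rng(0)
world = rng.random(16)        # a pretend 16-pixel frame
estimate = np.zeros(16)       # the model starts knowing nothing
for step in range(50):
    estimate, error = predictive_coding_step(estimate, world)
print(f"remaining error after 50 steps: {np.abs(error).mean():.4f}")
```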

After training PredNet on videos of natural scenes, the researchers showed it the rotating snakes illusion, a static image that appears to move. The AI was fooled by the same versions of the illusion that trick human viewers. This supports the idea that predictive coding plays a key role in how we perceive motion.
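
One way to turn "the AI was fooled" into a measurement, along the lines reported for the PredNet experiments, is to run an optical-flow algorithm over the frames the network predicts from the static image: if the flow vectors circulate the way human observers report, the network has effectively hallucinated motion. Below is a hedged sketch of that style of readout using OpenCV's standard Farneback flow; the two frames here are random placeholders standing in for the network's actual predictions.

```python
import cv2
import numpy as np

# Placeholder frames: in the real analysis these would be two consecutive
# grayscale frames *predicted by the network* from the static illusion image.
predicted_frame_t0 = np.random.randint(0, 255, (160, 160), dtype=np.uint8)
predicted_frame_t1 = np.random.randint(0, 255, (160, 160), dtype=np.uint8)

# Dense optical flow between the two predicted frames (Farneback's method).
flow = cv2.calcOpticalFlowFarneback(
    predicted_frame_t0, predicted_frame_t1, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)

# flow[y, x] is the (dx, dy) motion vector at each pixel. If a static
# illusion produces rotating flow vectors in the predictions, the network
# "perceived" motion where physically there was none.
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print(f"mean predicted motion magnitude: {magnitude.mean():.3f} px/frame")
```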

However, the AI also showed clear differences. Humans can halt the illusion by fixating on one part of the image, while the apparent motion continues in their peripheral vision. PredNet could not do this because it lacks attention mechanisms: it processes the entire image at once, unlike the human brain, which can shift its focus.
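
The contrast is easy to see in miniature. An attention mechanism lets a model weight some parts of its input more heavily than others, loosely analogous to a viewer fixating on one region, while a model without one averages over everything. The toy Python comparison below uses made-up patch values and scores purely for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Four image patches, each reduced to a single feature value.
patch_features = np.array([0.9, 0.2, 0.4, 0.1])

# Uniform processing (PredNet-style): every patch contributes equally.
uniform_out = patch_features.mean()

# Attention: scores concentrate processing on one patch, loosely
# analogous to a viewer fixating on one part of the illusion.
attention_scores = np.array([3.0, 0.1, 0.5, 0.2])  # illustrative scores
weights = softmax(attention_scores)
attended_out = (weights * patch_features).sum()

print(f"uniform: {uniform_out:.3f}  attended: {attended_out:.3f}  weights: {weights.round(2)}")
```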

Some scientists are taking this research even further by combining AI with ideas from quantum mechanics. Ivan Maksymov, a research fellow at Charles Sturt University, developed an AI model inspired by quantum theory to study ambiguous illusions like the Necker cube and the Rubin vase. His system switched between different interpretations of these images over time, much like human perception does.
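
Maksymov's actual model is considerably more elaborate, but the basic intuition can be shown with the simplest quantum system there is: two states whose probabilities oscillate over time, so that the reported percept flips back and forth. The sketch below treats the two readings of the Necker cube as the two states; the flip rate and sampling schedule are arbitrary illustrative values.

```python
import numpy as np

# Toy two-level system: the two "percepts" of the Necker cube are treated as
# basis states |A> and |B>. A coupling between them makes the probability of
# each interpretation oscillate in time (a Rabi-style oscillation).
omega = 2 * np.pi / 8.0          # illustrative flip rate: one full cycle every 8 s
times = np.linspace(0, 24, 25)   # sample perception once per second for 24 s

for t in times:
    p_a = np.cos(omega * t / 2) ** 2    # probability of seeing interpretation A
    percept = "cube-from-above" if np.random.random() < p_a else "cube-from-below"
    print(f"t={t:4.1f}s  P(A)={p_a:.2f}  reported percept: {percept}")
```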

This work may also help scientists understand how vision changes in space. Research involving astronauts aboard the International Space Station shows that long periods in microgravity can alter how optical illusions are perceived. Without gravity as a reference point, depth and orientation become harder to judge.

While AI is still far from seeing the world exactly as humans do, these studies show that machines can help us understand ourselves better. By learning when AI gets fooled, scientists are uncovering the hidden rules that guide human perception. And as humans prepare for longer journeys into space, knowing when we can trust our eyes may be more important than ever.
