
The AI paradox: smart algorithms, dumb data

Written by Martin de Munnik | Nov 12, 2025 8:53:46 AM

AI and the brain: we understand enough about the brain to know that AI doesn't

Artificial Intelligence is rapidly taking over the world of neuromarketing. According to the article Neuro-Marketing Meets AI: How Brain Science is Changing Digital Ads (October 2025), we are standing on the verge of a revolution. Algorithms will soon be able to infer what people feel and what they are likely to buy directly from brain activity.

But how much do we actually know about the brain, and how much could AI possibly understand? We know enough to realize that real decisions happen deeper than what we can see or measure. Anyone familiar with AI knows that it mainly looks for, and learns from, patterns, while the brain looks for, and learns from, meaning. Properly trained, AI can recognize patterns in data (shapes, colors, words, faces) without knowing what they mean or how they feel. The brain, on the other hand, instantly links the same stimuli to meaning, experience, and emotional value.

Let’s look at what the article with the bold headline actually refers to, and whether it holds up when Ms. Laura J. Bal writes: “AI can analyze facial expressions, track eye movement, and even interpret brainwaves to predict engagement.”
That may be true, but does engagement lead to purchase? Not necessarily, and judging by the brain activation of the average Effie-winning campaign among the 700 commercials we studied with fMRI, definitely not. You don't need to feel engaged with Oral-B to buy a tube of toothpaste. Conversely, you can feel deeply involved in someone's misfortune without being able or willing to act.

After fifteen years of looking into brains and studying over 1,200 campaigns, we can explain why.

Eye-tracking: eyes look, but the brain sees

Eye-tracking shows where we look, not what we see. It records eye movements and focus but not meaning or emotion. Research (Holmqvist, 2022) shows that fixations correlate with attention, not with preference or choice. Our own eye-tracking data confirm that people mostly look at other people, and that text attracts attention. But looking is not buying. Only when visual information is relevant enough to reach the limbic system does it gain meaning and motivational value. And even then, purchase intent is not guaranteed.

AI models trained on eye-tracking data learn where the eye focuses, not what the brain thinks of it. 
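To make that concrete, here is a minimal, hypothetical sketch with synthetic data (not our research pipeline, and all numbers are made up): a model fitted on eye-tracking data only ever sees where people looked, so the only thing it can learn to predict is where people will look.

```python
# Minimal illustrative sketch (hypothetical data): a model trained on
# eye-tracking only ever sees WHERE people looked, so its output can only
# be a fixation prediction, never a preference or a purchase.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features for 200 ad regions: [contains_face, contains_text, contrast]
X = rng.random((200, 3))

# Target: fixation counts per region (what eye-tracking actually records).
# Note there is no column for "meaning", "reward" or "purchase" anywhere.
y = 5.0 * X[:, 0] + 3.0 * X[:, 1] + rng.normal(0, 0.5, 200)

# Ordinary least squares: learn which region features attract the eye.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

new_region = np.array([1.0, 0.0, 0.6])   # a face, no text, medium contrast
predicted_fixations = new_region @ coef  # how often people will LOOK at it

# The model can say "people will look here"; it has no basis to say
# "people will value this", because valuation was never in the training signal.
print(round(float(predicted_fixations), 2))
```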

Facial coding: expressions are for communication

Facial expressions are social signals. We don't smile because we are happy; we smile because we want to communicate that we are happy, or want to appear to be. As Lisa Feldman Barrett (2017) convincingly demonstrated, emotions are not universal reflexes but contextual constructions. We laugh because others laugh (the secret behind the sitcom laugh track). We applaud because others applaud (known as the "studio claque" in live shows).

Facial coding therefore detects what people (often unconsciously) want to convey to others, not what they actually feel, and certainly not what they will do. AI trained on such data learns social conventions, not purchase intentions.

EEG: measuring the surface, not the source

EEG measures electrical activity at the scalp, generated mainly by the surface of the brain. It captures attention and arousal, but not the activity of the deeper structures where motivation originates. Venkatraman et al. (2015) concluded that EEG correlates with attention but does not predict behavior when reward activation is not included.

AI that analyzes EEG signals can detect engagement, but not desire. It registers activity, not decisions. It can tell that someone is engaged, but not whether that engagement is positive or negative. In that sense, even measuring engagement with a brand is of little value on its own. In our MRI studies, we found that engagement can be a strong driver of brand preference, but only when it is positive.
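Here is another minimal, hypothetical sketch with synthetic data (not a real EEG pipeline): band power extracted from a surface signal can flag that a viewer is aroused or engaged, but the feature itself carries no sign, so positive versus negative remains invisible to the model.

```python
# Minimal illustrative sketch with synthetic data (not a real EEG pipeline):
# band power can flag "engaged", but says nothing about whether the
# engagement is attraction or aversion.
import numpy as np

fs = 256                                 # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)             # 10 seconds of signal
rng = np.random.default_rng(1)

# Synthetic "EEG": background noise plus a beta-band (20 Hz) component
# whose amplitude stands in for engagement.
eeg = rng.normal(0, 1, t.size) + 1.5 * np.sin(2 * np.pi * 20 * t)

# Power spectrum via FFT, then power in the beta band (13-30 Hz).
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
beta_power = spectrum[(freqs >= 13) & (freqs <= 30)].sum()

rel_beta = beta_power / spectrum.sum()
engaged = rel_beta > 0.3                 # crude, illustrative threshold

# We can label the viewer "engaged", but nothing in rel_beta tells us whether
# that engagement is positive or negative, let alone whether it leads to purchase.
print("engaged:", bool(engaged))
```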

The AI paradox: smart algorithms, dumb data

In essence, most AI systems learn from what people say, click, or rate. They learn from the kind of data produced by what Daniel Kahneman called System 2: the conscious, deliberate, rational mind. Yet behavior arises in System 1: fast, intuitive, and emotional.

Almost all AI in the world, certainly the kind that promises to "make our lives easier" on the basis of surprisingly modest data, is trained on System 2 information. It has learned how people explain their behavior, not why they perform it. It predicts what people will say, not what they will do.
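A last, deliberately simple sketch with made-up numbers (nothing here comes from our studies) shows what that means in practice: a model trained on stated intent can score well on what people say while scoring much lower on what they actually do, whenever the two only weakly align.

```python
# Minimal illustrative sketch with made-up numbers: training on System 2 data
# (stated intent) reproduces what people SAY, not what they DO.
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Hypothetical survey features (e.g., ratings of liking, relevance, recall).
X = rng.normal(0, 1, (n, 3))
weights = np.array([1.0, 0.8, 0.5])

# Stated intent ("I would buy this") follows the survey answers closely...
stated = (X @ weights + rng.normal(0, 0.5, n)) > 0

# ...but actual purchase only weakly tracks the stated intent
# (here: 30% follow their statement, the rest behave unpredictably).
actual = np.where(rng.random(n) < 0.3, stated, rng.random(n) > 0.5)

# The simplest possible classifier trained on the stated labels:
# predict "buy" whenever the weighted survey score is positive.
predicted = (X @ weights) > 0

print("accuracy on what people say:", round((predicted == stated).mean(), 2))
print("accuracy on what people do: ", round((predicted == actual).mean(), 2))
```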

Predicting what people will say can have value, but don't be misled by advisors, bloggers, agencies, and experts who are unaware of the current state of neuroscience.
Only when AI starts learning from implicit, automatic responses, gains an understanding of the neural patterns of reward and motivation, and learns to judge rather than merely recognize patterns will it become predictive instead of descriptive.

What the real brain teaches us

Decisions do not arise from reasoning but from valuation. The ventromedial prefrontal cortex weighs reward; the nucleus accumbens predicts value; the insula warns against loss. These processes occur automatically and unconsciously. Measuring them directly with fMRI or validated implicit tests allows us to predict behavior reliably. Not because people say so, but because their brains show so.

With this knowledge, AI certainly holds promise, but only when it learns from the right signals. Eyes look, faces communicate, the cortex calculates; yet the deep brain decides. As long as AI learns from what we show instead of what we feel, it will keep guessing at behavior.

And then the question is no longer what AI can learn, but what we want it to learn.

Want to know more? 
Discover how we use neuroscience to go beyond the limits of AI: neuromarketing research at Neurensics