

Why do some of us love AI, while others hate it? The answer is in how our brains perceive risk and trust

Credit: AI-generated image

From ChatGPT crafting emails, to AI systems recommending TV shows and even helping diagnose disease, the presence of machine intelligence in everyday life is no longer science fiction.

And yet, for all the promises of speed, accuracy and optimization, there's a lingering discomfort. Some of us love using AI tools. Others feel anxious, suspicious, even betrayed by them. Why?

The answer isn't just about how the technology works. It's about how we work. We don't understand it, so we don't trust it. Human beings are more likely to trust systems they understand. Traditional tools feel familiar: you turn a key, and a car starts. You press a button, and a lift arrives.

But many AI systems operate as black boxes: you type something in, and a decision appears. The logic in between is hidden. Psychologically, this is unnerving. We like to see cause and effect, and we like being able to interrogate decisions. When we can't, we feel disempowered.

This is one reason for what's called algorithm aversion, a term coined by the marketing researcher Berkeley Dietvorst and colleagues, whose research showed that people often prefer flawed human judgment over algorithmic decision-making, particularly after witnessing even a single algorithmic error.

We know, rationally, that AI systems don't have emotions or agendas. But that doesn't stop us from projecting them onto these systems. When ChatGPT responds "too politely," some users find it eerie. When a recommendation engine gets a little too accurate, it feels intrusive. We begin to suspect manipulation, even though the system has no self.

This is a form of anthropomorphism: attributing humanlike intentions to nonhuman systems. Professors of communication Clifford Nass and Byron Reeves, along with others, have shown that we respond socially to machines, even knowing they're not human.

We hate when AI gets it wrong

One curious finding from behavioral science is that we are often more forgiving of human error than machine error. When a human makes a mistake, we understand it. We might even empathize. But when an algorithm makes a mistake, especially if it was pitched as objective or data-driven, we feel betrayed.

This links to research on expectation violation, which occurs when our assumptions about how something "should" behave are disrupted. It causes discomfort and loss of trust. We trust machines to be logical and impartial. So when they fail, such as misclassifying an image, delivering biased outputs or recommending something wildly inappropriate, our reaction is sharper. We expected more.

The irony? Humans make flawed decisions all the time. But at least we can ask them "why?"

For some, AI isn't just unfamiliar, it's existentially unsettling. Teachers, writers, lawyers and designers are suddenly confronting tools that replicate parts of their work. This isn't just about automation, it's about what makes our skills valuable, and what it means to be human.

This can activate a form of identity threat, a concept explored by social psychologist Claude Steele and others. It describes the fear that one's expertise or uniqueness is being diminished. The result? Resistance, defensiveness or outright dismissal of the technology. Distrust, in this case, is not a bug; it's a psychological defense mechanism.

Craving emotional cues

Human trust is built on more than logic. We read tone, facial expressions, hesitation and eye contact. AI has none of these. It might be fluent, even charming. But it doesn't reassure us the way another person can.

This is similar to the discomfort of the uncanny valley, a term coined by Japanese roboticist Masahiro Mori to describe the eerie feeling when something is almost human, but not quite. It looks or sounds right, but something feels off. That emotional absence can be interpreted as coldness, or even deceit.

In a world full of deepfakes and algorithmic decisions, that missing emotional resonance becomes a problem. Not because the AI is doing anything wrong, but because we don't know how to feel about it.

It's important to say that not all suspicion of AI is irrational. Algorithms have been shown to reflect and reinforce bias, especially in areas like recruitment, policing and credit scoring. If you've been harmed or disadvantaged by data systems before, you're not being paranoid, you're being cautious.

This links to a broader psychological idea: learned distrust. When institutions or systems repeatedly fail certain groups, skepticism becomes not only reasonable, but protective.

Telling people to "trust the system" rarely works. Trust must be earned. That means designing AI tools that are transparent, interrogable and accountable. It means giving users agency, not just convenience. Psychologically, we what we understand, what we can question and what treats us with respect.

If we want AI to be accepted, it needs to feel less like a black box, and more like a conversation we're invited to join.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

