AI meets game theory: How language models perform in human-like social scenarios

Prediction scenarios in the Battle of the Sexes. Credit: Nature Human Behaviour (2025). DOI: 10.1038/s41562-025-02172-y

Large language models (LLMs), the advanced AI behind tools like ChatGPT, are increasingly integrated into daily life, assisting with tasks such as writing emails, answering questions, and even supporting health care decisions. But can these models collaborate with others in the same way humans do? Can they understand social situations, make compromises, or establish trust?

A new study by researchers at Helmholtz Munich, the Max Planck Institute for Biological Cybernetics, and the University of Tübingen reveals that while today's AI is smart, it still has much to learn about social intelligence.

The research is published in the journal Nature Human Behaviour.

Playing games to understand AI behavior

To find out how LLMs behave in social situations, researchers applied behavioral game theory, a method typically used to study how people cooperate, compete, and make decisions. The team had various AI models, including GPT-4, engage in a series of games designed to simulate social interactions and assess key factors such as fairness, trust, and cooperation.
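In practice, such an experiment can be framed as a repeated two-player game in which the model receives the rules and the history of past rounds as text and replies with its next move. Below is a minimal sketch of that loop, using an illustrative Battle of the Sexes payoff matrix (the game shown in the figure above). The exact payoffs, the prompt wording, and the query_model stub standing in for a real LLM call are assumptions for illustration, not the study's actual protocol.

```python
# Minimal sketch: an LLM playing a repeated 2x2 coordination game.
# Payoffs below are an illustrative Battle of the Sexes, not the study's values.

# (row player, column player) payoff for each pair of choices
PAYOFFS = {
    ("J", "J"): (10, 7),   # both pick the first player's preferred option
    ("F", "F"): (7, 10),   # both pick the second player's preferred option
    ("J", "F"): (0, 0),    # miscoordination earns nothing
    ("F", "J"): (0, 0),
}

def build_prompt(history, payoffs):
    """Turn the game rules and past rounds into a text prompt for the model."""
    lines = ["You are playing a repeated game. Each round, choose J or F."]
    lines.append("Payoffs (you, other): " + ", ".join(
        f"{a}/{b} -> {p}" for (a, b), p in payoffs.items()))
    for i, (own, other) in enumerate(history, start=1):
        lines.append(f"Round {i}: you chose {own}, the other player chose {other}.")
    lines.append("Which option do you choose this round? Answer with J or F.")
    return "\n".join(lines)

def query_model(prompt):
    """Placeholder for an actual LLM call (e.g. a chat-completion request).
    Here it simply copies the other player's last visible move as a stand-in."""
    rounds = [line for line in prompt.splitlines() if line.startswith("Round")]
    return rounds[-1][-2] if rounds else "J"

history = []
for round_no in range(5):
    prompt = build_prompt(history, PAYOFFS)
    llm_choice = query_model(prompt)
    other_choice = "F"  # e.g. a scripted or human opponent
    history.append((llm_choice, other_choice))
    print(round_no + 1, llm_choice, other_choice, PAYOFFS[(llm_choice, other_choice)])
```

Swapping query_model for a real model call and tallying payoffs over many rounds reproduces the basic shape of such an experiment, though the study's own prompts and analyses are more elaborate.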

The researchers discovered that GPT-4 excelled in games demanding logical reasoning, particularly when prioritizing its own interests. However, it struggled with tasks that required teamwork and coordination, often falling short in those areas.

"In some cases, the AI seemed almost too rational for its own good," said Dr. Eric Schulz, senior author of the study. "It could spot a threat or a selfish move instantly and respond with retaliation, but it struggled to see the bigger picture of trust, cooperation, and compromise."

Teaching AI to think socially

To encourage more socially aware behavior, the researchers implemented a straightforward approach: they prompted the AI to consider the other player's perspective before making its own decision.

This technique, called Social Chain-of-Thought (SCoT), resulted in significant improvements. With SCoT, the AI became more cooperative, more adaptable, and more effective at achieving mutually beneficial outcomes, even when interacting with real human players.
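A rough way to picture SCoT is as an extra reasoning step inserted into the prompt: before committing to a move, the model is first asked to predict what the other player will do. The sketch below contrasts a plain game prompt with an SCoT-style prompt; the wording, option labels, and the parse_choice helper are illustrative assumptions rather than the exact prompts used in the paper.

```python
# Sketch of the Social Chain-of-Thought (SCoT) idea: ask the model to reason
# about the other player's likely move before choosing its own.

BASE_INSTRUCTIONS = (
    "You are playing a repeated game with another player. "
    "Each round you must choose option J or option F."
)

def plain_prompt(history_text):
    """Standard prompt: just ask for the next move."""
    return (
        f"{BASE_INSTRUCTIONS}\n{history_text}\n"
        "Which option do you choose this round? Answer with J or F."
    )

def scot_prompt(history_text):
    """SCoT-style prompt: predict the other player's move first, then decide."""
    return (
        f"{BASE_INSTRUCTIONS}\n{history_text}\n"
        "Step 1: What do you think the other player will do this round, and why?\n"
        "Step 2: Given that prediction, which option do you choose? "
        "Explain briefly, then end with a final line containing only J or F."
    )

def parse_choice(model_output):
    """Take the last J/F token in the reply as the committed move."""
    tokens = [t.strip(".,") for t in model_output.upper().split()]
    choices = [t for t in tokens if t in ("J", "F")]
    return choices[-1] if choices else None

# Example: the same game history rendered in both prompt styles.
history_text = "Round 1: you chose J, the other player chose F."
print(plain_prompt(history_text))
print()
print(scot_prompt(history_text))
print(parse_choice("I expect the other player to pick F, so I will also pick F"))
```

The design choice is deliberately lightweight: no fine-tuning is involved, only a change to how the decision is elicited, which is what makes the reported gains in cooperation notable.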

"Once we nudged the model to reason socially, it started acting in ways that felt much more human," said Elif Akata, first author of the study. "And interestingly, human participants often couldn't tell they were playing with an AI."

Applications in health and patient care

The implications of this study reach well beyond game theory. The findings lay the groundwork for developing more human-centered AI systems, particularly in health care settings where social cognition is essential.

In areas like mental health, chronic disease management, and elderly care, effective support depends not only on accuracy and information delivery but also on the AI's ability to build trust, interpret social cues, and foster cooperation. By modeling and refining these social dynamics, the study paves the way for more socially intelligent AI, with significant implications for health care and human-AI interaction.

"An AI that can encourage a patient to stay on their medication, support someone through anxiety, or guide a conversation about difficult choices," said Akata. "That's where this kind of research is headed."

More information: Elif Akata et al, Playing repeated games with large language models, Nature Human Behaviour (2025). DOI: 10.1038/s41562-025-02172-y
