A geometric link: Convexity may bridge human and machine intelligence

Peeking inside AI brains: Machines learn like us
A new connection between human and machine learning has been discovered: While conceptual regions in human cognition have long been modeled as convex regions, Tetkova et al. present new evidence that convexity plays a similar role in AI. So-called pretraining by self-supervision leads to convex conceptual regions, and the more convex the regions are, the better the model will learn a given specialist task during supervised fine-tuning. Credit: DTU

In recent years, with the public availability of AI tools, more people have become aware of how closely the inner workings of artificial intelligence can resemble those of a human brain.

There are several similarities in how machines and human brains work; for example, in how they represent the world in abstract form, generalize from limited data, and process data in layers. A new study in Nature Communications by DTU researchers adds another feature to the list: convexity.

"We found that convexity is surprisingly common in deep networks and might be a fundamental property that emerges naturally as machines learn," says Lars Kai Hansen, a DTU Compute professor who led the study.

To briefly explain the concept: when we humans learn about a "cat," we don't just store a single image, but build a flexible understanding that allows us to recognize all sorts of cats—be they big, small, fluffy, sleek, black, white, and so on.

Borrowed from mathematics, where it describes the shape of geometric regions, the term convexity was applied to human cognition by the cognitive scientist Peter Gärdenfors, who proposed that our brains form conceptual spaces where related ideas cluster. And here's the crucial part: Natural concepts, like "cat" or "wheel," tend to form convex regions in these mental spaces. In short, one could imagine a rubber band stretching around a group of similar ideas—that's a convex region.

Think of it like this: Inside the perimeter of the rubber band, if you have two points representing two different cats, any point on the straight line between them also falls within the mental "cat" region. Such convexity is powerful, as it helps us generalize from a few examples, learn new things quickly, and even helps us communicate and agree on what things mean. It's a fundamental property that makes human learning robust, flexible and social.
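In formal terms (the standard textbook definition, not notation taken from the study itself), a region C is convex when every straight segment between two of its points stays inside it; written in LaTeX:

    \forall\, x, y \in C,\ \forall\, t \in [0, 1]: \quad (1 - t)\,x + t\,y \in C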

When it comes to deep neural networks—the engines behind everything from image generation to chatbots—they learn by transforming raw data like pixels or words into complex internal representations, often called "latent spaces." These spaces can be viewed as internal maps where the AI organizes its understanding of the world.

Measuring AI's internal structure

To make AI more reliable, trustworthy and aligned with human values, there is a need for better ways to describe how it represents knowledge. Therefore, it is critical to determine whether machine-learned spaces are organized in a way that resembles human conceptual spaces and whether they, too, form convex regions for concepts.

The first author of the paper, Lenka Tetkova, who is a postdoc at DTU Compute, dove into this very question, looking at two main types of convexity.

The first is Euclidean convexity, which is straightforward: If you take two points within a concept in a model's latent space, and the straight line between them stays entirely within that concept, then the region is Euclidean convex. This is like generalizing by blending known examples.
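To make the idea concrete, here is a minimal Python sketch of such a check. It is an illustration, not the authors' actual code: the function concept_of, which assigns a latent vector to a concept, is a hypothetical stand-in for whatever classifier or nearest-neighbor rule a study would use.

    import numpy as np

    def euclidean_convexity_score(z_a, z_b, concept, concept_of, n_steps=20):
        # Sample points along the straight segment between two latent
        # vectors z_a and z_b that both belong to `concept`.
        ts = np.linspace(0.0, 1.0, n_steps)
        segment = [(1.0 - t) * z_a + t * z_b for t in ts]
        # Count how many sampled points are still assigned to the concept;
        # a score of 1.0 means the segment never leaves the region.
        inside = [concept_of(z) == concept for z in segment]
        return float(np.mean(inside))

Averaged over many pairs of points, scores near 1.0 suggest the concept region behaves convexly in the Euclidean sense, while lower scores flag segments that wander outside it.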

The other is graph convexity, which is more flexible and especially important for the curved geometries often found in AI's internal representations. Imagine a network of similar data points—if the shortest path between two points within a concept stays entirely inside that concept, then it's graph-convex. This reflects how models might generalize by following the natural structure of the data.
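Graph convexity can be sketched in the same spirit. The snippet below is again a hypothetical illustration, assuming a k-nearest-neighbor graph over the latent points and using scipy's shortest-path routine; the paper's exact construction may differ.

    import numpy as np
    from scipy.sparse.csgraph import dijkstra
    from sklearn.neighbors import kneighbors_graph

    def graph_convexity_score(Z, labels, i, j, k=10):
        # Z: (n, d) array of latent vectors; labels: concept of each point.
        # Build a k-NN graph weighted by Euclidean distance and symmetrize it.
        graph = kneighbors_graph(Z, n_neighbors=k, mode="distance")
        graph = graph.maximum(graph.T)
        # Shortest paths from point i, with predecessors so the
        # path to point j can be reconstructed.
        _, pred = dijkstra(graph, directed=False, indices=i,
                           return_predecessors=True)
        path, node = [j], pred[j]
        while node >= 0 and node != i:
            path.append(node)
            node = pred[node]
        path.append(i)
        # Fraction of points on the path that share the concept of point i.
        inside = [labels[n] == labels[i] for n in path]
        return float(np.mean(inside))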

"We've developed new tools to measure convexity within the complex latent spaces of deep neural networks. We tested these measures across various AI models and data types: images, text, audio, human activity, and even medical data. We found that the same geometric principle that helps humans form and share concepts—convexity—also shapes how machines learn, generalize, and align with us," says Tetkova.

AI's hidden order

The researchers also discovered that these commonalities appear both in pretrained models, which learn general patterns from massive datasets, and in fine-tuned models, which are taught specific tasks like identifying animals. This further substantiates the claim that convexity might be a fundamental property that emerges naturally as machines learn.

When models are fine-tuned for a specific task, the convexity of their decision regions increases. As AI improves at classification, its internal concept regions become more clearly convex, refining its understanding and sharpening its boundaries.

In addition, the researchers discovered that the level of convexity in a pretrained model's concepts can predict how well that model will perform after fine-tuning.
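If that predictive link holds up, the practical recipe is cheap: measure convexity in candidate pretrained models before committing to expensive fine-tuning. A toy illustration of the idea follows; the numbers are invented for demonstration, and only the procedure is meaningful.

    import numpy as np

    # Hypothetical convexity scores for one concept, measured in five
    # pretrained models, and the accuracies those models reach on the
    # same concept after fine-tuning (invented numbers).
    convexity = np.array([0.62, 0.71, 0.80, 0.88, 0.95])
    accuracy = np.array([0.70, 0.76, 0.84, 0.90, 0.96])

    # A strong positive correlation would support using pretrained
    # convexity as an inexpensive predictor of fine-tuning performance.
    r = np.corrcoef(convexity, accuracy)[0, 1]
    print(f"Pearson correlation: {r:.2f}")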

"Imagine that a concept, say, a cat, forms a nice, well-defined convex region in the machine before it's even taught to identify cats specifically. Then it's more likely to learn to identify cats accurately later on. We believe this is a powerful insight, because it suggests that convexity might be a useful indicator of a model's potential for specific learning tasks," says Prof. Hansen.

A route to better AI

According to the researchers, these new results may have several important implications. Identifying convexity as a pervasive property deepens our understanding of how deep neural networks learn and organize information, and it provides a concrete mechanism for how AI generalizes, one that may parallel how humans learn.

If convexity does prove to be a reliable predictor of performance, it may be possible to design AI models that explicitly encourage the formation of convex regions during training. This could lead to more efficient and effective learning, especially in scenarios where only a few examples are available. The findings may therefore provide a crucial new bridge between human cognition and machine intelligence.

"By showing that AI models exhibit properties (like convexity) that are fundamental to human conceptual understanding, we move closer to creating machines that 'think' in ways that are more comprehensible and aligned with our own. This is vital for building trust and collaboration between humans and machines in critical applications like health care, education, and public service," says Tetkova.

"While there's still much to explore, the results suggest that the seemingly abstract idea of convexity may hold the key to unlocking new secrets about AI's internal workings and bringing us closer to intelligent and human-aligned machines."

The research was carried out within the research project "Cognitive Spaces—Next generation explainable AI." The project's aim is to open the machine-learning black box and build tools that explain the inner workings of AI systems with concepts that can be understood by specific user groups.

More information: On convex decision regions in deep network representations, Nature Communications (2025).


