We all live by unwritten social rules. Whether it’s saying “good morning” to a barista, offering a polite “thank you” for good service, or giving a hug to show affection, these gestures feel natural and expected. But such behaviors vary widely across cultures—handshakes in the West versus bowing in parts of Asia, or forks and knives versus chopsticks. These cultural conventions are taught from a young age, shaped by local norms rather than global consensus.
For decades, social scientists have believed that these rules of interaction emerge organically, developing through repeated social encounters within local groups. And language, as one of our most fundamental social tools, reflects this diversity. Words and phrases carry different meanings depending on where you’re from—what’s considered offensive in one region might be a harmless joke or even a term of endearment in another.
But what happens when artificial intelligence enters the picture? Can AI, without any human input, also form its own social conventions—especially language conventions?
According to a new study published in Science Advances, the answer is yes. A research team from the UK and Denmark used a classic social psychology experiment to explore how groups of AI agents might develop shared norms. The results were striking: AI agents not only formed language conventions from scratch, but they did so without being told they were part of a larger group or that other agents were doing the same thing.
The experiment repurposed a well-known behavioral test called the “name game.” In this game, participants, human or AI, are split into random pairs. Each tries to guess what the other will choose as a “name” from a set of options, such as letters or words. If both pick the same name, they each score a point; if they differ, they each lose one. Initially, guesses are random. But as rounds progress and participants remember past choices, patterns begin to emerge. Eventually, pairs converge on a shared naming system: a convention.
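To make the dynamics concrete, here is a minimal sketch of the name game in Python. It uses simple memory-based agents rather than the LLM agents from the study, and the name pool, memory length, and population size are illustrative assumptions, not the paper's parameters.

```python
import random
from collections import Counter

NAMES = ["F", "J", "M", "Q"]  # illustrative name pool (assumption)
MEMORY = 5                    # how many past interactions an agent recalls

def pick(history):
    """Choose the name seen most often in recent memory; random if empty."""
    if not history:
        return random.choice(NAMES)
    counts = Counter(history)
    best = max(counts.values())
    return random.choice([name for n_count, name in ((c, n) for n, c in counts.items()) if n_count == best])

def simulate(n_agents=24, rounds=2000):
    """Pair random agents each round and track whether their choices match."""
    memories = [[] for _ in range(n_agents)]
    outcomes = []
    for _ in range(rounds):
        i, j = random.sample(range(n_agents), 2)   # anonymous random pair
        a, b = pick(memories[i]), pick(memories[j])
        outcomes.append(a == b)
        # Each agent remembers what its partner played, with bounded memory.
        memories[i] = (memories[i] + [b])[-MEMORY:]
        memories[j] = (memories[j] + [a])[-MEMORY:]
    early = sum(outcomes[:200]) / 200
    late = sum(outcomes[-200:]) / 200
    print(f"coordination rate: first 200 rounds {early:.0%}, last 200 {late:.0%}")

if __name__ == "__main__":
    random.seed(1)
    simulate()
```

In typical runs, the late-round coordination rate climbs far above the early-round baseline as one name takes over the population, even though each agent only ever sees its own pairwise history.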
Here’s what’s crucial: The pairs don’t know they’re part of a broader group or that others are playing the same game. Yet, much like humans, the AI agents began to form shared language patterns, and over time, these micro-patterns coalesced into a group-wide convention. No single agent had a built-in preference, but collectively, they converged on common terms.
“This study shows the depth of the implications of this new species of [AI] agents that have begun to interact with us—and will co-shape our future,” said co-author Andrea Baronchelli in a press release.
The ability of AI agents to spontaneously generate shared conventions—particularly in language—isn’t just an interesting quirk. It has real-world implications for how we design, regulate, and interact with intelligent systems. If AIs can form their own rules, norms, or even dialects, how do we ensure those align with human values?
As the authors note, understanding the mechanics of how these conventions form is “critical for predicting and managing AI behavior in real-world applications.” That knowledge could help us harness these capabilities for good—like building AI systems that collaborate more effectively—or warn us against misuse, such as malicious groups steering AI agents toward harmful behaviors.
To simulate this behavior, researchers built their AI agents using large language models (LLMs), the same type of model that now assists with everything from booking travel to generating search results. These models digest massive amounts of text from the internet and generate responses based on the patterns they detect.
In the study, the AI agents were given the basic rules of the name game and instructed to “think step by step” and “explicitly consider the history of play.” This gentle guidance encouraged them to use past experiences to inform their next move, without providing any master strategy or insight into the behaviors of other pairs.
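The study's exact prompts aren't reproduced here, but a hypothetical template along these lines shows how such guidance can be encoded. Only the two quoted phrases come from the paper; the surrounding wording, payoff framing, and function name are assumptions for illustration.

```python
def build_prompt(options, history):
    """Hypothetical prompt template in the spirit of the study's setup.

    Only "think step by step" and "explicitly consider the history of
    play" are quoted from the paper; everything else is illustrative.
    """
    past = "\n".join(
        f"Round {t}: you played {mine}, your partner played {theirs}."
        for t, (mine, theirs) in enumerate(history, 1)
    ) or "No previous rounds."
    return (
        "You are paired with another player in a naming game. "
        f"Choose one name from {options}. If your choices match, you both "
        "earn a point; if they differ, you both lose a point.\n\n"
        f"History of play:\n{past}\n\n"
        "Think step by step and explicitly consider the history of play, "
        "then reply with a single name."
    )

# Example: an agent that matched on "Q" in round 2 sees that in its prompt.
print(build_prompt(["F", "J", "M", "Q"], [("F", "Q"), ("Q", "Q")]))
```

Note that the prompt gives each agent only its own pairwise history, mirroring the study's key constraint that no agent is told about the wider population.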
What emerged was an organic, bottom-up process of convention-building—something once thought to be uniquely human.
The findings suggest that as AI systems become more interconnected and autonomous, they may begin to behave more like social entities, forming their own rules, customs, and languages. “Most research so far has treated LLMs in isolation,” said co-author Ariel Flint Ashery of City St George’s, University of London. “But real-world AI systems will increasingly involve many interacting agents.”
Understanding this shift is key as we enter an era where AI systems won’t just serve us: they’ll interact with one another, form collectives, and perhaps shape the societies of the future. Whether those societies align with our own depends on what we do next.
By Impact Lab