Imagine sitting for a two-hour conversation with an AI, answering questions about your childhood, career, and personal beliefs. Shortly after, a virtual version of you—a “digital twin”—emerges, mimicking your values, preferences, and decision-making with remarkable accuracy.
This concept is no longer hypothetical. A recent study by researchers from Stanford and Google DeepMind, published on arXiv, demonstrates that AI models can create such digital replicas. Led by Stanford PhD candidate Joon Sung Park, the team developed simulation agents—AI constructs designed to mirror human behaviors—based on interviews with 1,000 diverse participants. These agents replicated their human counterparts' responses with 85% similarity across personality tests, social surveys, and logic games.
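For intuition, here is a minimal Python sketch of what a per-question agreement score between an agent and its human counterpart could look like. The survey items and answers below are invented for illustration, and the study's actual scoring is more involved than a raw match rate.

```python
# Hypothetical sketch: measure how often a simulation agent's survey
# answers match its human counterpart's. All questions and answers
# here are invented; the study's real metric is more sophisticated.

def agreement(agent_answers: dict[str, str], human_answers: dict[str, str]) -> float:
    """Fraction of shared survey questions answered identically."""
    shared = agent_answers.keys() & human_answers.keys()
    if not shared:
        return 0.0
    matches = sum(agent_answers[q] == human_answers[q] for q in shared)
    return matches / len(shared)

human = {
    "trusts_strangers": "sometimes",
    "political_lean": "moderate",
    "life_satisfaction": "high",
}
agent = {
    "trusts_strangers": "sometimes",
    "political_lean": "moderate",
    "life_satisfaction": "medium",
}

print(f"Agreement: {agreement(agent, human):.0%}")  # Agreement: 67%
```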
“If you can have a bunch of small ‘yous’ running around and making decisions as you would—that’s the future,” Park says.
The primary goal of these simulation agents is to transform social science and behavioral research. They enable experiments that would otherwise be impractical, expensive, or ethically challenging to perform with real people.
For example, researchers could use simulation agents to study the effects of misinformation on social media or analyze behaviors that lead to traffic congestion. Unlike tool-based agents, which perform tasks like data retrieval or appointment scheduling, simulation agents focus on replicating human decision-making and interactions.
“This paper demonstrates how you can create hybrid models: using real human input to generate personas that can be simulated programmatically,” says John Horton, associate professor at MIT Sloan School of Management.
While the technology holds transformative potential, it also raises significant ethical concerns. Just as AI image generators have led to harmful deepfakes, simulation agents could enable the creation of unauthorized digital replicas of individuals, risking misuse in ways that could damage reputations or manipulate public opinion.
Additionally, the evaluation methods used to validate these agents—such as the General Social Survey and assessments of personality traits—are relatively basic and don’t capture the full complexity of human behavior. For instance, the agents performed less effectively in behavioral experiments like the “dictator game,” which assesses fairness and altruism.
To create realistic digital twins, researchers rely on interviews to distill personal experiences into a format AI can process. Park emphasizes the power of qualitative interviews in uncovering unique aspects of individuals.
“A two-hour interview can reveal so much about someone—details you wouldn’t find in a survey,” he explains. For instance, a life-changing event like surviving cancer could significantly shape someone’s worldview and behaviors, yet such nuances are difficult to capture through traditional survey methods.
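To make that pipeline concrete, here is a minimal, hypothetical sketch of the general approach: condense interview excerpts into a persona prompt, then ask a language model to answer survey questions in character. The model name, prompt wording, and transcript are assumptions for illustration, not the study's actual implementation.

```python
# Hypothetical sketch of the interview-to-agent idea: an interview
# transcript becomes a persona prompt, and an LLM answers survey
# questions "as" that person. Model and prompts are assumed here,
# not taken from the study.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

interview_transcript = """
Interviewer: Tell me about a turning point in your life.
Participant: Surviving cancer in my thirties changed how I weigh risk.
I left a corporate job to teach, and I volunteer every weekend.
"""

def simulate_answer(transcript: str, question: str) -> str:
    """Ask the model to answer a survey question in the persona's voice."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice; the study's setup may differ
        messages=[
            {"role": "system",
             "content": ("You are role-playing the person described in the "
                         "following interview. Answer survey questions as "
                         "they would, in first person.\n\n" + transcript)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(simulate_answer(interview_transcript,
                      "On a scale of 1-5, how willing are you to take risks?"))
```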
Companies like Tavus, which specialize in creating digital twins, have typically relied on large datasets such as customer emails to replicate personalities. However, this new research suggests that a brief interview with AI could be a more efficient approach.
This study represents a significant leap forward in AI’s ability to model human behavior, offering exciting opportunities for industries ranging from personalized services to behavioral research. Still, as the technology advances, ethical safeguards will be critical to ensure its responsible use.
“This is just the beginning,” says Park. “By leveraging AI to understand and replicate human behavior, we are pushing the boundaries of what’s possible in both research and technology.”
By Impact Lab