Human beings often behave irrationally—or as an artificially intelligent robot might say, “sub-optimally.” Data, the emotionless yet affable android from Star Trek: The Next Generation, frequently struggled to understand humans’ flawed decision-making processes. If he had been programmed with a new model developed by researchers at MIT and the University of Washington, he might have had an easier time.

In a paper published last month, Athul Paul Jacob, a Ph.D. student in AI at MIT, Dr. Jacob Andreas, his academic advisor, and Abhishek Gupta, an assistant professor in computer science and engineering at the University of Washington, described a novel approach to modeling an agent’s behavior. They employed their method to predict human goals and actions.

Jacob, Andreas, and Gupta introduced what they call a “latent inference budget model.” The key innovation is the model’s ability to infer an agent’s hidden computational constraints (its “inference budget”) from its previous actions. These constraints often lead to sub-optimal choices. Time, for example, is a common constraint on human decision-making: when faced with a difficult decision, people rarely spend hours weighing every possible outcome. Instead, they decide quickly, without gathering all the available information.

Existing models can account for irrational decision-making, but they generally treat errors as if they occur at random. In reality, humans and machines make mistakes in more predictable patterns. The latent inference budget model identifies these patterns and uses them to forecast future behavior.
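
To make the idea concrete, here is a minimal, hypothetical sketch of how an inference-budget approach might look in code. It models an agent as a depth-limited planner in a toy grid world, infers the planning depth that best explains a handful of observed moves, and then uses that inferred depth to predict the next move. The grid world, reward values, softmax policy, and every function name below are illustrative assumptions, not the researchers’ actual model.

```python
# Hypothetical sketch (not the authors' code): treat the agent as a
# depth-limited planner, infer its latent "inference budget" (planning depth)
# from past actions, then reuse that depth to predict future actions.

import math
from collections import defaultdict

# Toy deterministic 4x4 grid world: states are (x, y); reaching GOAL pays off.
GOAL = (3, 3)
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def step(state, action):
    dx, dy = ACTIONS[action]
    x, y = state[0] + dx, state[1] + dy
    return (min(max(x, 0), 3), min(max(y, 0), 3))  # stay inside the grid

def reward(state):
    return 10.0 if state == GOAL else -1.0  # small cost for every extra move

def depth_limited_value(state, depth):
    """Value of a state for a planner that only looks `depth` steps ahead."""
    if depth == 0 or state == GOAL:
        return reward(state)
    return reward(state) + max(
        depth_limited_value(step(state, a), depth - 1) for a in ACTIONS
    )

def action_probs(state, depth, temperature=1.0):
    """Softmax policy over actions, given a bounded planning depth."""
    scores = {a: depth_limited_value(step(state, a), depth) for a in ACTIONS}
    z = sum(math.exp(s / temperature) for s in scores.values())
    return {a: math.exp(s / temperature) / z for a, s in scores.items()}

def infer_budget(trajectory, max_depth=6):
    """Pick the planning depth that best explains observed (state, action) pairs."""
    log_liks = defaultdict(float)
    for depth in range(1, max_depth + 1):
        for state, action in trajectory:
            log_liks[depth] += math.log(action_probs(state, depth)[action])
    return max(log_liks, key=log_liks.get)

# Observed behavior: an agent moving somewhat sub-optimally toward the goal.
observed = [((0, 0), "right"), ((1, 0), "up"), ((1, 1), "right"), ((2, 1), "up")]

budget = infer_budget(observed)
print("Inferred planning depth:", budget)
print("Predicted next move from (2, 2):", action_probs((2, 2), budget))
```

In this sketch, the “budget” is simply the lookahead depth whose predictions best match the observed moves; once inferred, the same depth is used to forecast what the agent will do next, which is the pattern-based prediction described above.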

The researchers tested their model in three scenarios: navigating a maze, predicting a human chess player’s next move, and interpreting a human speaker’s intent from a quick utterance. In each case, the model matched or outperformed existing approaches at predicting behavior.

Jacob noted that the research highlighted the fundamental role of planning in human behavior. He stated, “Certain people are not inherently rational or irrational. It’s just that some people take extra time to plan their actions while others take less. The depth of planning, or how long someone thinks about the problem, is a really good proxy for how humans behave.”

Jacob envisions the model being utilized in future robotic helpers or AI assistants. “If we know that a human is about to make a mistake, having seen how they have behaved before, the AI agent could step in and offer a better way to do it. Or the agent could adapt to the weaknesses that its human collaborators have,” he said.

This research is part of a broader effort to develop tools that help AI predict human decision-making. Most researchers in this field anticipate positive outcomes, such as AI seamlessly coordinating with humans to assist in everyday tasks, boost productivity, and even act as companions.

However, there are potential dystopian applications. AI models designed to predict human behavior could be misused to manipulate individuals. With enough data on how people respond to various stimuli, an AI could be programmed to elicit responses that are not in those individuals’ best interest. If AI becomes highly proficient at this, it raises urgent questions about whether humans truly exercise free will or are merely automata reacting to external forces.

By Impact Lab