We’re at the halfway point of the epic 20-day, 150,000-hand “Brains Vs. Artificial Intelligence” Texas Hold’em Poker tournament, and a machine named Libratus is trouncing a quartet of professional human players. Should the machine maintain its substantial lead—currently at $701,242—its victory will be considered a major milestone in the history of AI. Here’s why.
Given the early results, it appears that we’ll soon be able to add Heads-Up No-Limit Texas Hold’em poker (HUNL) to the list of games in which AI has surpassed the best humans, a growing list that includes Othello, chess, checkers, Jeopardy!, and, as we witnessed last year, Go. Unlike chess and Go, however, this popular version of poker involves bluffing, hidden cards, and imperfect information, which machines find notoriously difficult to handle. Computer scientists have described HUNL as the “last frontier” of game solving, so beating top humans at it would signify a milestone in the development of AI, and a major step toward more human-like machine intelligence.
The “Brains Vs. Artificial Intelligence” tournament began on January 11th at Rivers Casino in Pittsburgh. It pits Libratus, an AI developed by computer scientists at Carnegie Mellon University, against four professional human players, Dong Kim, Jimmy Chou, Jason Les, and Daniel McAulay. The human players are competing for $200,000 in prize money, but serious bragging rights are at stake, too: they are among the best HUNL players in the world, but their opponent is formidable.
As of the weekend, Libratus (whose name means “balanced” in Latin) had amassed a lead of $459,154 in chips across nearly 5,000 hands by the end of its ninth day of play. By the end of play on Monday, the machine’s overall lead over the four humans stood at a daunting $701,242. Frustratingly, the players can’t seem to get a leg up on the artificial poker player. “The bot gets better and better every day,” said Chou in a Carnegie Mellon statement. “It’s like a tougher version of us.”
Limit Texas Hold’em was “solved” by AI back in 2015, but HUNL represents a much bigger challenge for AI developers. Some cards are hidden, and competitors can only see a small portion of what’s happening in the game at any given time. In order to win, players have to rely on their gut instincts, guessing what other players might be doing. In other words, unlike previous game-playing AI, Libratus has to deal with uncertainties and game-playing characteristics that were considered the exclusive domain of humans.
To make it work, a Carnegie Mellon team led by computer science professor Tuomas Sandholm, along with his Ph.D. student Noam Brown, equipped Libratus with algorithms that allow it to analyze the rules of poker and set its own strategy. Incredibly, these learning algorithms are not specific to poker.
Using a powerful supercomputer called Bridges, Libratus refines its poker-playing skills by sifting through past games, including those played at the current tournament. During games, Bridges performs calculations in real time, helping Libratus compute endgame strategies for each hand.
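The team hasn’t published Libratus’s source code, but Sandholm and Brown’s research on imperfect-information games centers on counterfactual regret minimization, in which a program plays against itself over and over and shifts probability toward the actions it regrets not having taken. As a loose, toy-scale illustration only (the names here are mine, and rock-paper-scissors stands in for poker), the core “regret matching” update looks something like this:

```python
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """Return +1 if action a beats b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def regret_matching(regrets):
    """Mix actions in proportion to positive regret; uniform if none."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    if total > 0:
        return [p / total for p in pos]
    return [1.0 / ACTIONS] * ACTIONS

def train(iterations):
    # Seed one player with a small asymmetric regret so play isn't
    # trivially uniform from the very first step.
    r1, r2 = [1.0, 0.0, 0.0], [0.0] * ACTIONS
    sum1 = [0.0] * ACTIONS  # running total of player 1's strategies
    for _ in range(iterations):
        s1, s2 = regret_matching(r1), regret_matching(r2)
        for a in range(ACTIONS):
            sum1[a] += s1[a]
        # Expected utility of each action against the opponent's mix.
        u1 = [sum(s2[b] * payoff(a, b) for b in range(ACTIONS))
              for a in range(ACTIONS)]
        u2 = [sum(s1[a] * payoff(b, a) for a in range(ACTIONS))
              for b in range(ACTIONS)]
        ev1 = sum(s1[a] * u1[a] for a in range(ACTIONS))
        ev2 = sum(s2[b] * u2[b] for b in range(ACTIONS))
        # Regret = how much better each action would have done.
        for a in range(ACTIONS):
            r1[a] += u1[a] - ev1
            r2[a] += u2[a] - ev2
    total = sum(sum1)
    return [s / total for s in sum1]  # the average strategy over all rounds

avg = train(100_000)
# For rock-paper-scissors, the equilibrium is to play each action a third
# of the time, and the average strategy should land close to that.
```

Over 100,000 self-play iterations, the average strategy settles near the game’s equilibrium of playing each action about a third of the time. Libratus applies far more sophisticated variants of this idea to poker’s astronomically larger decision tree.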
Writing at Wired, Cade Metz cautions that Libratus is succeeding at the tournament, but not without human help. The machine’s play does appear to be changing dramatically from day to day, leading Metz to insinuate that Carnegie Mellon researchers are somehow altering the system’s behavior as the match goes on.
But Sandholm says these day-to-day changes are not surprising, given that the Bridges computer is performing calculations to sharpen the AI’s strategy. Libratus’ evolution over the course of the tournament has been discouraging for the human players. “The first couple of days, we had high hopes,” said Chou. “But every time we find a weakness, it learns from us and the weakness disappears the next day.”
But are these improvements, as Metz suggests, the result of human intervention? That seems unlikely.
The Libratus-Bridges collaboration is fueled by tremendous computing power (Bridges has access to 15 million core hours of computation and 2.5 petabytes of data) and the adaptive power of machine learning. Libratus is obviously going to alter its behavior over time, learning from its opponents and from its own successes and mistakes. Qualitatively speaking, the AI that leaves the tournament won’t be the same one that entered it. It’s also worth pointing out that the human players have been sharing notes and tips with each other, hunting for any weaknesses in the machine’s gameplay.
Playing and winning at poker is all well and good, but this system could be adapted for a wide range of applications. As Sandholm notes, most real-world situations are “games” of incomplete information. He foresees the day when a similar system could be used for negotiations, cybersecurity, and medical treatment planning.
More conceptually, Libratus also represents a major step forward in the quest to develop artificial general intelligence (AGI). Today’s artificial intelligence may be exceptional at one specific task, like playing chess or Go, but it tends to be incredibly stupid outside its narrow focus. AGI, on the other hand, is adaptable, flexible, and capable of learning all sorts of new information, like the rudiments of poker or the finer details of commodities trading.
Our brains are a prime example of biological general intelligence. With this recent AI breakthrough, and Libratus’ apparent victory at a major poker tournament, we’re inching steadily closer to an artificial intellect that truly acts and thinks like a human.