By Joel Hruska
Over the past few years, AI has gone from a niche topic to an exploding field. AI can improve audio and video quality, animate still images of long-dead people, and identify you from an analprint. One thing it hasn’t been able to do? Argue effectively within the context of a formal debate.
To address this gap, IBM created Project Debater, an AI development program focused on exactly what it sounds like. Many AI projects, especially those focused on gaming, have a clear winner and loser based on numerical criteria such as pieces captured, lives lost, or kill-to-death ratio. Effectively debating a human requires a vastly different skill set.
A recent paper in Nature describes the results of a 2019 test between Project Debater and globally recognized debate champion Harish Natarajan. The AI and the human debated whether preschool should be subsidized. Each side was given 15 minutes of prep time with no internet access; Project Debater used that time to sort through its own internal database of content. Both sides gave a four-minute speech, followed by a two-minute closing statement.
Ultimately, Natarajan was judged to have won the debate, but Project Debater held its own, forming logical statements and arguments over the course of the discussion.
The researchers who developed Project Debater can't compare it with other systems of its type. There aren't any. Instead, they had Project Debater generate a single opening speech and compared it against speeches produced by various other methods.
In the graph below, “Summit” is a multi-document summarization system, Speech-GPT2 is a fine-tuned language model, and Arg-GPT2 speeches were generated by concatenating arguments. Arg-Search refers to speeches extracted using ArgumenText. Arg-Human1 and Arg-Human2 refer to hybrid approaches that paired Project Debater’s argument-mining module with human authorship and verification. Finally, speeches from human experts were included.
The graph above shows how each approach scored, on a scale where 5 indicates “Strongly Agree” and 1 means “Strongly Disagree.” Readers were asked to rate their agreement with the statement: “This speech is a good opening speech for this topic.” This graph is not a full test of Project Debater’s capabilities, since it only evaluates opening speeches, but it demonstrates that the system is capable of producing coherent arguments. IBM has a website for the project with links to the white paper, podcast, and the 2019 debate on preschool subsidies if you’d like to see more of how the system performed in action.
The question of who wins a debate will always be subjective, and humans still clearly outperform IBM’s Project Debater. For now, we’re still a long way from Data — but we’ve come a long way from Eliza, too.