While ChatGPT has propelled OpenAI to a $100 billion valuation and 200 million weekly active users, the darker side of AI is quietly flourishing. Underground markets are exploiting large language models (LLMs) for illegal purposes, with some illicit services earning as much as $28,000 in just two months, according to a recent study published on arXiv by researchers from Indiana University Bloomington.

The study, which analyzed over 200 examples of malicious LLMs (dubbed “malas”) on underground marketplaces from April to October 2023, revealed two primary categories of these illicit models. Some are built as uncensored versions of open-source LLMs, while others are commercial models that have been “jailbroken” using specific prompts to bypass safety protocols.

“We believe now is the right time to study these models, so we can prevent large-scale damage before it occurs,” said Xiaofeng Wang, a professor at Indiana University and co-author of the study. “We want to stay ahead of the curve before attackers can cause significant harm.”

While mainstream LLMs like ChatGPT have safeguards to prevent misuse, hackers have developed illicit versions to cater to the demand for malicious applications. Many of these underground LLMs are created purely for profit. “We found that most of the mala services on underground forums exist mainly to earn profit,” said Zilong Lin, a co-author of the study.

The malicious capabilities of these LLMs range from generating phishing emails to developing sophisticated malware. A separate study cited in the report found that LLMs can cut the cost of producing phishing emails by as much as 96%.

The effectiveness of these black-market LLMs varies, but some have proven to be powerful tools. DarkGPT, for example, which charges users 78 cents per 50 messages, and EscapeGPT, a subscription service priced at $64.98 per month, generated correct, undetected malware code about two-thirds of the time. Another tool, WolfGPT, available for a one-time fee of $150, specializes in phishing emails that successfully bypassed most spam detectors.

Wang notes that the rise of these malicious AI tools is expected. “It’s almost inevitable for cybercriminals to utilize AI,” he said. “Every technology always comes with two sides.”

Andrew Hundt, a computing innovation fellow at Carnegie Mellon University who was not involved in the study, believes stronger measures are needed to curb the misuse of AI. “Policymakers should require AI companies to implement know-your-customer policies to verify a user’s identity,” Hundt suggested. “We also need legal frameworks to ensure that companies offering these models do so responsibly, mitigating the risks posed by malicious actors.”

Wang acknowledges that the fight against malicious AI is far from over. “Research like ours can provide insights and help develop technologies to combat these threats,” he said. “But stopping them entirely is a much bigger challenge that requires more resources than we currently have.”

As AI continues to grow, so does its potential for abuse. Efforts to understand and control the rise of illicit AI will be crucial in preventing significant harm in the digital landscape.

By Impact Lab