Disinformation campaigns used to consist of trolls and bots orchestrated and manipulated to produce a desired result. Increasingly, though, these campaigns are able to find willing human participants to amplify their messages and even generate new ones on their own.
The big picture: It’s as if they’re switching from employees to volunteers — and from participants who are in on the game to those who actually believe the disinformational payload they’re delivering.
Why it matters: Understanding this changing nature is critical to preparing for the next generation of information threats, including those facing the 2020 presidential campaign.
Speaking at Stanford University Tuesday, researcher Kate Starbird — a University of Washington professor who runs a lab that studies mass participation — traced the change across the stories of three different campaigns.
1. Russian interference in the 2016 election: Starbird’s work started not with studying disinformation, but with an analysis of the debate that raged on Twitter over the Black Lives Matter movement.
It was only after Twitter released data on Russian propagandists in November 2017 that her team realized that some of the most prolific posters — on both sides of the debate — were fictional personas created by the Russians.
“In a few cases, we can see them arguing with themselves,” said Starbird.
2. Syria’s “White Helmets”: In this case, an aid group known as the White Helmets working in Syria was attacked by online critics for a host of alleged atrocities.
Here Russia was actively involved in stirring the pot, but the posters themselves were neither bots nor trolls, but activists who adopted the issue as their own.
“These are real people who are sincere believers of the content they are sharing,” Starbird said.
Russian media, including Sputnik and RT, made the movement appear significantly larger, though, by interviewing activists and giving them both a platform and a veneer of legitimacy.
3. Conspiracy theories tied to mass-casualty events: People are predisposed to find conspiracies in every tragedy, and conspiracy theories have accompanied all manner of mass-casualty events such as the Boston Marathon bombing and Sandy Hook shooting.
The theories crop up organically, though Russian or other disinformation promoters can and do help amplify the messages.
Terms like “false flag” and “crisis actors” are applied to the victims, flipping the script of whatever has transpired.
“It’s almost like a self-sustaining community, but you can see it’s been shaped by disinformation campaigns of the past,” Starbird said.
All these factors, she said, make these cases the “most frightening” she’s studied.
Between the lines: Not all of the disinformation has come from Russia, Starbird said, but added: “They have been innovators in this space.”
What’s next: Starbird recommended a couple of actions for the tech companies.
First, she urged them to look at entire campaigns, rather than focusing on the veracity of individual posts. While Twitter and Facebook tend to look at posts in isolation, the creators of disinformation are focused on an overall campaign, a set of narratives with a larger point, she said.
Starbird also said tech companies should discount false claims of conservative bias that, she suggested, are being leveled by the disinformation’s beneficiaries.
“The people that have benefited are now in power in a lot of places,” she said. “Anything the companies do to take a chunk [of their power away] is going to be called bias.”
Meanwhile: Many of the next disinformation threats may be domestic, notes former Facebook security chief Alex Stamos, who now teaches at Stanford. And those will be harder for law enforcement to investigate given that in many cases there is no law being broken.