Technology is democratizing the power to decide who gets to live and who does not. Are we ready for the consequences?
This is the second installment of “Privatizing the Apocalypse,” a four-part essay being published throughout October. Read Part 1: “The 50/50 Murder” here.
In 2015, a depressive young German named Andreas Lubitz killed himself, five co-workers, and quite a few strangers. He was one of perhaps 1,000 suicidal mass murderers to strike that year worldwide. But an unusual combination of two factors put Lubitz in a ghoulish class of his own.
First, he hatched and executed his plans without anyone’s help — which was not remarkable in itself. But the second factor was scale — in that he really killed a lot of strangers. As in 144 of them. Lubitz’s victims hailed from 18 countries and included infants, retirees, and all ages in between. He killed dozens of times more people than most rampage murderers and almost three times as many as 2017’s Las Vegas shooter, who is (for now) history’s most prolific lone suicidal gunman.
Death tolls at Lubitz’s scale almost always involve an organized terror group, criminal gang, or paramilitary organization. For instance, dozens of suicide bombers have taken more than 100 victims with them. But such people are recruited and (sort of) trained, they have logistical support, and they’re abetted by experts like bomb makers.
Lubitz’s edge was in his weaponry. Of course, the Vegas shooter had quite the arsenal himself, including 22 semi-automatic rifles and several bump stocks (whose clever design helps mass murderers maximize their body counts). But Lubitz had an Airbus A320. And after locking the other pilot out of the cockpit, he plowed it into a mountainside.
This vividly illustrates a lethal reality: When suicidal mass murderers go all in, technology is the main force multiplier. An 18th-century Andreas Lubitz could have scarcely made a dent in a crowded pub. But armed with a jetliner, the modern one killed on the scale of an ISIS squad, with almost no effort.
Tech and death tolls also correlate in the annals of school massacres. A rash of them rocked China early this decade. By macabre coincidence, the last occurred shortly before the Sandy Hook attack. Hours later, almost as many died in that one grotesque act as in all 10 Chinese attacks combined. The key factor is that China’s deadliest retail items are hammers and knives. As weapons, these are medieval grade. Whereas in the United States, almost everyone — including many people on red-flag-hoisting meds and terrorist watch lists — can freely buy modern arsenals.
As Steven Pinker has relentlessly documented, we’ve made vast strides on almost every quantifiable metric of human flourishing, across decades, centuries, and millennia alike. But suicide is a stubborn exception to this iron-adjacent rule. Reliable statistics dating back to the 1950s show ebbs and flows but no steady downward trend. Nor has any medical advance left a trace in the data. Indeed, in the 26 years since the debut of the most touted class of antidepressant (SSRIs), suicide rates in the United States have increased by 10 percent.
Suicides are proportionately rare, afflicting just one person out of more than 6,000 last year. But their absolute number is large: about 1 million worldwide. Most suicides end only one life. But rare outliers take as many people with them as possible.
While the awful malfunction that propels these outliers strikes rarely, suicidal murderers have always been among us. Well-vetted people are less likely to go down this path than most. But no vetting is perfect. For instance, becoming a commercial pilot requires years of stably logging flight hours just to qualify for the job. This alone weeds out all kinds of desperate cases. After that, there’s active and ongoing evaluation by employers and air authorities. Yet in addition to Lubitz, strong evidence indicates that suicidal pilots working for SilkAir, EgyptAir, and Malaysia Airlines also racked up gruesome death tolls.
Not all mass murderers kill indiscriminately. Not all are suicidal. But a very high proportion slaughter as many strangers as possible, with a clear intent to die in the process. And it stands to reason that some proportion of them — perhaps small, but certainly not zero — would gladly extinguish every last one of us.
This may sound a bit breathless. But consider the meticulousness, omnidirectional hatred, and ambition (the word’s use here is revolting but fits) behind Sandy Hook, Orlando, Las Vegas, and hundreds of other massacres. In Las Vegas, Stephen Paddock murdered 58 people rather than 480 only because 422 of those he shot survived — not because of any shred of conscience or sense of proportion. Given more bullets, targets, and time, we can’t sanely imagine he’d have stopped at 480. Or at 4,800.
And even if Paddock did have some perverse upper limit, we can’t sanely imagine that all people who carefully set out to end their lives in the act of killing as many people as physically possible would quit at some arbitrary point. The body counts of Paddock, Lubitz, and Fang Jiantang (who stabbed four to death in China in 2010, attaining just 7 percent of Paddock’s tally and 2.8 percent of Lubitz’s) were limited only by their weaponry. So we can’t say that suicidal mass murderers never want to exterminate humanity. We can only say they don’t get to. For now.
This matters in a time when unforeseen breakthroughs erupt daily from garages and labs, then quickly diffuse from vetted sanctums to the mainstream. Like politics, all killing sprees are local — for now. But future inventions could be twisted to enable global ones. Depending on what looms on tech’s horizon, they could one day pose what philosopher Nick Bostrom has labeled an existential risk. Which is to say, “a human extinction scenario.”
In the face of such a risk, deterrence would sure beat annihilation. But killers bent on suicide are notoriously tough to deter. Some deny this — like the fantasists who burble about armed guards preventing school shootings. But in schools and elsewhere, mass shootings happen under the noses of armed guards all the time.
This includes the canonical spree at Columbine. That school indeed had a well-trained armed defender on duty. But he in no way mitigated the massacre’s outcome. A common retort is that if one gun’s not enough, then lots of campus guns should surely do the trick! But what’s the effective dose? Four armed guards? Forty armed teachers? Four hundred armed students?
How about 45,000 trained soldiers? That’s the resident population of Fort Hood, the largest U.S. military base. And it couldn’t stop a wimpy shrink from killing 13 and injuring dozens in 2009 (the perpetrator survived the attack, but he intended to die in it, so he counts as a suicidal mass murderer). Five years later, a second gunman at the base killed three and injured more than a dozen before killing himself. Fort Hood could presumably fend off an ambitious military onslaught. But a lethal defense is no deterrent to an attacker with a death wish. Indeed, it’s a massive added attraction.
Let’s now consider a scenario from the outer extreme. Technology is accelerating at unprecedented speeds and correlates strongly with the deadliness of suicidal mass murder. So could humanity ever actually be canceled — as in, every last one of us killed — by a deliberate act of destructive lunacy?
For almost all of history, our own extinction was rarely much of a topic beyond the realms of prophecy, chicanery, or science fiction. Humans were widespread and resilient, catastrophes were local, and weapons were mostly interpersonal devices.
The Cold War changed all that. We survived it for many reasons, including a lucky distribution of unusually level heads (Google “Stanislav Petrov” or “Vasili Arkhipov,” for instance). Another factor was that only a rotating cast of two-ish people was fully empowered to annihilate us all. Despite their many faults, none of these folks were suicidal desperados. Human extinction then faded as an issue after the USSR’s dying whimper.
Around the turn of the millennium, two thinkers returned it to the agenda. Bill Joy, co-founder of Sun Microsystems, wrote a resounding Wired cover story titled “Why the Future Doesn’t Need Us.” Later, Martin Rees, the U.K.’s Astronomer Royal, released Our Final Hour — the masterpiece of chilling speculation I cited in part one of this essay series last week (when I also posted a somewhat related podcast interview with Lord Rees).
Their nuanced works (each of which still merits a complete reading) distill to this: certain technologies on the intermediate horizon could one day present perils far deeper than a mere nuclear winter. As a top technologist and scientist, respectively, Joy and Rees couldn’t be dismissed as shrieking Luddites. And though spooky, their thinking was also thrillingly new to most of us at the time. Years of water-cooler rehashing ensued throughout the corridors of tech and science.
Joy and Rees lay out multiple risks. Each diverges from your father’s atomic doomsday in that nuclear nations needn’t be cast members. This escalates the danger enormously — because as scary as such nations are, their numbers will remain tractable, despite nuclear proliferation.
Far less tractable is the number of large companies, which might take risky shortcuts chasing trillion-dollar breakthroughs with terrifying long-shot side effects. Even less tractable is the number of erratic startups, which might do the same over a somewhat longer span. Completely intractable is the number of suicidal murderers humanity will generate over the coming decades. And as the ability to create — or merely access — lethal technologies moves down this stack, the hazards will mount.
Much will hinge on the peculiar mix of dangers, incentives, and safety mechanisms that arise as we relentlessly develop certain exponential technologies. No person, group, or nation has a monopoly on any of this. And multiple headlong races between bitter rivals (both countries and companies) are well underway. Caution can vanish when finish lines are at stake. So this race dynamic could be the situation’s gravest aspect.
Today, artificial intelligence (A.I.) and synthetic biology (synbio) are most cited when existential risk is discussed. They may later be joined by nanotechnology, perhaps some forms of geoengineering, plus God knows what.
And to repeat: Should existential threats emerge from these domains, assembly will not require the organized might of nations. Key advances in A.I. and synbio frequently arise from brainy teams of just dozens. Their main outputs are often information and digital methods. These sorts of advances can diffuse with minimal friction — allowing small teams to perch atop mountains of shoulders, spreading the breakthrough potential even wider. Key hardware is often general-purpose gear, the specs of which improve at compounding, exponential rates. Speedy horsepower growth is thus akin to a cheap utility, available to all. Leveraging this, a team might rapidly become hundreds of times more effective without adding a single member.
The Human Genome Project and its aftermath epitomize this. It took 13 years, $3 billion, and thousands of biology’s sharpest minds to sequence a single haploid genome. A decade and a half later, lone lab techs routinely accomplish quite a bit more than this in a single day. A parallel scenario with the Manhattan Project would have put atomic bombs in countless garages and college labs by the early 1960s. Performance improvements in synbio are meanwhile accelerating, not slowing.
So: Do your best to imagine the field’s collective output over the coming decade — and then imagine that gradually becoming an easy day’s work for latter-day undergrads.
Would that be a safer world than one with thousands of sovereign nuclear powers? Perhaps so in 2028. Perhaps not in 2038. There’s no way of knowing — which is terrifying — but we can safely predict that whatever undergrads can achieve in the near future, high school kids will be still more capable shortly thereafter. Then smart eighth-graders. Then dumb fifth-graders. If that doesn’t sound absurd, please reread it, because it really should.
But it’s not absurd. We’ve seen similar things happen repeatedly, and not just in genomics. Imagine, say, the horsepower, data, and services the CIA could cram into its best mobile device in 2005. Foretelling that billions of us would soon pack orders of magnitude more than that would have seemed delusional. But here we are.
Groaning shelves of thoughtful writing cover the dangers posed by small, violent groups. I neither deny nor seek to diminish the risks presented by organized terrorism. But over longer time frames and on the vastest destructive scales, ambitiously nihilistic loners frighten me more.
In most places, they’ve always killed more people than terrorists have. They’re also much harder to detect. Even the tiniest groups exchange messages, gather and disperse to some extent, and engage in recruiting. All this leaves physical and virtual traces, and governments have gotten quite good at detecting them in the post-9/11 era. Of course, counterterrorism will always be an imperfect craft. But it has much more to latch onto when conspiracies are hatched than when deadly plots form within isolated brains.
Every last one of us also constitutes a desirable target to the most bloodthirsty misanthropes. Such nihilism propelled the purely indiscriminate slaughter of Andreas Lubitz, plus that of the authors of Newtown, Las Vegas, and so forth. Certain terror groups might seem equally interested in arbitrary butchery. But at bottom, they’re animated by an urge to eliminate these people and not those people. So while they may want to kill huge numbers, they don’t want to kill off everybody.
This matters when we consider the most lethal technologies that might plausibly lurk in our intermediate future. Anything posing an existential threat is innately indiscriminate, whereas killers driven by religious, racial, or nationalist hatred are all about discrimination. I could therefore imagine the hyper-empowered Andreas Lubitz of tomorrow flipping a “kill everybody” switch. But I do not believe al-Qaeda’s bosses would do the same.
This doesn’t mean organized groups don’t pose awful risks. They obviously do, and they will present ever greater risks as this fraught century unfolds. Some of these groups won’t harbor any ill will toward anyone. But this will make them even more dangerous, because they’ll seem so benign — above all, to themselves. The third article in this series will consider this danger next week, along with the risky team sport of building a superintelligence.