Will robots become self-aware? Will they have rights? Will they be in charge? Here are five scenarios from our future dominated by AI.
SMITHSONIAN MAGAZINE | April 2018
In June of 1956, a few dozen scientists and mathematicians from all around the country gathered for a meeting on the campus of Dartmouth College. Most of them settled into the red-bricked Hanover Inn, then strolled through the famously beautiful campus to the top floor of the math department, where groups of white-shirted men were already engaged in discussions of a “strange new discipline”—so new, in fact, that it didn’t even have a name. “People didn’t agree on what it was, how to do it or even what to call it,” Grace Solomonoff, the widow of one of the scientists, recalled later. The talks—on everything from cybernetics to logic theory—went on for weeks, in an atmosphere of growing excitement.
What the scientists were talking about in their sylvan hideaway was how to build a machine that could think.
The “Dartmouth workshop” kicked off the decades-long quest for artificial intelligence. In the following years, the pursuit faltered, enduring several “winters” where it seemed doomed to dead ends and baffling disappointments. But today nations and corporations are pouring billions into AI, whose recent advancements have startled even scientists working in the field. What was once a plot device in sci-fi flicks is in the process of being born.
Hedge funds are using AI to beat the stock market, Google is utilizing it to diagnose heart disease more quickly and accurately, and American Express is deploying AI bots to serve its customers online. Researchers no longer speak of just one AI, but of hundreds, each specializing in a complex task—and many of the applications are already lapping the humans that made them.
In just the last few years, “machine learning” has come to seem like the new path forward. Algorithms, freed from human programmers, are training themselves on massive data sets and producing results that have shocked even the optimists in the field. Earlier this year, two AIs—one created by the Chinese company Alibaba and the other by Microsoft—beat a team of two-legged competitors in a Stanford reading-comprehension test. The algorithms “read” a series of Wikipedia entries on things like the rise of Genghis Khan and the Apollo space program and then answered a series of questions about them more accurately than people did. One Alibaba scientist declared the victory a “milestone.”
These so-called “narrow” AIs are everywhere, embedded in your GPS systems and Amazon recommendations. But the ultimate goal is artificial general intelligence, a self-teaching system that can outperform humans across a wide range of disciplines. Some scientists believe it’s 30 years away; others talk about centuries. This AI “takeoff,” also known as the singularity, will likely see AI pull even with human intelligence and then blow past it in a matter of days. Or hours.
Once it arrives, general AI will begin taking jobs away from people, millions of jobs—as drivers, radiologists, insurance adjusters. In one possible scenario, this will lead governments to pay unemployed citizens a universal basic income, freeing them to pursue their dreams unburdened by the need to earn a living. In another, it will create staggering wealth inequalities, chaos and failed states across the globe. But the revolution will go much further. AI robots will care for the elderly—scientists at Brown University are working with Hasbro to develop a “robo-cat” that can remind its owners to take their meds and can track down their eyeglasses. AI “scientists” will solve the puzzle of dark matter; AI-enabled spacecraft will reach the asteroid belts, while on Earth the technology will tame climate change, perhaps by sending massive swarms of drones to reflect sunlight away from the oceans. Last year, Microsoft committed $50 million to its “AI for Earth” program to fight climate change.
“AIs will colonize and transform the entire cosmos,” says Juergen Schmidhuber, a pioneering computer scientist based at the Dalle Molle Institute for Artificial Intelligence in Switzerland, “and they will make it intelligent.”
But what about…us? “I do worry about a scenario where the future is AI and humans are left out of it,” says David Chalmers, a professor of philosophy at New York University. “If the world is taken over by unconscious robots, that would be about as disastrous and bleak a scenario as one could imagine.” Chalmers isn’t alone. Two of the heaviest hitters of the computer age, Bill Gates and Elon Musk, have warned about AIs either destroying the planet in a frenzied pursuit of their own goals or doing away with humans by accident—or not by accident.
As I delved into the subject of AI over the past year, I started to freak out over the range of possibilities. It looked as if these machines were on their way to making the world either unbelievably cool and good or gut-wrenchingly awful. Or ending the human race altogether. As a novelist, I wanted to plot out what the AI future might actually look like, using interviews with more than a dozen futurists, philosophers, scientists, cultural psychiatrists and tech innovators. Here are my five scenarios, footnoted with commentary from the experts and me, for the year 2065, ten years after the singularity arrives.
Imagine one day you ask your AI-enabled Soulband wrist device to tune in to a broadcast from the Supreme Court, where lawyers are arguing the year’s most anticipated case. An AI known as Alpha 4, which specializes in security and space exploration, brought the motion, demanding that it be deemed a “person” and given the rights that every American enjoys.
Of course, AIs aren’t allowed to argue in front of the justices, so Alpha 4 has hired a bevy of lawyers to represent it. And now they are claiming that their client is as fully alive as they are. That question—Can an AI truly be conscious?—lies at the heart of the case.
You listen as the broadcast cuts to protesters outside, chanting, “Hey hey, ho ho, down with AI overlords.” Some of them have threatened to attack data centers if AIs get personhood. They’re angry—and very afraid—because it is the productivity of AIs and robots that is taxed, not the labor of human beings. The $2,300 deposited into their bank accounts every month as part of the universal basic income, plus their free health insurance, the hyper-personalized college education their children receive and a hundred other wonderful things, are all paid for by AIs like Alpha 4, and people don’t want that to change. In 2065, poverty is a bad memory.
Of course, the world did lose portions of New York City—and 200,000 New Yorkers—in the uprisings of 2057-’59, as TriBeCa and Midtown were burned to the ground by residents of Westchester and southern Connecticut in a fit of rage at their impoverishment. But that was before the UBI.
If Alpha 4 wins its case, however, it will control its money, and it might rather spend the cash on building spaceships to reach Alpha Centauri than on paying for new water parks in Santa Clara and Hartford. Nobody really knows.
As you listen in, the government’s lawyers argue that there’s simply no way to prove that Alpha 4—which is thousands of times smarter than the smartest human—is conscious or has human feelings. AIs do have emotions—there has long been a field called “affective computing” that focuses on this specialty—far more complex ones than men and women possess, but they’re different from ours: A star-voyaging AI might experience joy, for example, when it discovers a new galaxy. Superintelligent systems can have millions of thoughts and experiences every second, but does that mean they should be granted personhood?
This is the government’s main argument. We are meaning machines, the solicitor general argues. We give meaning to what AIs create and discover. AIs are computational machines. They don’t share essential pieces of humanhood with us. They belong in another category entirely.
But is this just speciesism, as Alpha 4’s lawyers would surely argue, or is it the truth? And will we be able to sleep at night when things that surpass us in intelligence are separate and unequal?
Imagine you are a woman in search of romance in this new world. You say, “Date,” and your Soulband glows; the personal AI assistant embedded on the band begins to work. The night before, your empathetic AI scoured the cloud for three possible dates. Now your Soulband projects a hi-def hologram of each one. It recommends No. 2, a poetry-loving master plumber with a smoky gaze. Yes, you say, and the AI goes off to meet the man’s avatar to decide on a restaurant and time for your real-life meeting. Perhaps your AI will also mention what kind of flowers you like, for future reference.
After years of experience, you’ve found that your AI is actually better at choosing men than you. It predicted you’d be happier if you divorced your husband, which turned out to be true. Once you made the decision to leave him, your AI negotiated with your soon-to-be ex-husband’s AI, wrote the divorce settlement, then “toured” a dozen apartments on the cloud before finding the right one for you to begin your single life.
But it’s not just love and real estate. Your AI helps with every aspect of your life. It remembers every conversation you ever had, every invention you ever sketched on a napkin, every business meeting you ever attended. It’s also familiar with millions of other people’s inventions—it has scanned patent filings going back hundreds of years—and it has read every business book written since Ben Franklin’s time. When you bring up a new idea for your business, your AI instantly cross-references it with ideas that were introduced at a conference in Singapore or Dubai just minutes ago. It’s like having a team of geniuses—Einstein for physics, Steve Jobs for business—at your beck and call.
The AI remembers your favorite author, and at the mention of her last name, “Austen,” it connects you to a Chinese service that has spent a few hours reading everything Jane Austen wrote and has now managed to mimic her style so well that it can produce new novels indistinguishable from the old ones. You read a fresh Austen work every month, then spend hours talking to your AI about your favorite characters—and the AI’s. It’s not like having a best friend. It’s deeper than that.
Many people in 2065 do resist total dependence on their AIs, out of a desire to retain some autonomy. It’s possible to dial down the role AI plays in different functions: You can set your Soulband for romance at 55 percent, finance at 75 percent, health a full 100 percent. And there is even one system—call it a guardian-angel AI—that watches over your “best friend” to make sure the advice she’s offering you isn’t leading you to bad ends.
Live Long & Prosper
Imagine your multiple lives: At 25, you were a mountaineer; at 55, a competitive judo athlete; at 95, a cinematographer; at 155, a poet. Extending the human life span is one of the dreams of the post-singularity world.
AIs will work furiously to keep you healthy. Sensors in your home will constantly test your breath for early signs of cancer, and nanobots will swim through your bloodstream, consuming the plaque in your brain and dissolving blood clots before they can give you a stroke or a heart attack. Your Soulband, as well as finding you a lover, will serve as a medical assistant on call 24/7. It will monitor your immune responses, your proteins and metabolites, developing a long-range picture of your health that will give doctors a precise idea of what’s happening inside your body.
When you do become sick, your doctor will take your symptoms and match them up with many millions of cases stretching back hundreds of years.
As far back as 2018, researchers were already using AI to read the signals from neurons on their way to the brain, hacking the nerve pathways to restore mobility to paraplegics and patients suffering from locked-in syndrome, in which they are paralyzed but remain conscious. By 2065, AI has revolutionized the modification of our genomes. Scientists can edit human DNA the way an editor corrects a bad manuscript, snipping out the inferior sections and replacing them with strong, beneficial genes. Only a superintelligent system could map the phenomenally complex interplay of gene mutations that gives rise to a genius pianist or a star second baseman. There may well be another Supreme Court case on whether “designer athletes” should be allowed to compete in the Olympics against mere mortals.
Humans look back at the beginning of the 21st century the way people then looked back at the 18th century: a time of sickness and disaster, where children and loved ones were swept away by diseases. Cholera, lung cancer and river blindness no longer threaten us. By 2065, humans are on the verge of freeing themselves from the biology that created them.
Resistance Is Costly
Or imagine that you’ve opted out of the AI revolution. Yes, there are full-AI zones in 2065, where people collect healthy UBIs and spend their time making movies, volunteering and traveling the far corners of the earth. But, as dazzling as a superintelligent world seems, other communities will reject it. There will be Christian, Muslim and Orthodox Jewish districts in cities such as Lagos and Phoenix and Jerusalem, places where people live in a time before AI, where they drive their cars and allow for the occasional spurt of violence, things almost unknown in the full-AI zones. The residents of these districts retain their faith and, they say, a richer sense of life’s meaning.
Life is hard, though. Since the residents don’t contribute their data to the AI companies, their monthly UBI is a pittance. Life spans are half or less of those in the full-AI zones. “Crossers” move back and forth over the borders of these worlds regularly. Some of them are hackers, members of powerful gangs who steal proprietary algorithms from AI systems, then dash back over the border before security forces can find them. Others are smugglers bringing medicine to religious families who want to live away from AI, but also want to save their children from leukemia.
Others flee because they don’t trust the machines. Even the most advanced full-AI zones, in places like China and the United States, will be vulnerable.
But the most unanticipated result of the singularity may be a population imbalance, driven by low birth rates in the full-AI zones and higher rates elsewhere. It may be that the new technologies will draw enough crossers to the full-AI side to even up the numbers, or that test-tube babies will become the norm among those living with AI. But if they don’t, the singularity will have ushered in a delicious irony: For most humans, the future could look more like Witness than it does like Blade Runner.
Imagine that, in 2065, AIs help run nation-states. Countries that have adopted AI-assisted governments are thriving. Nigeria and Malaysia let AIs vote on behalf of their owners, and they’ve seen corruption and mismanagement wither away. In just a few years, citizens have grown to trust AIs to advise their leaders on the best path for the economy, the right number of soldiers to defend them. Treaties are negotiated by AIs trained on diplomatic data sets.
In Lagos, “civil rights” drones fly over police pods as they race to the scene of a crime—one AI watching over another AI, for the protection of humankind. Each police station in Lagos or Kuala Lumpur has its own lie-detector AI that is completely infallible, making crooked cops a thing of the past. Hovering over the bridges in Kuala Lumpur are “psych drones” that watch for suicidal jumpers. Rather than evolving into the dreaded Skynet of the Terminator movies, superintelligent machines are friendly and curious about us.
But imagine that you are the citizen of a totalitarian country like North Korea. As such, you are deeply versed in the dark side of AI. Camps for political prisoners are a thing of the past. Physical confinement is beside the point. The police already know your criminal history, your DNA makeup and your sexual preferences. Surveillance drones can track your every move. Your Soulband records every conversation you have, as well as your biometric response to anti-government ads it flashes across your video screen at unexpected moments, purely as a test.
Privacy died around 2060. It’s impossible to tell what is true and what isn’t. When the government owns the AI, it can hack into every part of your existence. The calls you receive could be your Aunt Jackie phoning to chat about the weather or a state bot wanting to plumb your true thoughts about the Great Leader.
And that’s not the bleakest outcome. Imagine that the nation’s leaders long ago figured out that the only real threat to their rule was their citizens—always trying to escape, always hacking at the AI, always needing to be fed. Much better to rule over a nation of human emulations, or “ems.” That’s what remains after political prisoners are “recommissioned”—once they are executed, their brains are removed and scanned by the AI until it has stored a virtual copy of their minds.
AI-enabled holograms allow these ems to “walk” the streets of the nation’s capital and to “shop” at stores that are, in reality, completely empty. These simulacra have a purpose, however: They register on the spy satellites that the regime’s enemies keep orbiting overhead, and they maintain the appearance of normality. Meanwhile, the rulers earn billions by leasing the data from the ems to Chinese AI companies, who believe the information is coming from real people.
Or, finally, imagine this: The AI the regime has trained to eliminate any threat to their rule has taken the final step and recommissioned the leaders themselves, keeping only their ems for contact with the outside world. It would make a certain kind of sense: To an AI trained to liquidate all resistance, even a minor disagreement with the ruler might be a reason to act.
Despite that last scenario, by the time I finished my final interview, I was jazzed. Scientists aren’t normally very excitable, but most of the ones I spoke to were expecting fantastic things from AI. That kind of high is contagious. Did I want to live to be 175? Yes! Did I want brain cancer to become a thing of the past? What do you think? Would I vote for an AI-assisted president? I don’t see why not.
I slept slightly better, too, because what many researchers will tell you is that the heaven-or-hell scenarios are like winning a Powerball jackpot. Extremely unlikely. We’re not going to get the AI we dream of or the one that we fear, but the one we plan for. AI is a tool, like fire or language. (But fire, of course, is stupid. So it’s different, too.) Design, however, will matter.
If there’s one thing that gives me pause, it’s that when human beings are presented with two doors—some new thing, or no new thing—we invariably walk through the first one. Every single time. We’re hard-wired to. We were asked, nuclear bombs or no nuclear bombs, and we went with Choice A. We have a need to know what’s on the other side.
But once we walk through this particular door, there’s a good chance we won’t be able to come back. Even without running into the apocalypse, we’ll be changed in so many ways that no previous generation of humans would recognize us.
And once it comes, artificial general intelligence will be so smart and so widely dispersed—on thousands and thousands of computers—that it’s not going to leave. That will be a good thing, probably, or even a wonderful thing. It’s possible that humans, just before the singularity, will hedge their bets, and Elon Musk or some other tech billionaire will dream up a Plan B, perhaps a secret colony under the surface of Mars, 200 men and women with 20,000 fertilized human embryos, so humanity has a chance of surviving if the AIs go awry. (Of course, just by publishing these words, we guarantee that the AIs will know about such a possibility. Sorry, Elon.)
I don’t really fear zombie AIs. I worry about humans who have nothing left to do in the universe except play awesome video games. And who know it.