Is artificial intelligence really an “existential threat” to humanity? Some very smart people, among them Elon Musk, Stephen Hawking, Bill Gates, Sam Altman and particularly Oxford professor Nick Bostrom, really think so.
Bostrom wrote Superintelligence, the book that got Musk so tweetably exercised. Technologists are supposed to be rationalists, and yet Musk waxed supernatural about the threat of renegade AI. “With artificial intelligence we are summoning the demon,” he told an audience at MIT last October.
Demons are not quite what Bostrom has in mind. He is thinking more about risks and probabilities. He defined existential risk in a paper from 2002 as “One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” From his perspective, these risks are worth mitigating even if their probability is very low, because the potential costs are so high. The causes of such a catastrophe could be purposeful or accidental, and could come from many realms, including biotechnology and nanotechnology as well as artificial intelligence.
Why then all of the harping on AI? Certainly we are in a great upswing of machine intelligence capabilities and valuations. Deep learning, as I observed at the beginning of the year, is now baked into many of our leading tech companies. But after events like the crash of Germanwings 9525, machine intelligence seems more likely to protect against misguided human actions than to threaten the existence of all humanity.
My own research into this fear has led me to conclude that it is about something far more mundane and predictable: regulation. The concern most central to a select sampling of the language of tech leaders who have weighed in on AI recently is the need for regulation to avoid unintended consequences. Musk said as much right after he identified AI as “our biggest existential threat.” “There should be some regulatory oversight, maybe at the national and international level,” he warned, “just to make sure that we don’t do something very foolish.” Compared to regulation, though, fear of an existential threat to humanity is the least correlated concept on this list. See my forthcoming companion post, “Graph Theory Helps To Decode The AI Fears Of Tech Leaders,” in which I explain my methodology and give credit to its source (hat tip to Dan Shipper). For the present post, I will explore the implications of my findings.
To start, let me affirm that I am one of those people who has been scratching his head about the fuss. There seem to be so many more pressing problems in the world than the flavor of the coming singularity. At the same time, I have tremendous respect for what other people may know that I don’t. When those other people are geniuses like Hawking, visionaries like Musk, humanitarians like Gates, or tech insiders like Altman, it should give you pause. Maybe this is all much farther along than we mortals know?
On the other hand, none of the people who have most visibly raised the alarm are actually building AI themselves. Investing in it, yes, but not making it. So I considered it a good sign when AI pioneer Jeff Hawkins of Numenta wrote a knowing rejoinder on Re/code with the title, “The Terminator Is Not Coming. The Future Will Thank Us.” Hawkins, who has been working on the problem of intelligence for 30 years, reassures by saying, “I do not share these worries because they are based on three misconceptions.” He then goes on to explain that self-replication and self-awareness do not logically follow from intelligence and that even artificial intelligence must obey the laws of physics.
Central to Hawkins’ critique of the fearful is his understanding of what intelligence is. I interviewed Hawkins for this story and then paused to read his book On Intelligence (from 2005). I will make a leap here and say that, contrary to the Cartesian Cogito, being does not follow from thinking. I think, and I am. The two are coincident but not causal. To Hawkins, intelligence is what the neocortex does and (quoting his book) “consciousness is simply what it feels like to have a cortex.” But building intelligent machines, which is what Hawkins aims to do, does not imply that those machines will be conscious, sentient or able to self-replicate.
Sentient machines may one day be possible, and the scenarios for how to contain, constrain or incentivize these “beings” are the core of Bostrom’s concern in Superintelligence. But the clarity that Hawkins brings to the debate is his assertion that creating machines with a sense of self is a vastly different problem from making machines that are intelligent. And the latter is AI’s current mission. Bostrom’s “boxing” problem is only critical if intelligent machines have the motivation to expand their powers.
Another moderating voice comes from Musk’s longtime friend and PayPal mafioso, Peter Thiel. In an interview at Web Summit in Dublin last year, Thiel discussed the political aspect of superintelligence, comparing it to extraterrestrials landing on Earth:
“If aliens landed on this planet tomorrow, the first questions would not be what does it mean for the economy. The first question would be political. Are they friendly? Are they unfriendly? And I think the political question about AI is an important one, and I think our intuitions about that one are very underdeveloped.”
Not only are our intuitions about the threat underdeveloped, but Thiel also finds them remote. “I still think it’s very far in the future. My guess would be that it is maybe a century or more away. So although I feel it is a very important question, it’s one that I don’t worry about that much.” Why then all the worry from tech leaders and even anti-robot protesters at the recent SXSW? Thiel places the blame squarely on the public’s general fear and misunderstanding of technology. Thiel told the audience at Web Summit, “One of my contrarian beliefs is that we are not actually living in a scientific or technological age. And those of us that are working in science and technology are the counterculture in our society today.”
Look at Hollywood science-fiction movies, Thiel suggests: “they all portray technology that doesn’t work, that kills people, that’s dystopian, that destroys the world.” In Kahneman’s parlance, these images of robots and renegade AIs are readily “available” to our imaginations. The challenge for the technology industry, then, is to make visions like Hawkins’ intelligent machines more available, more vivid, and more compelling than “The Terminator.” An assist from Hollywood would be helpful as well.
AI, or more properly machine intelligence, has much in common with synthetic biology. Both are technologies with a tremendous capacity to help humanity on a grand scale. They are also both technologies that inspire hysteria in popular culture. In both cases, I am more concerned about the motivations of rogue humans who may misuse these technologies than about the rogue capabilities of the products of those technologies themselves.
A useful tonic to the irrational fear of technology is the reminder that everything in the world, man-made or otherwise, is subject to the laws of physics. As Adrian Bejan points out in his book Design in Nature, the fact that unifies the natural and technological worlds is that they are all flow systems. Given freedom, these systems evolve over time to increase the flow of what flows through them. Electrons, populations, money, all follow the same principle. A corollary to this in Bejan’s work is the notion that all growth follows s-curves. So, unlike a purely mathematical model in which processes can evolve infinitely, the physical model imposes constraints.
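To see the contrast in a small example (my own illustration, not taken from Bejan’s book): an unconstrained exponential process grows without bound, while the same growth rate applied against a finite carrying capacity bends into an S-curve.

```python
# Illustrative sketch (not from Bejan's book): exponential vs. logistic growth.
# The logistic term (1 - n/K) is the "physical constraint" that produces an S-curve.
r, K, steps = 0.1, 1000.0, 120     # growth rate, carrying capacity, time steps
exp_n, log_n = 1.0, 1.0
for t in range(steps):
    exp_n += r * exp_n                       # no constraint: grows without limit
    log_n += r * log_n * (1 - log_n / K)     # constrained: saturates near K

print(f"exponential after {steps} steps: {exp_n:,.0f}")
print(f"logistic after {steps} steps:    {log_n:,.0f} (capacity K = {K:,.0f})")
```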
This confrontation between math and science is exactly what I see playing out in the current debate about AI. Although deep learning has its roots in the biologically inspired neural networks of the 1990s, its current young practitioners take a more mathematical approach. If you did not live through the promise, and then the disappointment, of the neural-net era, you might believe in the infinite potential of the current techniques.
One of the most specific critics of Hawkins’ results thus far is Yann LeCun, one of the original neural-net researchers, who has become one of the most important figures in the deep learning movement and now heads AI at Facebook. In an AMA on Reddit last May, LeCun cautioned against underestimating the difficulty Hawkins faces “to instantiate these concepts and reduce them to practice.” Of Numenta’s Hierarchical Temporal Memory (HTM) algorithms, he asked: are they “minimizing an objective function”?
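For readers outside the field, “minimizing an objective function” is the basic pattern behind the deep learning LeCun represents: define a numerical measure of how wrong the model is, then adjust its parameters to push that number down. The sketch below is a generic, hypothetical illustration of that pattern in Python; it is not Numenta’s HTM code or anything from Facebook’s systems.

```python
import numpy as np

# Generic illustration of minimizing an objective function (hypothetical example,
# not Numenta's HTM or any production system): fit a small linear model by
# following the gradient of a mean-squared-error loss downhill.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                  # inputs
true_w = np.array([2.0, -1.0, 0.5])            # the "answer" we hope to recover
y = X @ true_w + 0.1 * rng.normal(size=200)    # noisy targets

w = np.zeros(3)        # parameters to learn
lr = 0.1               # learning rate
for step in range(500):
    err = X @ w - y
    objective = np.mean(err ** 2)        # the objective function being minimized
    grad = 2 * X.T @ err / len(y)        # its gradient with respect to the parameters
    w -= lr * grad                       # gradient-descent update

print("learned weights:", np.round(w, 2))      # should land near [2.0, -1.0, 0.5]
```

LeCun’s question, in effect, is whether Numenta’s learning rule can be cast in this form at all.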
At this point, the deep learning developed by Google’s Geoffrey Hinton, Baidu’s Andrew Ng, Facebook’s LeCun, Microsoft and others has produced the greatest accuracy for solving specific classification tasks (identifying pictures of cats on ImageNet, for instance). The way these systems have improved through scaling suggests continued progress from a mathematical perspective. But these methods have proven brittle: it is easy to minutely alter the pixels of an image, or insert random words into text, and demonstrate that these systems do not “understand” what they are processing (the toy sketch below makes this concrete). LeCun himself discussed Musk’s “existential threat” comments at the Data Driven Conference last December, where he expressed a balanced and measured view of the state of contemporary AI. “There are basic things we haven’t figured out. There are a lot of technical obstacles we will encounter as we move forward,” he said. “Deep learning gives us a lot of hope that we are moving forward. We don’t know where the next brick wall is.… There’s a lot of opportunity to make progress. We don’t know where the limit is yet.”
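To make that brittleness concrete, here is a minimal sketch. It is my own hypothetical illustration, not code from any real image classifier or published attack: a toy linear classifier on a fake “image” is flipped to the opposite label by nudging every pixel a tiny amount in the direction the model is most sensitive to.

```python
import numpy as np

# Toy illustration (hypothetical, not a real classifier): a linear model scoring
# a 100x100 "image." A tiny, carefully chosen per-pixel nudge flips its label.
rng = np.random.default_rng(0)
d = 100 * 100
w = rng.normal(size=d)            # the classifier's weights
x = rng.normal(size=d)            # an "image," flattened into pixels

score = w @ x
label = np.sign(score)            # the model's prediction: +1 or -1

# Nudge every pixel slightly in the worst-case direction for this model.
eps = 1.5 * abs(score) / np.abs(w).sum()   # typically a percent or two of a pixel's scale
x_adv = x - label * eps * np.sign(w)

print("per-pixel change:", round(float(eps), 4))
print("original label:", label, "-> adversarial label:", np.sign(w @ x_adv))
```

The pixels barely change, yet the prediction reverses, which is the sense in which such systems can be accurate without “understanding” their inputs.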
LeCun and Hawkins are both experienced soldiers of the AI wars, and their enthusiasms are balanced with cautions. There have been many fads in AI, and many of the current accomplishments may one day seem fads themselves. IBM just announced that it has formed a 100-person team to test Numenta’s algorithms and attempt to build hardware that can support the massive parallelism required to instantiate Hawkins’ model of how the cortex works. LeCun agrees that Hawkins “has the right intuition and the right philosophy.” LeCun also lists “unsupervised representation learning, particularly prediction-based methods for temporal/sequential signals,” which is what Numenta’s method specializes in, among the promising areas for young engineers to explore. Time-based data will be the hallmark of the coming internet of things and will create different demands than existing forms of big data. That will place a premium on models that predict dynamically over those that store larger and larger troves of static data. As Hawkins told me, “the future may be more like Snapchat than Facebook.”
This then brings us back to the issue of regulation. If machine intelligence is essentially predictive of and responsive to external data streams, what might there be to regulate? The answer, at this point, is not the AIs themselves but the uses to which humans put data. That is as it should be, but it is an understandable pain point for tech companies, which complain that regulation inhibits innovation. The intersection of that Venn diagram, where regulation and innovation can coexist, is where these innovations clearly benefit the most people.
I don’t think Musk or the rest are irresponsibly pointing to the threat of AI, but this gesture has served to take attention away from the effect that humans at tech companies are having on the rest of the planet’s humans. As I have commented about biotechnology, the best way to lower public fear of new technologies is to demonstrate clear benefits for a lot of people. The unintended side effect of fomenting fear of AI is that governments may react to the pressures of a sensationalized public with regulations that prove counterproductive. In many ways, I think the Future of Life Institute, to which Elon Musk has donated $10 million to research the threats of AI, is an attempt by technologists and scientists to regulate themselves before governments do it for them, and do it less well.
Image credit: Doug Bowman | Flickr
Via Forbes