Many people are worried about artificial intelligence and what will happen if machines become smarter than humans. Even Bill Gates has said he is concerned about the decisions that future machines will make once they outsmart us.
The age of artificial intelligence may be very nearly upon us, which is, on one hand, great news. Machines have long helped humans do things better, faster, more safely, and more affordably.
Except the rise of artificial intelligence is also, leaders in technology continue to remind us, cause for some concern. Failing to take seriously the potential for a world in which smart machines run amok could make artificial intelligence more dangerous to humanity than nuclear weapons, Tesla CEO Elon Musk has said.
Bill Gates told Reddit this week that he agrees with Musk. “I am in the camp that is concerned about super intelligence,” Gates wrote. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
That’s a common refrain: The rise of machines will be okay as long as we manage it well. But what does managing it well even look like? One of the keys may be to build machines that are able to reflect on their own behaviors (and the behaviors of other artificially intelligent machines), and to understand their connection to the physical world. Because today’s models of artificial intelligence embody a kind of Cartesian dualism: the computer mind sees itself as totally separate from the computer body.
“The traditional separation of the agent from its environment seems even less attractive when one considers… it may become better than any human at the task of making itself even smarter, leading to an ‘intelligence explosion’ and leaving human intelligence far behind,” wrote Benja Fallenstein and Nate Soares in a paper for the Machine Intelligence Research Institute. “It seems plausible that an [artificial general intelligence] undergoing an intelligence explosion may eventually want to adopt an architecture radically different from its initial one, such as one distributed over many different computers, where no single entity fulfills the agent’s role from the traditional framework.”
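To make that “traditional separation of the agent from its environment” concrete, here is a minimal, purely illustrative sketch of the standard agent-environment loop (the class and method names are hypothetical, not drawn from the paper): the agent receives observations and returns actions, and nothing in the setup accounts for the fact that the agent’s own code and hardware are part of the world it is acting on.

```python
# Illustrative sketch of the "traditional framework" only: the agent is modeled
# as a box outside its environment. It receives observations and returns
# actions, and its own code and hardware are not part of the world it reasons
# about; that is the Cartesian dualism described above. Names are hypothetical.

class Environment:
    """A toy world the agent acts on but is never modeled as part of."""

    def __init__(self) -> None:
        self.state = 0

    def step(self, action: int) -> int:
        # The world changes and returns an observation; the agent's own
        # machinery is assumed to sit entirely outside this state.
        self.state += action
        return self.state


class CartesianAgent:
    """Maps observations to actions; has no model of its own 'body'."""

    def act(self, observation: int) -> int:
        # A real agent would plan here; this stub simply reacts.
        return 1 if observation < 10 else 0


if __name__ == "__main__":
    env, agent = Environment(), CartesianAgent()
    observation = env.step(0)
    for _ in range(5):
        observation = env.step(agent.act(observation))
    print(observation)  # the loop never asks where the agent itself "lives"
```

The self-improving, distributed agent Fallenstein and Soares describe would break this tidy picture: its actions could rewrite the agent code itself, so the clean boundary between the two classes above would no longer exist.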
These questions of computational reasoning represent a complex problem without a clear solution: How do we build machines that will make the world better, even when they start running themselves? And, perhaps the bigger question, what does a better world actually look like? Because if we teach machines to reflect on their actions based on today’s human value systems, those value systems may themselves soon be outdated. Here’s how MIRI researchers Luke Muehlhauser and Nick Bostrom explained it in a paper last year:
Suppose that the ancient Greeks had been the ones to face the transition from human to machine control, and they coded their own values as the machines’ final goal. From our perspective, this would have resulted in tragedy, for we tend to believe we have seen moral progress since the Ancient Greeks (e.g. the prohibition of slavery). But presumably we are still far from perfection.
We therefore need to allow for continued moral progress. One proposed solution is to give machines an algorithm for figuring out what our values would be if we knew more, were wiser, were more the people we wished to be, and so on. Philosophers have wrestled with this approach to the theory of values for decades, and it may be a productive solution for machine ethics.
Experts say we are running out of time to figure out the answers to these questions. Again, Bill Gates: “There will be more progress in the next 30 years than ever. Even in the next 10 problems like vision and speech understanding and translation will be very good. Mechanical robot tasks like picking fruit or moving a hospital patient will be solved. Once computers/robots get to a level of capability where seeing and moving is easy for them then they will be used very extensively.”
Fruit-picking robots, though, do not a humanity-replacing singularity make. Or not right away, anyway. Humans are actually (go figure) pretty bad at predicting AI timelines, according to MIRI. For decades, people have been predicting that the rise of artificial intelligence is just around the corner, always 15 to 25 years away.
“Expert predictions are not only indistinguishable from non-expert predictions, they are also indistinguishable from past failed predictions,” wrote the authors of a report on failed predictions. “Hence it is not unlikely that recent predictions are suffering from the same biases and errors as their predecessors.”
Via The Atlantic