In the future it may become possible to create artificial human brains that emulate a real human.

Imagine that in hundreds of years in the future it becomes possible to create an exact replica of any human brain on Earth. How should the copy be treated? Should scientists be allowed to experiment on it and, ultimately, put it down if it is no longer needed? After all, it is merely artificial intelligence (AI).



Or should it be given the same rights, both legally and socially, as the human it was copied from? These are questions tackled by Dr Anders Sandberg of the Future of Humanity Institute at Oxford University – and he warns there are no easy answers.

In the future it is plausible, though not certain, that we will be able to emulate the brain of any animal to a level at which the emulation is indistinguishable from the original.

This would create AI that mimics its real-life counterpart – but with its creation, what can we ethically do with it?

Could we experiment on emulated lab rats? How would we deal with copies of humans? And how could we prepare for computer-driven beings more intelligent than ourselves?

These are all questions that Anders Sandberg discusses in his paper Ethics of brain emulations.

In the paper he surmises that whole brain emulation (WBE) will be a possibility in the future.

The basic idea of it is that a brain could be scanned in enough detail that it can be fully recreated with a software model, creating a new ‘brain’ that is essentially the same as the original.

There is considerable research currently being done into brain emulation.

Indeed, most forms of neuroscience are furthering our understanding of the brain and bringing us closer to being able to make a ‘copy’.

And with 3D printing’s growing capabilities it is even becoming possible to print parts of human organs – although printing a brain is still a long way off.

As computers grow more powerful and we understand the brain more and more it will come ever closer to being a reality.

Sandberg considers what social status we would give to a robot that had such an emulated brain.

‘If emulations of human brains work well enough to exhibit human-like behaviour rather than mere human-like neuroscience, legal personhood is likely to eventually follow,’ he writes.

If an emulated brain is identical to the original, who gets to keep the ‘identity’ of the person?

This was a fictional scenario tackled in the 2006 movie The Prestige, where Hugh Jackman’s character Robert Angier, a magician, discovers a machine that can make an exact clone of himself.

For the purposes of a magic trick, Angier continues to clone himself and appear in a different location, much to the amazement of his audience.

But behind the scenes the dark reality is exposed, with Angier having to kill his clone after each act to prevent multiple copies of himself wandering around.

The dilemma posed is that the viewers, and indeed Angier himself, are never sure who is truly the original – is it the one who steps into the machine, or the one who steps out?

This is not too dissimilar from the same quandary posed by Sandberg – if we make an exact copy of a human brain, down to every cell, both would have the same memories, the same emotions and the same mentality.

Should both be allowed to exist, or should one take precedence over the other? Sandberg says that the safest strategy will be to treat an emulation in the same manner as a sentient being.

This is something known as the principle of assuming the most (PAM) – ‘assume that any emulated system could have the same mental properties as the original system and treat it correspondingly.’

In addition, he postulates what would happen if the emulation were capable of being superior to the original.

If brains can be created that not only mimic but are superior to human brains, it poses a potential for a dystopian future envisaged in various science-fiction works.

‘It has been argued that successful artificial intelligence is potentially extremely dangerous because it would have radical potential for self-improvement, yet possibly deeply flawed goals or motivation systems,’ Sandberg writes.

‘If intelligence is defined as the ability to achieve one’s goals in general environments, then superintelligent systems would be significantly better than humans at achieving their goals – even at the expense of human goals.’

One solution to this is to imbue any AI with human limitations and forms of control, such as knowledge of what is right and wrong and of what is socially and morally acceptable.

This might, though, stunt the growth and progress of AI research and development.

Of course, long before we can fully recreate a human brain it is likely we will be able to at least partially emulate a brain to create a ‘limited’ being.

For example, perhaps a brain could be created that has the mental capacity of a baby and never develops further. But the level at which emulations should begin to be given equal social and moral status to humans is questionable.

‘At one extreme, it has been suggested that even thermostats have simple conscious states,’ Sandberg writes.

Or, more controversial still, he considers a piece of research by German philosopher Dr Thomas Metzinger, who in 2003 issued a strong warning about artificial intelligence.

‘What today’s ethics committees don’t see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like mentally retarded infants,’ Sandberg quotes Metzinger as saying.

‘They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits.

‘In addition, they would have no political lobby – no representatives in any ethics committee.’

This in itself poses a further dilemma: when, if ever, is it ethical to deactivate an emulation?

Sandberg likens the scenario to abortion in the modern day and the battle between people who are pro-choice and those who are pro-life.

If an emulation were run for just a millisecond before being deactivated, some might argue that this would constitute a ‘murder’ of sorts, destroying a life as soon as it had been created.

One further interesting point Sandberg makes is that of ownership.

He cites a case in California in 1998 where it was ruled that a patient ‘did not have property rights to cells extracted from his body and turned into lucrative products.’

By that same token, who would be the owner of an emulated human being? Would it be the human it was created from, the company that performed the procedure or the emulation itself?

And, if the latter, what if the emulation ran out of funds to pay for its upkeep?

But it’s not all doom and gloom regarding the future of artificial intelligence.

Sandberg also cites some benefits that could arise from creating intelligent beings comparable or superior to humans.

For example, if a future pandemic were to arise that was fatal for humans, artificial robots would likely be immune.

In addition, ‘brain emulations are ideally suited for colonising space and many other environments where biological humans require extensive life support,’ says Sandberg.

‘One of the largest obstacles to space colonisation is the enormous cost in time, energy and reaction mass needed for space travel: emulation technology would reduce this.’

This future scenario – with one set of beings, biological humans, susceptible to the dangers of their environment and another set immune – would essentially be like ‘splitting the human species into two’ to ensure the survival of the human race, in one form or another.

And brain emulation also hints at humans one day being immortal, as they could upload their entire brain into a machine and continue living in an artificial body.

Sandberg concludes that, although all of the points discussed so far may be pure speculation, it is worthwhile to prepare now for a future in which emulation is possible, so that we are ready for any problems it brings.

‘In many cases, the steps are simply to gather better information and have a few preliminary guidelines ready if the future arrives surprisingly early,’ he writes.

‘While we have little information in the present, we have great leverage over the future.

‘When the future arrives we may know far more, but we will have less ability to change it.’

Via Daily Mail