Will we have more rights or fewer rights when artificial intelligence kicks in? How about the right to have our diseases cured, the right to a full head of hair, the right to a job that matches our skills, or the right to marry our perfect mate?
In response to advances in neuroscience and technologies that alter or read brain activity, some researchers are proposing a recognition of new human rights to mental integrity. These would protect people from having their thoughts abused, hacked, or stolen. The idea of this kind of human right is a recognition that although brain-related technologies have the potential to transform our lives in many positive ways, they also have the potential to threaten personal freedom and privacy.
A large portion of brain-related technology owes its development to medical research and physical need; some diagnostic tools and treatments, for example, need to “read” brain activity. However, this area of research and development has also given birth to performance enhancers, game interfaces, and brain-computer interfaces (BCIs) that let users control external devices with their thoughts.
According to University of Basel neuroethicist Marcello Ienca and University of Zurich human rights lawyer Roberto Andorno, these advances in neuroscience and technology threaten personal freedom and privacy in new ways. The pair argues that we are not yet doing enough to protect the human brain, the last refuge of human privacy. They have therefore proposed four new human rights they hope can preserve that refuge: the rights to cognitive liberty, mental integrity, mental privacy, and psychological continuity.
“Cognitive liberty” concerns a person’s freedom to alter their mental state using brain stimulation and other techniques—and to refuse to do so. If this human right is recognized, it could, for example, make it illegal for employers to use any kind of brain stimulation techniques on employees. The right to “mental integrity” would protect against hackers who interfere with brain implants that their owners are otherwise using willingly. Hacking might take the form of sending false signals to an implant or taking control of the implant itself.
The right to “mental privacy” would guard against a person having their mind read without their consent, whatever form that takes as technology continues to improve. Under current law, you might have better luck pursuing someone who stole and publicly shared photographs or documents you took pains to keep private than someone who used a device to steal your memories or thoughts and posted them publicly.
The right to “psychological continuity” would protect people from actions that could disrupt their sense of identity, or harm their feeling of going through life as the same person. The use of electrode implantation for deep brain stimulation to control Parkinson’s symptoms and other conditions, for example, has already triggered concerns about personal identity, with some patients indicating that they no longer feel like themselves after the procedure.
“The question we asked was whether our current human rights framework was well equipped to face this new trend in neurotechnology,” Ienca told The Guardian. “The information in our brains should be entitled to special protections in this era of ever-evolving technology. When that goes, everything goes.”
Brain Technologies In Bloom
Although some may think these concerns sound like science fiction, brain-related technologies and brain-computer interfaces are being developed at an astonishing rate, and that pace is only gaining momentum. If you’re thinking the four proposed rights to the integrity of the mind sound far-fetched, consider the breakthroughs that have come to fruition within the last few years alone.
Engineers created a robot that can be controlled with brainwaves. Scientists connected monkeys using brain implants, and the monkeys learned to “communicate” at a distance using their minds. They also achieved brain synchronicity in rats, and connected two humans with EEG caps well enough to allow them to exchange yes and no answers. Early this year, people suffering from locked-in syndrome were able to communicate using BCIs.
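To make the yes/no experiments less abstract, here is a deliberately simplified sketch of the kind of signal processing behind the most basic EEG communication schemes: closing the eyes reliably boosts alpha-band (8–12 Hz) activity, so a single electrode’s band power can be thresholded into a binary answer. This is an illustration on synthetic data, not the protocol the researchers in the cited studies used; the sampling rate, threshold, and signals are all assumed for the demo.

```python
import numpy as np

FS = 256  # assumed sampling rate in Hz

def band_power(signal, fs, low, high):
    """Mean spectral power of `signal` between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return power[mask].mean()

def decode_yes_no(signal, fs=FS, threshold=10.0):
    """Classify a one-second EEG window as 'yes' or 'no' by alpha power.

    Strong alpha-band activity (as when the eyes are closed) reads as
    'yes'; weak activity reads as 'no'. The threshold is arbitrary and
    would need per-user calibration in any real system.
    """
    return "yes" if band_power(signal, fs, 8, 12) > threshold else "no"

# Synthetic demo: a strong 10 Hz rhythm plus noise versus noise alone.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
alpha_wave = 5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, FS)
noise_only = rng.normal(0, 0.5, FS)
print(decode_yes_no(alpha_wave))  # prints "yes"
print(decode_yes_no(noise_only))  # prints "no"
```

Real experiments face noisy, artifact-ridden signals and use far more robust statistics, but the core idea of mapping a measurable brain rhythm onto a discrete message is the same.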
While BCI technology isn’t perfect yet, it’s progressing fast, and humans have already used BCIs to control some amazing things. Researchers in Korea have used BCIs to control the movement of turtles by triggering their instinctive escape behavior. Earlier this year, a quadriplegic man used a BCI paired with Functional Electrical Stimulation (FES) technology to “think” his arm into moving again. Facebook is already working on its own BCI, which it says will allow people to “think” to each other without typing or speaking. And Elon Musk’s new venture, Neuralink, is pursuing what may be the most ambitious BCI application yet: a third layer of the human mind that would merge human intelligence with AI.
Brain implant technologies are also advancing rapidly: a Harvard team is working on implants whose effectiveness is not degraded by scar tissue, and may soon be using them to restore sight to the blind. Other researchers are working to create glassy carbon electrodes that are well suited to BCIs and may help paralyzed people become mobile again. MIT researchers have developed ultrafine fibers, approximately 200 micrometers in diameter, that are as flexible as brain tissue and can carry chemical, electrical, or optical signals.
Researchers are even reprogramming actual brain cells to fight Parkinson’s disease. They have already managed to “decode” brain activity, and those decoding methods are growing steadily more refined and sophisticated. It is safe to assume that more reprogramming and decoding capabilities are coming.
Ienca agrees that although some of these concerns are ahead of the technology, that won’t be true for long, and it’s typically better to be proactive. At least one experiment has already shown that brain signals are likely to be hackable in the future. “We cannot afford to have a lag before security measures are implemented,” he told The Guardian. “It’s always too early to assess a technology until it’s suddenly too late.”