Virtual reality, robots, chatbots and holograms could allow us to exist perpetually. Whether we should choose that option is a different story.
In 2016, Jang Ji-sung’s young daughter Nayeon passed away from a blood-related disease. But in February, the South Korean mother was reunited with her daughter in virtual reality. Experts constructed a version of her child using motion capture technology for a documentary. Wearing a VR headset and haptic gloves, Jang was able to walk, talk and play with this digital version of her daughter.
“Maybe it’s a real paradise,” Jang said of the moment the two met in VR. “I met Nayeon, who called me with a smile, for a very short time, but it’s a very happy time. I think I’ve had the dream I’ve always wanted.”
Once largely the concern of science fiction, immortality now interests a growing number of people, whether that means keeping your body or mind alive forever (as explored in the new Amazon Prime comedy Upload) or creating some kind of living memorial, like an AI-based robot or chatbot version of yourself or of a loved one. The question is: Should we do that? And if we do, what should it look like?
In Korea, a mother was reunited with a virtual reality version of her young daughter who had passed away years before, as part of a documentary project.
Modern interest in immortality dates to the 1960s, when the idea of cryonics emerged: freezing and storing a human corpse or head in the hope of resurrecting that person in the distant future. (While some people have chosen to freeze their bodies after death, none have yet been revived.)
“There was a shift in death science at that time, and the idea that somehow or another death is something humans can defeat,” said John Troyer, director of the Centre for Death and Society at the University of Bath and author of Technologies of the Human Corpse.
However, no peer-reviewed research suggests it’s worth pouring millions of dollars into trying to upload our brains, or finding ways to keep our bodies alive, Troyer said. At least not yet. A 2016 study published in the journal PLOS ONE did find that exposing a preserved brain to chemical and electrical probes could make the brain function again, to some degree.
“It’s all a gamble about what’s possible in the future,” Troyer said. “I’m just not convinced it’s possible in the way [technology companies] are describing, or desirable.”
The Black Mirror effect
There’s a big difference between people who actively try to upload their brains so they can live on forever, and people who have died and whose relatives, or the public, try to resurrect them in some way through technology.
In 2015, Eugenia Kuyda, co-founder and CEO of software company Replika, lost her best friend Roman after he was hit by a car in Moscow. As part of the grieving process, she turned to tech. Kuyda trained a chatbot on thousands of text messages the two had shared over the years — creating a digital version of Roman that could still “talk” to family and friends.
The first time she messaged the bot, Kuyda said she was surprised at how close it came to feeling like she was talking to her friend again. “It was very emotional,” she said. “I wasn’t expecting to feel like that, because I worked on that chatbot, I knew how it was built.”
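Kuyda hasn’t published how the Roman bot was built, but the general idea of a bot grounded entirely in someone’s past messages can be illustrated with a minimal retrieval-based sketch. Everything here is invented for illustration, including the `MESSAGE_PAIRS` corpus: the bot answers a new message by finding the most similar prompt it has seen and echoing the stored reply.

```python
from collections import Counter
import math

# Hypothetical corpus of (incoming message, stored reply) pairs,
# standing in for years of real text-message history.
MESSAGE_PAIRS = [
    ("how are you doing", "busy as always, but can't complain"),
    ("want to grab dinner later", "sure, the usual place at eight?"),
    ("did you see that film", "loved it, we should talk about the ending"),
]

def _vector(text):
    """Bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def _cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def reply(message):
    """Return the stored reply whose prompt best matches the input."""
    scored = [(_cosine(_vector(message), _vector(prompt)), answer)
              for prompt, answer in MESSAGE_PAIRS]
    best_score, best_answer = max(scored)
    return best_answer if best_score > 0 else "..."

print(reply("are you doing ok"))  # echoes a past reply, never a new thought
```

A bot like this can only ever surface things the person already said, which is exactly the limitation Kuyda describes below: it captures a shadow of someone, not a mind.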
Eugenia Kuyda created a chatbot based on text messages from her friend Roman after he passed away in a car accident.
If this sounds like an episode of Black Mirror, it’s because it was. The 2013 episode Be Right Back centers on a young woman whose boyfriend is killed in a car accident. In mourning, she signs up for a service that allows her to communicate with an AI version of him based on his past online communications and social media profiles — ultimately turning it into an android version of her boyfriend. But he’s never exactly the same.
However, Kuyda says her Roman chatbot was a deeply personal project and tribute — not a service for others. Anyone trying to do this on a mass scale would run into a number of barriers, she added. You’d have to decide what information would be considered public or private and who the chatbot would be talking to. The way you talk to your parents is different from the way you’d talk to your friends, or to a colleague. There wouldn’t be a way to differentiate, she said.
The digital version of your friend could potentially copy the way they speak, but it would be based on things they had said in the past — it wouldn’t form new opinions or start new conversations. Also, people go through different periods in life and their thinking evolves, so it would be difficult to determine which phase the chatbot should capture.
“We leave an insane amount of data, but most of that is not personal, private or speaks about us in terms of what kind of person we are,” Kuyda said. “You can merely build the shadow of a person.”
The question remains: Where can we get the data to digitize people, in full? Kuyda asks. “We can deepfake a person and create some nascent technology that works — like a 3D avatar — and model a video of the person,” she added. “But what about the mind? There’s nothing that can capture our minds right now.”
Perhaps the largest barrier to creating a software copy of a person after they die is data. Pictures, texts and social media posts don’t typically exist online forever. That’s partially because the internet continues to evolve, and partially because most content posted online belongs to the platform it was posted on. If the company shuts down, people can no longer access that material.
“It’s interesting and of the moment, but it’s a great deal more ephemeral than we imagined,” Troyer said. “A lot of the digital world disappears.”
Memorialization technology doesn’t typically stand the test of time, Troyer said. Think video tributes or social media memorial pages. It’s no use having something saved to some cloud if no one can access it in the future, he added. Take the story of the computer Tim Berners-Lee used to create HTML and the web: the machine is at CERN, but no one knows the password. “I see that as sort of an allegory for our time,” he said.
Preserving the brain
One of the more sci-fi concepts in the area of digitizing death came from Nectome, a Y Combinator startup that preserves the brain for potential memory extraction in some form through a high-tech embalming process. The catch? The brain has to be fresh — so those who wanted to preserve their mind would have to be euthanized.
Nectome planned to test it with terminally ill volunteers in California, which permits doctor-assisted suicide for those patients. It collected refundable $10,000 payments for people to join a waitlist for the procedure, should it someday become more widely available (clinical trials would be years away). As of March 2018, 25 people had done so, according to the MIT Technology Review. (Nectome did not respond to requests for comment for this story.)
The startup raised $1 million in funding along with a large federal grant and was collaborating with an MIT neuroscientist. But the MIT Technology Review story garnered some negative attention from ethicists and neuroscientists, many of whom said the ability to recapture memories from brain tissue and re-create a consciousness inside a computer is at best decades away and probably not possible at all. MIT terminated its contract with Nectome in 2018.
“Neuroscience has not sufficiently advanced to the point where we know whether any brain preservation method is powerful enough to preserve all the different kinds of biomolecules related to memory and the mind,” according to a statement from MIT. “It is also not known whether it is possible to recreate a person’s consciousness.”
It’s currently impossible to upload a version of our brain to the cloud — but some researchers are trying.
Meanwhile, an app in the works called Augmented Eternity aims to help people live on in digital form, for the sake of passing on knowledge to future generations. Hossein Rahnama, founder and CEO of context-aware computing services company FlyBits and visiting professor at MIT Media Lab, seeks to build software agents that can act as digital heirs, to complement succession planning and pass on wisdom to those who ask for it.
“Millennials are creating gigabytes of data on a daily basis and we have reached a level of maturity where we can actually create a digital version of ourselves,” Rahnama said.
Augmented Eternity takes your digital footprints — emails, photos, social media activity — and feeds them into a machine learning engine. It analyzes how people think and act, to give you a digital being resembling an actual person, in terms of how they react to things and their attitudes, Rahnama said. You could potentially interact with this digital being as a chatbot, a Siri-like assistant, a digitally-edited video, or even a humanoid robot.
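Rahnama hasn’t detailed how Augmented Eternity’s engine works, but the idea of mining digital footprints for a person’s attitudes can be sketched very simply. The sketch below is purely illustrative, with an invented `POSTS` footprint and toy word lists: it aggregates the tone of past posts into a per-topic profile, then guesses how the person would react to a new topic.

```python
from collections import defaultdict

# Hypothetical digital footprint: past posts by one person.
POSTS = [
    "I love hiking, best way to clear my head",
    "another delayed train, commuting is miserable",
    "great hiking trip this weekend",
    "traffic and commuting ruin my mornings",
]

# Toy sentiment lexicons; a real system would use a trained model.
POSITIVE = {"love", "best", "great"}
NEGATIVE = {"delayed", "miserable", "ruin"}

def build_profile(posts):
    """Aggregate a per-word attitude score from past posts."""
    profile = defaultdict(int)
    for post in posts:
        words = post.lower().replace(",", "").split()
        tone = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        for w in words:
            profile[w] += tone  # each word inherits the post's overall tone
    return profile

def predicted_attitude(profile, topic):
    """Guess the person's reaction to a topic from their footprint."""
    score = profile.get(topic.lower(), 0)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "unknown"

profile = build_profile(POSTS)
print(predicted_attitude(profile, "hiking"))     # positive
print(predicted_attitude(profile, "commuting"))  # negative
```

Even this toy version shows the shape of the idea: the “digital being” is a statistical summary of how someone has reacted in the past, queried as if it were the person.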
The project’s purpose is to learn from humans’ daily lives — not for advertising, but to advance the world’s collective intelligence, Rahnama said.
“I also like the idea of connecting digital generations,” he added. “For example, someone who is similar to me in terms of their career path, health, DNA, genomics. They may be 30 or 40 years ahead of me, but there is a lot I could learn about that person.”
The team is currently building a prototype. “Instead of talking to a machine like Siri and asking it a question, you can basically activate the digital construct of your peers or people that you trust in your network and ask them a question,” Rahnama said.
A robot proxy
In the Intelligent Robotics Laboratory at Osaka University in Japan, director Hiroshi Ishiguro has built more than 30 lifelike androids — including a robotic version of himself. He’s pioneered a research field on human-robot interactions, studying the importance of things like subtle eye movements and facial expressions for replicating humans.
“My basic purpose is to understand what a human is by creating a very human-like robot,” Ishiguro said. “We can improve the algorithm to be more human-like if we can find some of the important features of a human.”
Ishiguro has said that if he died, his robot could go on lecturing students in his place. However, it would never really “be” him, he said, or be able to come up with new ideas.
“We cannot transmit our consciousness to robots,” Ishiguro said. “We may share the memories. The robot may say ‘I’m Hiroshi Ishiguro,’ but still the consciousness is independent.”
Professor Hiroshi Ishiguro (right) poses with the robotic version of himself.
However, this line is only going to get blurrier.
“I think in the near future we’re going to have a brain-machine interface,” Ishiguro said. This will make the boundary between a human and a computer very ambiguous, in the sense that we could share part of a memory with the computer.
“Then, I think it’s quite difficult to say where is our consciousness — is it on the computer, or in our brain?” Ishiguro said. “Maybe both.”
Despite what you may think, this won’t look anything like a science fiction movie, Ishiguro said. In those familiar examples, “they download the memory or some other information in your brain onto the computer. We cannot do that,” he said. “We need to have different ways for making a copy of our brains, but we don’t know yet how we can do that.”
Humans evolved thanks to a biological principle: survival of the fittest. But today we have the technology to improve our genes ourselves and to develop human-like robots, Ishiguro said.
“We don’t need to prove the biological principle to survive in this world,” Ishiguro said. “We can design the future by ourselves. So we need to carefully discuss what is a human, what is a human right and how we can design ourselves. I cannot give you the answers. But that is our duty to think about the future.”
“That is the most important question always — we’re looking for what a human is,” Ishiguro said. “That is to me the primary goal of science and engineering.”
This story is part of CNET’s The Future of Funerals series. Stay tuned for more next week.