In an episode of the British dystopian sci-fi show Black Mirror, titled “Be Right Back,” a man who dies in a car crash leaves behind enough social media and other data traces to be digitally recreated, first online and then as a creepily life-like robot. Turns out, that scenario is not so far-fetched.

“A lot of the concepts that are on the Internet now came from sci-fi, and then the infrastructure caught up and people can build it,” says Hossein Rahnama, founder of the location-based service company Flybits and a visiting scholar at the MIT Media Lab. He has taken the first step into the uncanny valley of digital surrogates with Augmented Eternity, technology that builds concierge bots based on real human beings with particular expertise and personalities.

Instead of talking to Alexa from Amazon, Cortana from Microsoft, or Siri from Apple, you’d be talking to digital avatars of the real-life Bob from accounting about an expense report, Jenny from legal about a contract, or Simone from Paris about your upcoming vacation. These digital personalities live their own artificially intelligent lives and could outlive the people on whom they are based.

“From your email interactions, from your IM interactions, from the photos that you take, from places that you visit, we are generating a lot of logic,” says Rahnama. “If you are not around, those data will enable us to feed that to a machine-learning engine and represent you with a level of probability.” Machine learning is critical, because Augmented Eternity doesn’t just ingest the data someone leaves; to create a convincing facsimile, it analyzes how they think and act. The “presentation layer,” as Rahnama calls it, could take many forms: a chatbot, a voice interface, or even a 3D avatar in virtual reality. “Before you can have that presentation layer, you need to have a successful semantic and context layer that can understand the situation of the user,” says Rahnama. That’s what he’s creating.
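That layered design can be illustrated with a toy sketch. All of the names below are hypothetical, not part of Augmented Eternity's actual software: a context layer first guesses what the user is asking about, and only then does an interchangeable presentation layer render the answer as chat text, speech, or an avatar.

```python
# Hypothetical sketch of a semantic/context layer sitting beneath
# interchangeable presentation layers. None of these names or rules
# come from Augmented Eternity; this is only an illustration.

def classify_context(utterance: str) -> str:
    """Toy context layer: guess the topic of the user's request."""
    keywords = {
        "expense": "finance",
        "contract": "legal",
        "vacation": "travel",
    }
    for word, topic in keywords.items():
        if word in utterance.lower():
            return topic
    return "general"

def present(topic: str, channel: str) -> str:
    """Toy presentation layer: same context, different surface."""
    reply = f"Routing your {topic} question to the right avatar."
    if channel == "voice":
        return f"[spoken] {reply}"
    return reply  # default channel: chatbot text

print(present(classify_context("Can you check this expense report?"), "chatbot"))
```

A real system would replace the keyword lookup with a trained language model, but the separation is the point: the understanding happens before, and independently of, how the answer is displayed.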

Augmented Eternity is built atop a trend of moving away from manually digging into apps or websites and instead asking a bot or digital concierge to do the work for you. Rahnama’s goal is to provide a bot that is more intuitive and personal, with the expertise you need at the moment.


Augmented Eternity resembles a light version of the Singularity, a concept advanced by futurists like Ray Kurzweil about what happens when computers surpass human intellect. It includes the notion of outliving our mortal bodies by uploading the contents of our minds to a computer. We’re already uploading, says Rahnama, with data from tweets, Facebook posts, Instagrams, Slack messages, Fitbit readings, and Pokémon Go wanderings. Augmented Eternity could also be a shortcut to something resembling the artificial intelligence of sci-fi—like HAL from 2001, Jarvis from Iron Man, or Samantha from Her—but based on real people rather than on minds synthesized from scratch.

“There have been a million things that are like this,” says Sandy Pentland, cofounder of the MIT Media Lab. “It’s that none of them worked very well. They required you to sit down 500 times a day and write what you were doing.” Rahnama has been developing Augmented Eternity with Pentland’s Human Dynamics research group at the Media Lab. Neither of them claims that such a digital avatar is sentient, but it might someday pass the Turing Test: responding so naturally that someone could mistake it for a human. “It sounds like the person, in the way they react to things and attitudes,” Pentland says. “Maybe that counts as a person.”

Rahnama’s first project is to provide a digital stand-in when colleagues aren’t available. “Within an organization, which is what we are going to start with, you and your [coworker] will have a trust relationship,” says Rahnama. By trust, he means that one coworker has to give permission for access to their avatar and to what their avatar knows. From Jenny’s emails and chats, for example, her online avatar knows about all the contracts Jenny has handled and the details of negotiations. The avatar could draw on this knowledge to answer questions about a new contract.

Trust is critical for Augmented Eternity, which accesses a person’s life in exquisite detail, but only for a moment and a specific topic. There’s no central database that ingests a digital life. Augmented Eternity pulls in data on the fly from other databases and creates brief, disposable computing sessions around a particular topic and conversation. “We don’t transport the data to a new location, we just link to it,” says Rahnama. “You can keep your data on Google, you can keep your data on Twitter, you can keep your data on your email server.” The person whose data is being collected holds a key that enables the sessions. They control who gets access.
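As a rough illustration of that design (the names and interfaces below are hypothetical; the real system's internals aren't public), a session might hold only references to external data sources, open only with a key the data's owner has granted, and be thrown away when the conversation ends:

```python
# Hypothetical sketch: disposable, topic-scoped sessions that link to
# data where it already lives rather than copying it into a central
# database. Names are invented for illustration only.

class DataSource:
    """Stands in for data that stays put (an email server, Twitter, ...)."""
    def __init__(self, name, records):
        self.name = name
        self._records = records

    def query(self, topic):
        # Return only records relevant to the session's topic.
        return [r for r in self._records if topic in r]

class AvatarSession:
    """A short-lived session scoped to one topic and one grant of access."""
    def __init__(self, owner_key, granted_key, sources, topic):
        # The data owner holds the key; no key, no session.
        if granted_key != owner_key:
            raise PermissionError("data owner has not granted access")
        self.sources = sources  # links to data, not copies of it
        self.topic = topic

    def ask(self, question):
        # Pull matching records on the fly from each linked source.
        return [r for s in self.sources for r in s.query(self.topic)]

# The owner decides who may open a session against their data.
email = DataSource("email", ["contract draft v2", "lunch plans"])
session = AvatarSession(owner_key="jenny-key", granted_key="jenny-key",
                        sources=[email], topic="contract")
print(session.ask("What contracts has Jenny handled?"))
del session  # the session is disposable; nothing was centrally stored
```

The design choice the sketch highlights is the same one Rahnama describes: access is momentary and topic-scoped, and revoking the key revokes everything, because no copy of the data ever left its original home.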

Nor does Augmented Eternity recreate the entire person; just the aspect needed for a particular chat. “Let’s say you want to quickly switch the conversation from a legal matter to a personal matter,” he says. “Then you need to open a new session which has the semantics of a personal discussion, because the grammar of your personal discussion is different from a … professional discussion.”


Augmented Eternity is an outgrowth of Rahnama’s previous work. In 2013, he founded the company Flybits, a platform that lets developers add context-sensitive features to mobile apps, based on conditions like location, weather, and social media feeds. A point-and-click interface allows users to pair contexts with actions. For instance, a bank sponsoring a music festival can add a location-based feature to its app that sends alerts to customers about the event if they are near the venue. Clients include TD Bank, Vodafone UK, Bosch, and Ryerson University in Toronto (where Rahnama is a professor and cofounder of its startup incubator, Ryerson DMZ). The development work that led to Flybits earned Rahnama a spot on MIT Technology Review’s 2012 list of 35 Innovators Under 35.

For Rahnama, gathering info about someone’s knowledge and personality is the next step into the realm of context-aware computing. The concept, developed in the 1990s by people like Mark Weiser of Xerox PARC, is that computers become so ubiquitous and adaptable to our needs that we don’t see them as discrete devices. “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it,” wrote Weiser in a 1991 Scientific American essay “The Computer for the 21st Century.”


Rahnama is thinking beyond personal assistant bots. He envisions a way to connect generations—maybe not with digital ghosts, but with insights from how they lived and decisions they made. “There is someone similar to me,” he says, “in terms of career path, in terms of health metrics, in terms of DNA, in terms of genomics.” Yep, he wants to get that personal. “Because that person is—I don’t know, 30, 40 years ahead of me—there is a lot I can learn about that person.”

This won’t work for people who lived before the digital age. There probably wouldn’t be a chance to get career advice from Einstein. Just reading what Einstein wrote wouldn’t provide enough context. “You don’t know what the context of the statement is,” Sandy Pentland says. “To understand his writings, there has to be this rich representation of humans to know that when he says this, he’s probably thinking about, you know, something he was doing before breakfast.”

Though Einstein was prolific, he left behind far less data than a modern online life generates. With our online presence, we’re getting closer to the level of context needed to recreate a whole person (not just a chatbot). “I don’t think we have enough,” Pentland says. “But Hossein’s stuff is a really interesting start.”

Article via: Fast Company