
Talking to your digital twin could one day be like looking in the mirror.

“When you and I die, our kids aren’t going to go to our tombstones, they’re going to fire up our digital twins and talk to them,” says John Smart, futurist and founder of the Acceleration Studies Foundation. Smart uses many names for the technology he predicts — digital twin, cyber-self, personal agent — but the concept stays the same: a computer-based version of you.


Using various strategies for gathering and organizing your data, digital twins will mirror people’s interests and values. They’ll “input user writings and archived email, realtime wearable smartphones (lifelogs), and verbal feedback to allow increasingly intelligent and productive guidance of the user’s purchases, learning, communication, feedback, and even voting activities,” Smart writes. They’ll displace much of today’s information overload from regular people to their cyber-selves.

And one day, Smart theorizes, these digital twins will hold conversations and have faces that mimic human emotion. “They will become increasingly like us and extensions of us,” Smart says.

The concept might sound far-fetched. But consider that people often turn to a deceased friend or family member’s Facebook wall to grieve. People already form relationships with each other’s online presences. As computer science advances, the connection will only improve and strengthen — even with identities that aren’t real people.

“Where we’re headed is creating this world in which you feel you have this thing out there looking after your values,” Smart says.

For digital twins to reach their full potential, however, they require two important developments: “good conversational interfaces and semantic maps,” Smart explains.

Conversational Interfaces (CI)

Ron Kaplan, a data scientist in Silicon Valley, made the case for CI in Wired last year. In his words, simply scheduling a flight could require 18 different clicks or taps on 10 different screens. “What we need to do now is be able to talk with our devices,” he wrote.

Smart couldn’t agree more. “With technology, we want things that enable us to use as much of our brains as possible at one time,” he adds.

For example, with a single, spoken sentence, you could tell your personal agent you feel sick. It could reference your calendar or emails to determine when to make a doctor’s appointment. And when you arrive, you might not even need to fill out forms. Your personal agent would have looked at your hospital records and healthcare information for you — and then later relayed the outcome of any tests taken during your visit.

While no company boasts such comprehensive abilities yet, many have started to implement similar technologies. Right now, Apple has Siri. Microsoft has Cortana. And in the summer of 2014, a program named “Eugene Goostman,” imitating a Ukrainian teen, passed the Turing Test (with some healthy skepticism).

Smart, however, places great emphasis on an earlier cognitive machine: IBM’s Watson, which the company claims “literally gets smarter.” Watson’s Jeopardy performance against champion Ken Jennings convinced many skeptics that CI was emerging and maturing.


Vocal technologies like Siri, Cortana, and Watson already rely on semantic maps, tools that represent relationships in data, especially language. And companies constantly improve them. For example, a late 2013 Google update brought pronouns to the table — and Smart’s wife, for one, quickly noticed a difference.

Walking in downtown Mountain View, his wife pulled out her phone, and as a test, asked Google, “Who is the President of the United States?” Naturally, her phone responded: “Barack Obama.”

Next, Smart’s wife inquired: “Who is his wife?”

Phone: “Michelle Obama.”

Smart’s wife: “Where was she born?”

Phone: “Chicago, Illinois.”

Not only did Smart’s wife hold a conversation with her phone, the phone also understood words like “his” and “she” — pronouns that refer to an antecedent earlier in the conversation. “Now, you don’t have to specify every little detail,” Smart explains. “Because the computer has some memory of previous exchanges and uses that as context.”
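The context memory Smart describes can be illustrated with a toy sketch. Everything here — the mini knowledge base, the keyword matching, the single-entity “focus” — is invented for illustration; real assistants use statistical coreference resolution and large knowledge graphs.

```python
# Toy illustration of conversational context memory: the assistant tracks
# the last entity mentioned so pronouns can be resolved against it.
# The knowledge base and matching rules are invented for this sketch.

KB = {
    ("United States", "president"): "Barack Obama",
    ("Barack Obama", "wife"): "Michelle Obama",
    ("Michelle Obama", "birthplace"): "Chicago, Illinois",
}

class Assistant:
    def __init__(self):
        self.focus = None  # the entity the conversation currently refers to

    def ask(self, question):
        q = question.lower()
        if "president of the united states" in q:
            answer = KB[("United States", "president")]
        elif "his wife" in q:
            answer = KB[(self.focus, "wife")]        # "his" -> stored focus
        elif "where was she born" in q:
            answer = KB[(self.focus, "birthplace")]  # "she" -> stored focus
        else:
            return "I don't know."
        self.focus = answer  # each answer becomes the new focus
        return answer

a = Assistant()
print(a.ask("Who is the President of the United States?"))  # Barack Obama
print(a.ask("Who is his wife?"))                            # Michelle Obama
print(a.ask("Where was she born?"))                         # Chicago, Illinois
```

The key design point is the `focus` variable: without it, the second and third questions are unanswerable, which is exactly the “memory of previous exchanges” Smart credits for the improvement.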

Once we create “decent maps of human emotion,” Smart adds, digital twins will even have faces to help them communicate. They’ll smile or furrow their brows to show whether they understand or not.

“But the next step is something I call a ‘valuecosm,’” Smart explains.

The ‘Valuecosm’

A valuecosm doesn’t just, for example, analyze all your emails and formulate a record of your interests and values. It allows a personal agent to interact in your stead based on this information.

“You’re reaching for a can of tuna at a grocery store in 2030,” Smart envisions. “And your bracelet gives a green arrow to move your hand a few inches to the left, from Bumble Bee to Chicken of the Sea or whatever.”

You’d previously told your personal agent to watch for foods with high mercury levels or companies that over-fish the oceans. So this wearable, imprinted with a digital version of your values, pointed you to the product that matched them.

“And then, back in your car, your digital twin directs you to the gas station that’s most in line with your environmental values,” Smart adds. A valuecosm not only uses information in a human way, it’s flexible, too. You can review your settings and change them manually.

“You’ll be having a conversation with your [personal] agent, and you say, ‘I want more of this or this plus something else,’” Smart explains. “You know, I care more about social justice, so make that area bigger.”
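The tuna scenario above amounts to scoring products against a set of adjustable value weights. The sketch below shows that idea with invented brand names, attribute numbers, and weights; a real valuecosm would draw on far richer data.

```python
# Toy sketch of value-weighted product scoring, the "valuecosm" idea of
# ranking choices against a user's stated values. All brands, attribute
# numbers, and weights are invented for illustration.

# Per-product attributes: lower mercury is better, higher sustainability better.
products = {
    "Brand A tuna": {"mercury_ppm": 0.35, "sustainability": 0.4},
    "Brand B tuna": {"mercury_ppm": 0.12, "sustainability": 0.8},
}

# The user's value weights, editable in conversation ("make that area bigger").
weights = {"low_mercury": 0.5, "ocean_health": 0.5}

def score(attrs, w):
    # Map mercury content to a 0-1 score where lower content scores higher
    # (capped at 0.5 ppm), then combine with sustainability by weight.
    mercury_score = max(0.0, 1.0 - attrs["mercury_ppm"] / 0.5)
    return w["low_mercury"] * mercury_score + w["ocean_health"] * attrs["sustainability"]

def recommend(products, w):
    return max(products, key=lambda name: score(products[name], w))

print(recommend(products, weights))  # Brand B tuna

# The user re-weights their values in conversation:
weights = {"low_mercury": 0.2, "ocean_health": 0.8}
print(recommend(products, weights))  # Brand B tuna
```

The flexibility Smart describes lives entirely in the `weights` dictionary: changing it changes the ranking without touching the product data, which is why he frames the valuecosm as something you can review and tune manually.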

To make this technology as usable and effective as possible, though, your digital twin will have to pull your information from various places, with your permission, rather than push its functions onto you.

“People who have started using Google alerts, they’ve moved themselves toward a more pull-based view of the internet,” Smart says.

In truth, the concept started as a way to improve advertising. For example, internet cookies monitor your online activity, allowing companies to match their advertisements to your interests. But instead of a company “pushing” its products or ideas onto you and trying to create demand, with pull-based marketing, you give permission for access to your information, and the advertising follows.

“Instead of a filter, it’s more like a magnet,” Smart explains. That idea, however, could lead to even less online privacy.
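The pull model behind Google Alerts can be sketched in a few lines: the user registers interests, and only matching items are drawn through. All topics and items below are invented for illustration.

```python
# Toy sketch of "pull" vs "push": rather than a feed pushing everything at
# the user, the user registers interests and only matching items come
# through, closer to a magnet than a filter. Topics and items are invented.

subscriptions = {"ocean conservation", "electric cars"}  # user-granted interests

incoming = [
    {"title": "New tuna fishery rules", "topics": {"ocean conservation"}},
    {"title": "Celebrity gossip roundup", "topics": {"entertainment"}},
    {"title": "EV battery breakthrough", "topics": {"electric cars"}},
]

def pull(items, interests):
    # Keep only items whose topic set intersects the user's interests.
    return [item["title"] for item in items if item["topics"] & interests]

print(pull(incoming, subscriptions))
# ['New tuna fishery rules', 'EV battery breakthrough']
```

The “magnet” framing fits because nothing is blocked at the source; items simply aren’t attracted unless the user has declared a matching interest.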

The Future Of Privacy

The uncertain status of online privacy already bothers the general public. People criticize companies like Google and Amazon, which only pull information from what’s already available. But with digital twins, we’ll have to grant companies full access to our online identities to get the most out of the technology.

“You know, I’d like to have control of my healthcare or financial information in my own little internet locker,” Smart admits. “But that kind of thinking is first generation. You can’t accomplish much by having control of your own data.”

Big-name companies with mature algorithms and predictive analytics are probably best positioned to host our personal agents. In Smart’s opinion, as long as people feel they have strong control over the technology, privacy will be a secondary concern.

“People who are thinking that you can control your own identity aren’t thinking about the problem right,” he says. “The future of personal control isn’t control of data. The future that we care about is control of an algorithmic interface of your identity.”

For comparison, Smart mentions domestication. Humanity didn’t engineer the brains of cats and dogs. We simply chose the ones more amenable to us and bred them. “We’ll do the same to our advanced AIs, whose brains we won’t be designing, but rather teaching, like a small child,” Smart explains.

And by Smart’s prediction, all the technologies required to make fully functional personal agents are only about five years away.

Photo credit: Daily Mail

Via Business Insider