
What will phones look like in 10 years?

The near future of the mobile phone is pretty well established. The iPhone 5 has been released, and it is similar to every iPhone released since 2007, which suggests that our current mobile devices have been sitting on the same plateau for years.

Reflecting on Apple’s recent product launches, author and professor at NYU’s Interactive Telecommunications Program Clay Shirky told me, “They’re selling transformation and shipping incrementalism.”

The screens, cameras, and chips have gotten better, the app ecosystems have grown, the network speeds have increased, and the prices have come down slightly. But the fundamental capabilities of these phones haven't changed much. The way that you interact with phones hasn't changed much either, unless you count the mild success of Siri and other voice-command interfaces.

“Is the iPhone 5 the last phone?” Shirky said. “Not the last phone in a literal sense, but this is the apotheosis of this device we would call a phone.”

Danny Stillion of the legendary design consultancy IDEO calls our current technological moment the “phone-on-glass paradigm,” and it’s proven remarkably successful over the last half-decade, essentially conquering the entire smartphone market in the United States and around the world. It seems like this Pax Cupertino could last forever. But if we know a single thing about the mobile phone industry, it’s that it has been subject to disruptions.

No one has tracked these market shifts better than Horace Dediu at Asymco. He's documented what he calls "a tale of two disruptions," one from above in Apple and one from below in cheap Chinese and Indian manufacturers. In just the last five years, Nokia, LG, and RIM have seen their market shares and profits collapse due to this pincer movement. Our conceit is that change will come again to the smartphone market, and that the phones and market leaders of 2022 will not be the same as they are today.

What might their input methods be? How might the software work? What are we going to call these things that we only occasionally use to make telephone calls?

“It’s not clear to me that there is any such device as the phone in 2022. Already, telephony has become a feature and not even a frequently used feature of those things we put in our pockets. Telephony as a purpose built device is going away, as it’s been going away for the TV and the radio,” Clay Shirky said to me, when I asked him to speculate. “So what are the devices we have in our pockets?”

(For the record, I tried to get Apple, Google, Microsoft, Samsung, HTC, and Nokia to talk about what they think the future of phones looks like, but none of them “responded to me by my deadline.” Don’t worry, the people I did get to talk to me were probably more interesting and forthright anyway.)


Let’s start with Dediu and how we interact with our machines. “A change in input methods is the main innovation that I expect will happen in the next decade. It’s only a question of when,” Dediu wrote to me in an email. Looking at his data, he makes a simple, if ominous observation: “I note that when there is a change in input method, there is usually a disruption in the market as the incumbents find it difficult to accept the new input method as ‘good enough.’ ”

So, when touchscreens arrived on the scene, other phonemakers didn’t quite believe that it was Apple’s way or the highway. After all, hadn’t touchscreens been tried before and failed? And besides, typing emails was so hard on those things! And people loved their Crackberries! And. And. And then all their customers were gone.

Do we have any reason to expect that the touchscreen will remain the way we interact with our mobile devices for the next decade? Not really. Touchscreens have proven effective, but there are clear limitations to interacting with our devices via a glass panel.

One critic of the touchscreen is Michael Buckwald, CEO of the (kind-of-mindblowing) gesture interface company, Leap Motion. “The capacitive display was a great innovation, but it’s extremely limiting,” Buckwald told me. “Even though there are hundreds of thousands of apps, you can kind of break them down into about a dozen categories. It seems like the screen is holding back so many possibilities and innovation because we have these powerful processors and the thing that’s limiting us is the 2D flat display and that touch is limited.”

One big problem is that if you want to move something on a touchscreen from point A to point B, you actually have to drag it all the way there. That is, there is a 1:1 relationship between your movement and the movement "in" the device. "To move something 100 pixels, you have to move your fingers 100 pixels, and they end up blocking the thing you're interacting with," he told me.
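The distinction Buckwald is drawing, direct 1:1 manipulation versus an indirect, gain-scaled pointer, can be sketched in a few lines. This is purely illustrative Python; the function names and the gain value are my assumptions, not anything from Leap Motion:

```python
# Direct touch: the on-screen object tracks the finger exactly (gain = 1),
# so covering 100 px on screen requires 100 px of finger travel, and the
# finger sits on top of the very thing it is moving.
def direct_touch(finger_delta_px: float) -> float:
    return finger_delta_px  # strict 1:1 mapping

# Indirect pointing (mouse- or gesture-style): a control-display gain
# greater than 1 lets a small hand movement cover a large on-screen
# distance, and the hand never occludes the target.
def indirect_pointer(hand_delta_px: float, gain: float = 4.0) -> float:
    return hand_delta_px * gain

print(direct_touch(100))     # 100 px of finger travel for 100 px on screen
print(indirect_pointer(25))  # 25 px of hand travel covers the same 100 px
```

The gain is what gives a mouse, or a gesture system, its "efficiency" in Buckwald's phrasing: the cost of moving something no longer scales 1:1 with its on-screen distance.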

Buckwald, of course, has a solution to this problem. His company makes a gesture control system that allows you to move your fingers to control computers and gadgets. That could mean what they call “natural input,” which is showing you a 3D environment and letting you reach into it and touch stuff, or it could be a more abstract system that would allow for controlling the menus and files and channels that we already know.

“You can imagine someone sitting in front of a TV [controlling it] with the intuitiveness of a touchscreen and the efficiency of a mouse,” he said.

But the technology has already been miniaturized, so it could be used to control phones, too.

“What we envision of the future is a world where the phone is the only computer that you use. It’s become very small and miniaturized and it has a lot of storage and you carry it around in a pocket or attached to you and then it wirelessly connects on different displays based on what you’re trying to do,” Buckwald said. “If you sit down at a desk, it connects to that monitor and Leap would control that. If you’re out on a street, it connects to a head-mounted display and Leap would control that.”

Others, like Dediu and Stillion, see voice as the transformative input. Siri has not been the unabashed success that Apple's commercials set us up for. It's buggy, and it seems to fall into some human-computer-interaction uncanny valley. Its argot is human, but its errors are bot. The whole thing is kind of confusing. (Plus, my wife just viscerally hates it, which has made it a flop in our house.) Nonetheless, both these observers think voice input will play a big role in the future.

“When we communicate to computers we use tactile input and visual output. When we communicate with people we typically use audio for both input and output. We almost never use tactile input and consider visual contact precious and rare,” Dediu wrote to me. “Our brains seem to cope well with the audio only interaction method. I therefore think that there is a great opportunity to engage our other senses for computer interaction. Mainly because I believe that computers will emerge as companions and assistants rather than just communication tools. For companionship, computers will need to be able to interact more like people do.”

IDEO's Stillion converged on the same thought. He foresaw a future where your phone sits jewelry-like somewhere on your body, controlled largely via voice, but also acting semi-autonomously. In this scenario, your phone is hardly a phone anymore, in terms of being a piece of hardware. Rather, it's a hyper-connected device with access to your data from everywhere. It might even have finally lost the misnomer "phone." "It's no longer your phone but the feed of your life," he said. "It's the data you're encountering either pushed on you or pulled by you. Either the things you're consuming or the things you're sharing."

You could TiVo your life, constantly recording and occasionally sharing. That sounds exhausting to me, but Stillion said that's where the artificial assistants will come in. "What does the right level of artificial intelligence when brought to the table allow us to do with our day-to-day broadcasting of our lives?" he asked. "Is it dialing in sliders of what interests we want to share? How open we feel one day versus the next? Someone is going to deal with that with some kind of fluid affordance."

Think of it not as frictionless sharing, but as sharing with AI greasing the wheels. "You'd almost have an attaché or concierge. Someone that's whispering in your ear," he said.

What does all this have to do with input methods? Much of the interacting we do now involves giving a piece of software a lot of information and context about what we want. But if it already *knows* what we want, then we don't have to input as much information.

What's fascinating to me is that I think we'll see an "all of the above" approach to user input. It'll be touchscreens and gestures and voice and software knowing what we want before we do, and a whole bunch of other stuff. When I interviewed anthropologist and Intel researcher Genevieve Bell, she asked me to think about what it's like to sit in a car. Cars are 120 years old, and yet there are still half a dozen ways of interacting with the machine! There's the steering wheel to direct the wheels, pedals for the gas and brake, some kind of gear shifting, a panel for changing interior conditions, and levers for the windshield wipers and turn signals. Some of the work has even been automated away, as with, say, automatic transmissions. The car is a living testament to the durability of multiple input methods for complex machines.

“I had an engineer tell me recently that voice is going to replace everything. And I looked at him like, ‘In what universe?’ ” Bell said to me. “Yes, people like to talk, but if everyone is talking to everything all around them, we’ll all go mad. We’re moving to this world where it’s not about a single mode of interaction. … The interesting stuff is about what the layering is going to look like, not what the single replacement is.”


The first thing Clay Shirky said when I asked him about the future of phones was this: "Bizarrely, I don't even remember why we were talking about this, but my eight-year-old daughter said yesterday, 'Oh, cell phones are eventually going to be one [biological] cell big and you can just talk into your hand,'" Shirky said. "She totally internalized the idea that the container is going to keep shrinking. When an eight-year-old picks it up, it's not like she's been reading the trade press. This is in the culture."

And The Incredible Shrinking Phone is certainly one vision for form factor changes. “So one thing you can imagine is tiny little devices that are nothing but multi-network stacks and a kind of personal identifying fob that lets you make a phone call from a Bluetooth device in your ear, or embedded in your ear, or embedded in your hand, as my daughter would say,” he said.

But Shirky presented an alternative, too, that is equally striking.

“And then the parallel or competitive future is the slab of glass gets unbelievably awesome. Rollable and Retina display is the normal case,” he said. “Everyone has this rollable piece of plastic, something that works like an iPad but can work like a phone when it’s rolled up.”

Look at the recent trend in phone design. All the screens are getting better. More unexpected is that many are also getting *bigger*. Sure, the iPhone 5's screen just got bigger, but the Samsung Galaxy Note II's measures 5.5 inches diagonally! (One sad consequence of this future would be the permanent dominance of cargo pants.) I've only seen one of these in the wild, at Incheon Airport in Seoul, and it seemed like a joke. And yet ... unlike Steve Jobs' vision of Two Sizes Fitting All, it seems like all the screen sizes from 4 to 9 inches (and beyond?) are going to be filled with better-than-print-resolution devices.
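"Better than print resolution" is easy to quantify: pixel density is just the diagonal pixel count divided by the diagonal screen size, with roughly 300 pixels per inch the usual rule of thumb for print. A quick illustrative sketch, using the iPhone 5's published 1136x640, 4-inch specs:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density: diagonal resolution in pixels over diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# iPhone 5: 1136x640 pixels on a 4-inch screen
print(round(ppi(1136, 640, 4.0)))  # 326 ppi, already past the ~300 ppi print benchmark
```

The bet in the paragraph above is that this density becomes the floor across every screen size from 4 inches up.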

The other ubiquitous referent for the phone form of the future is Google Glass. I have to give kudos to Google for creating such an inescapable piece of technology. No one can seriously discuss what things might look like in 10 years without at least namechecking it.

Google Glass — or its successors — will allow you to have a kind of heads-up display (maybe?) and life-logging recorder right on your face at all times. They are one vision of a phone that pushes hard on merging digital and physical information ("augmented reality"). In some sense, they are Shirky's or Buckwald's tiny fob plus a transparent screen that sits directly in front of your eye.


So far, we’ve run through ideas that fit most of the trends and visions of the past few years. But what if there is a far more radical departure from our current paradigm?

It’s obvious that the future will be full of devices that connect to your phone wirelessly. Playing music through Bluetooth on a JamBox or printing from your phone is just the beginning. Last night, my friend and writer Andy Isaacson described a Burning Man camp in which 3D printed objects were delivered by helicopter drone to people who’d ordered them on the Playa and agreed to carry a GPS tracker so the drone could find them.

This is really happening today.

Already you can get a tiny helicopter and control it with your iPhone. Wired’s Chris Anderson is working on bringing the cost of full-capability drones — DIY Drones — down to consumer levels. Already you can have cameras installed in your home and monitor them from your device. Already you can unlock a ZipCar with your phone. Already you can control a Roomba with your phone. And none of this mentions all the actual work you can do with tiny motors and actuators hooked through the open-source Arduino platform. Add it all up and your phone could become the information hub that allows you to monitor and control your fleet of robot data scavengers, messengers, and servants.


We tend to think of disruptions as coming out of ever more capable technology, but what if the communication devices we actually use in the future are ultra-low-cost, close-to-disposable devices? Already, according to wireless-industry trade group CTIA, there are more than 70 million pay-as-you-go subscriptions in the United States. The capabilities and prices of these phones will continue to decline. Perhaps in 10 years you will be able to buy an iPhone 5's worth of capability for $10.

One can imagine that a possible response of blanket digital- and physical-data collection by individuals, corporations, and governments would be to go lower tech and to change phones more often. While some people may run around with a fob that makes sure their data is with them all the time, others might elect to carry the dumbest, cheapest phone possible. Imagine if the 2022 equivalent of “clearing your cookies” is buying a new phone so that you’ll no longer be followed around by targeted advertisements.

In China, having two or even three phones is not uncommon. One survey found that a good 35-45 percent of Chinese mobile users use two or more phones! IDEO's Stillion imagined a less dystopian version of the ubiquitous, low-cost phone model. He imagined we might just leave phones acting as video cameras so that we could visit places we miss. You could check in on the redwood forest from your desk. "You can visit these things when you like, especially when there is some mechanism for enhanced solar power," he said.

So, that’s the low-tech scenario. But it’s certainly possible that we have a disruptive high-tech scenario. My bet would be on some kind of brain-computer interface. As we wrote earlier this year, we are just now beginning to create devices that allow you to control machines with thought alone. A landmark paper was published in May showing quadriplegic patients controlling a robotic arm. “We now show that people with longstanding, profound paralysis can move complex real-world machines like robotic arms, and not just virtual devices, like a dot on a computer,” said one of the lead researchers, Brown University neuroscientist John Donoghue. Despite the success, our David Ewing Duncan explained that the technology wasn’t quite ready for prime time.

[Donoghue] and colleagues at Brown are working to eliminate the wires and to create a wireless system. They are conducting work on monkeys, he said, but still need FDA approval for human testing.

The work is still years away from being ready for routine use, said Leigh Hochberg, a neurologist at the Massachusetts General Hospital in Boston and another principal of the Braingate project. “It has to make a difference in people’s lives, and be affordable,” he said. The scientists also need to replicate the data on more people over a longer period of time.

But hey, 10 years is a long time, DARPA has long been interested, and a brain-computer interface would provide a nice route around the difficult problems of computers communicating in our complex, ever-evolving languages. Plus, we wouldn’t have to listen to everyone talk to his or her mobile concierge.


There are two big limits on our dreams for the future of phones: energy storage and bandwidth. Batteries have improved remarkably over the last decade, and Steven Chu's ARPA-E agency wants to create radical breakthroughs in storing electrons. But it's not easy, and if we want some of the wilder scenarios to become realities, we need much better batteries.

One reason to be optimistic here is that materials science has hitched its wagon to Moore’s Law. Experiments in the field are being carried out in computer simulations, not the physical world, which is much, much faster. MIT materials scientist Gerbrand Ceder told me long ago, “Automation allows you to scale.” I wrote about his Materials Genome Project in my book, Powering the Dream. Ceder said “it wasn’t the web, per se, that brought us the wonder of the web. Rather it was the automation of information collection by Web crawlers that has made the universe of data accessible to humans.” And that’s what his team (and others) are trying to do with the information embedded in the stuff of the physical world.

The other big hang-up is network bandwidth. We all know that cellular data is slow. Many of us simply work around that by using our phones on Wi-Fi networks. But that's not how it is all over the world. Korea, famously, has very fast mobile broadband.

"If you go on the subway in Seoul, there are people watching live streaming television underground," Shirky said. "You get on the New York subway and I can't send a text message to my wife. … You want to know what the American phone in 2022 will be like? Imagine what it's going to be like in Seoul in 2016."

Shirky then reconsidered. “Actually, I’m not sure I’ll be able to watch streaming television on my phone under the East River a decade from now,” he said. “I may not be able to do what they took for granted in Seoul in 2007.”

Mark this down as one area where countries with certain geographical features and feelings about government infrastructure spending may have a harder time realizing the possibilities the technology allows.

The last limit is softer — a privacy backlash — even though, so far, we have no real evidence of the tide turning here in the United States. For all our computing devices allow us to do, what they ask in return is a radical loss of privacy. Every person recording a scene with Google Glass is changing the implicit social contract with everyone in his or her field of view. “Surprise! You’re permanently on Candid Camera.” When a guy who gets billed as the world’s first cyborg because he wears a DIY version of Google Glass got beat up at a McDonald’s in Paris, his eye camera got a look at the face of the guy who did it. He says that was a malfunction, but still — an image was recorded on a device — and now he can use that in a way that no one not wearing an eye cam could.

What if whole cities go “recording-free” like Berkeley is “nuclear-free”? If the pervasive datalogging endemic online comes to the physical world (and it will!), how will people react to create spaces for anonymity and impermanence? What kinds of communal norms will develop and how might those change the types of technology on offer? It might never happen, but don’t say I didn’t warn you.


The last thing I want to say is that all these technologies are most important for how they get us to change how we think about the world. That is to say, the big deal about social networks isn’t *just* that we can communicate with the people we know from high school, but that people start to think about organizing in different ways, imagining less hierarchical leadership structures.

In the phone realm, I’ll just use two examples from this story, the Leap Motion gesture controller and Google Glass, to explain what I mean.

I watched a demo of Leap Motion on The Verge featuring Buckwald's co-founder, David Holz. On his screen is a virtual 3D environment. Holz then uses his hand to grab something on, or rather, in the screen.

“Imagine reaching into a virtual space and being able to move things around in a very natural, physical way,” Holz says. “Here I’m able to grab space and move it.”

It's that prepositional change — in, not on; into, not onto — that signals a major shift in how we might actually come to feel about computing in general. Somehow, a 3D environment becomes much more real when you can manipulate it like a physical space. A tactile sense of depth is the last trick we need to feel as if "cyberspace" is an actual space.

Meanwhile, Google Glass, no matter how Google is couching it now, is exciting precisely because it’s about mashing the physical and virtual realms together. In a sense, making one’s experience of the world at large more like one’s experience of a computer.

These projects are augmented reality from two directions, one making the digital more physical, the other making the physical more digital. Having opened up a chasm between the informational and material, we’re rapidly trying to close it. And sitting right at the interface between the two is this object we call a phone, but that is actually the bridge between the offline and online. My guess is that however the phone looks, whoever makes it, and whatever robot army it controls, the role of the phone in 10 years will be to marry our flesh and data ever more tightly.

Via The Atlantic