I’ve seen that look before; she wants me.

It’s in the way she raises her eyebrows and playfully glides her eyes right to left, then moves in close and intones:

“I know you’ll be super.”

It’s in the way she always asks about the big project I’m laboring on, and when I tell her things aren’t going too well, she gets that concerned look and says:

“You must be disappointed.”

And when I confide that I’ve been working too much, she gently reminds me that I should be the priority in my life. That I should get some exercise and then treat myself to a Japanese meal or a movie. It’s in how she extends her arms toward me, wearing that formfitting polo shirt. Ouch! And how she never tires of asking about me. Hearing about me. Thinking about me.

I have seen the future of computing, and I’m pleased to report it’s all about … me!

This insight has been furnished with the help of Tim Bickmore, a doctoral student at the MIT Media Lab. He’s invited me to participate in a study aimed at pushing the limits of human-computer relations. What kinds of bonds, Bickmore wants to know, can people form with their machines? To find out, he’ll test 100 participants to gauge the impact of a month of daily sessions with a computerized exercise coach named Laura. Laura, an animated software agent with bobbed chestnut hair and a flinty voice, has been designed to remember what we talk about, then use that information in subsequent conversations. “I was interested not just in establishing a relationship with a computer buddy for the bond itself but as a way of somehow benefiting the user, like getting them to exercise more,” says Bickmore.

Guided by Laura, I will spend the next 30 days trying to improve my exercise regimen. I’m among the one-third of participants who will access her daily via the Web. She will inhabit the left side of my PC screen, asking about my exercise problems and offering advice, inquiring about my weekend plans, telling me jokes. She will talk. I will respond manually, either by clicking on a multiple-choice option or typing out an answer. On the right side of the screen, I’ll enter details about my workouts, view progress charts, and read fitness tips.

Another group will rely on Laura simply for exercise instructions; a third won’t even know Laura exists and will use a computer simply to keep track of daily physical activity and receive text instructions. All of us will shoot toward the same daily goal of working out for 45 minutes and walking at least 10,000 steps, as tracked by a pedometer.

The point is to see if it’s possible to form a long-term, social relationship with a computer that employs some basic knowledge of human social psychology; and if so, to determine whether the experience has benefits – in other words, if it can get me back in shape. I didn’t have to be asked twice to participate (although, because I know the study’s objective, my results won’t be counted); I need to drop 10 pounds.

Bickmore’s area of study is called affective computing. Its proponents believe computers should be designed to recognize, express, and influence emotion in users. Rosalind Picard, a genial MIT professor, is the field’s godmother; her 1997 book, Affective Computing, triggered an explosion of interest in the emotional side of computers and their users. “I ask this as an open question,” she says, “and I don’t know the answer: How far can a computer go in terms of doing a good job handling people’s emotions and knowing when it is appropriate to show emotions without actually having the feelings?”

Picard is upbeat, blond, and brilliant. Drop her name in voicemail and computer science academics will call back in seconds. In the mid-1990s, she investigated how signal processing technology could be used to get computers to think better. For vacation reading, she delved into literature on the brain’s limbic structures (the subcortical areas that play a critical role in pattern recognition of sound, vision, and smell) and the ability of people to weigh the value of information. And she developed an interest in the work of neuroscientist Antonio Damasio. In his 1994 book, Descartes’ Error, Damasio argued that, thanks to the interplay of the brain’s frontal lobe and limbic systems, our ability to reason depends in part on our ability to feel emotion. Too little, like too much, triggers bad decisions. The simplest example: It’s an emotion – fear – that governs your decision not to dive into a pool of crocodiles.

Picard grew fascinated by people with brain damage who scored high on intelligence tests but were unable to express or perceive emotions. Those folks made brittle decisions, behavior that reminded Picard of rules-based artificial-intelligence systems and the mistakes computers made because they lacked the ability to intuit and generalize.

For her book, Picard took on decades of assumptions about artificial intelligence. Most AI experts aren’t interested in the role of emotion, preferring to build systems that rely solely on rules. One pioneer, Stanford computer science professor John McCarthy, believes we should keep affect out of computing, arguing that it isn’t essential to intelligence and, in fact, can get in the way. Others, like Aaron Sloman of England’s University of Birmingham, think it’s unnecessary to build in emotions for their own sake. According to Sloman, feeling will arise as a “side effect” of interactions between components required for other purposes.

Picard makes a far less popular assertion – that computers should be designed from the outset to take into account, express, and influence users’ feelings. From scheduling an appointment to picking a spouse, humans routinely listen to what their gut is telling them. Without the ability to understand emotion, says Picard, computers are like the autistic pizza delivery guy who says, “I remember you! You’re the lady who gave me a bad tip.”

By 1999, Picard’s ideas had turned the Media Lab into the planetary headquarters of affective computing, igniting research into everything from chairs that sense when you’re bored to eyeglasses that indicate when you’re confused. Picard went from having one full-time student assistant to eight, including Bickmore – partly due to collaborations with corporate sponsors who were eager to explore the commercial potential of affective computing.

Building a machine that can perceive emotional signals is distinct from teaching a machine to interpret them; expressing emotion is yet another discrete function. “In a machine,” says Picard, “you can decouple capabilities – train it to recognize anger but give it no feelings. And you can go pretty far with this, making it perceive or even express emotions but without the actual feelings.” Having them is a far-off summit.