Video: http://www.youtube.com/watch?v=07XPEqkHJ6U

On the TV quiz show Jeopardy!, IBM’s Watson defeated two grand champions. Watson is the world’s smartest computer, and it was matched up against two really smart humans. The quiz-show win captured people’s attention, but these days, as we identify uses for Watson throughout society, it’s becoming clear that these technologies will be used primarily to augment human intelligence, not to compete with people or replace us.

It’s not human versus machine, but human plus machine taking on challenges together and achieving more than either could do on its own. Nowhere is this powerful new one-two punch clearer than in the world of medicine and healthcare. Cognitive machines have the potential to help physicians diagnose diseases and assess the best treatments for individual patients. But, to make the most of this opportunity, machines will have to be designed and trained to interact with doctors in ways that are most natural to them.

Today, at the Cleveland Clinic Medical Innovation Summit, Eric Brown, the leader of the Watson team at IBM Research, is demonstrating a technology project we’re co-developing with physicians at Cleveland Clinic aimed at helping medical students learn to think like experienced physicians. Called WatsonPaths, the project is the most advanced example to date of a computer and humans thinking together.

For me, Eric’s demonstration marks the fulfillment of more than three years of hard work. I joined IBM Research from a digital marketing agency a few months before the Jeopardy! contest was aired. I was hired to help develop real-world applications for Watson. Now, I’m the Watson team’s manager of natural language engineering.

It was clear when I joined that the Watson technology would have to be adapted to be useful in healthcare, banking, retailing, education, and other spheres of business and life. Watson was designed to form precise answers to precise questions on Jeopardy!, but that’s not the way the world works. To be useful in real life, the system must be able to understand complex, real-world scenarios so it can help people deal with them. So we had to train Watson to use its question-answering capabilities like a pick, chipping away at a complex scenario to break it down into comprehensible pieces. The system had to be able to discover salient facts, form hypotheses, test them, and arrive at conclusions. To achieve this, we developed a technology we call the Watson inference chaining system, or WICS.
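To make the idea concrete, here is a minimal sketch of what inference chaining can look like in code. It is purely illustrative: the Assertion class, the chain_inferences loop, and the toy question-answering backend are my own assumptions for explanation, not the actual WICS implementation.

```python
# Illustrative sketch of inference chaining; names and scoring are assumptions,
# not IBM's WICS code.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Assertion:
    statement: str                  # a fact or hypothesis in plain language
    confidence: float               # how strongly the evidence supports it (0..1)
    supports: List["Assertion"] = field(default_factory=list)  # facts it was inferred from

def chain_inferences(
    scenario: str,
    ask: Callable[[str], List[Assertion]],   # question-answering backend (assumed)
    max_steps: int = 3,
    threshold: float = 0.2,
) -> List[Assertion]:
    """Chip a scenario into narrower questions, answer them, and chain the
    answers into new hypotheses, tracking which facts support which conclusions."""
    frontier = [Assertion(scenario, 1.0)]
    conclusions: List[Assertion] = []
    for _ in range(max_steps):
        next_frontier: List[Assertion] = []
        for node in frontier:
            # each salient fact seeds a narrower follow-up question
            for cand in ask(f"What follows from: {node.statement}?"):
                cand.supports.append(node)
                cand.confidence *= node.confidence  # uncertainty compounds along the chain
                if cand.confidence >= threshold:
                    next_frontier.append(cand)
        if not next_frontier:
            break
        frontier = next_frontier
        conclusions.extend(frontier)
    # the strongest hypotheses are the ones worth surfacing to a person
    return sorted(conclusions, key=lambda a: a.confidence, reverse=True)

# Toy QA backend with two canned one-step inferences, just to show the flow.
def toy_ask(question: str) -> List[Assertion]:
    canned = {
        "What follows from: fever and stiff neck?": [Assertion("possible meningitis", 0.6)],
        "What follows from: possible meningitis?": [Assertion("consider a lumbar puncture", 0.8)],
    }
    return canned.get(question, [])

for hypothesis in chain_inferences("fever and stiff neck", toy_ask):
    print(f"{hypothesis.statement}  (confidence {hypothesis.confidence:.2f})")
```

The point of the sketch is the shape of the loop: each answer becomes the premise of the next, narrower question, and the chain of supporting facts is kept so a person can inspect how a conclusion was reached.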

At Cleveland Clinic, we found a perfect match for our inference-chaining technology—which helped us evolve it into the application we call WatsonPaths. The Clinic’s Lerner College of Medicine uses problem-based-learning methods to teach students how to think like doctors. Using medical scenarios, they walk step by step through the process a physician goes through to evaluate a patient’s condition and determine the best treatment.

We saw the potential for integrating Watson and computer visualization technologies into this training exercise. Now, working with WatsonPaths, students will be able to review the evidence the computer offers and the inferences it draws. As they work with the system, the path to the best conclusion will become more pronounced graphically. Think of it this way: The students will train the system and, at the same time, the machine will help train the students. The goal is to one day incorporate these kinds of capabilities into future Watson commercial offerings.
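Here is a small, hypothetical sketch of that feedback loop, assuming student confirmations and rejections simply nudge the weight of each inference step up or down; the class and update rule below are illustrative assumptions on my part, not the WatsonPaths design.

```python
# Hypothetical sketch of the student-feedback loop described above;
# not the actual WatsonPaths mechanism.
from collections import defaultdict
from typing import Dict, Tuple

class PathWeights:
    """Tracks how strongly each inference step should be drawn in the path graph."""

    def __init__(self, learning_rate: float = 0.1):
        self.learning_rate = learning_rate
        # every edge starts at a neutral weight until students weigh in
        self.weights: Dict[Tuple[str, str], float] = defaultdict(lambda: 0.5)

    def record_feedback(self, premise: str, conclusion: str, agreed: bool) -> None:
        """Nudge an edge toward 1.0 when a student confirms the step,
        toward 0.0 when the student rejects it."""
        edge = (premise, conclusion)
        target = 1.0 if agreed else 0.0
        self.weights[edge] += self.learning_rate * (target - self.weights[edge])

    def stroke_width(self, premise: str, conclusion: str, max_px: int = 8) -> int:
        """Map an edge's learned weight to a line thickness for the visualization."""
        return max(1, round(self.weights[(premise, conclusion)] * max_px))

# Usage: as students agree with a step, its rendering grows more pronounced.
paths = PathWeights()
for _ in range(5):
    paths.record_feedback("fever and stiff neck", "possible meningitis", agreed=True)
print(paths.stroke_width("fever and stiff neck", "possible meningitis"))
```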

We created a version of WatsonPaths within IBM Research, but we recognized that to be really useful, the technology would have to be super easy to use. We needed a superior graphical user interface. So we turned for help to the folks at IBM’s Design Lab in New York City. Normally, the team develops Web sites and mobile applications to assist in marketing initiatives. Working with us and Cleveland Clinic, they designed a new face for WatsonPaths.

This is just the start of Watson’s interactions with humans. In the future, you can expect cognitive systems to have written and verbal conversations with people—even debates—all aimed at penetrating complexity so we can make better decisions.

One of my colleagues at IBM Research, Dario Gil, envisions rooms where groups of people will interact with cognitive systems—using hand gestures to summon information or insights on screens. In these settings, Dario imagines, each person will see information displayed in the formats that are most useful to them. The systems will “see” what we’re looking at and “hear” what we’re saying, and they will proactively offer us useful data and suggestions.

As a digital marketing professional for 10 years before joining IBM, I came to understand the value of the graphical presentation of information and of interactivity. But now, when I look ahead into the era of cognitive computing, I see a revolution unfolding before my eyes. For the first time, computers will adapt to the way we want to do things, rather than vice versa. That will be a remarkable change.

Via A Smarter Planet
