In health care, you could say radiologists have typically had a pretty sweet deal. They make, on average, around $400,000 a year — nearly double what a family doctor makes — and often have less grueling hours. But if you talk with radiologists in training at the University of California, San Francisco, it quickly becomes clear that the once-certain golden path is no longer so secure.
“The biggest concern is that we could be replaced by machines,” says Phelps Kelley, a fourth-year radiology fellow. He’s sitting inside a dimly lit reading room, looking at digital images from the CT scan of a patient’s chest, trying to figure out why the patient is short of breath.
Because MRI and CT scans are now routine procedures and all the data can be stored digitally, the number of images radiologists have to assess has risen dramatically. These days, a radiologist at UCSF will go through anywhere from 20 to 100 scans a day, and each scan can have thousands of images to review.
“Radiology has become commoditized over the years,” Kelley says. “People don’t want interaction with a radiologist, they just want a piece of paper that says what the CT shows.”
“Computers are awfully good at seeing patterns”
That basic analysis is something Kelley predicts computers will be able to do.
Dr. Bob Wachter, an internist at UCSF and author of The Digital Doctor, says radiology is particularly amenable to takeover by artificial intelligence like machine learning.
“Radiology, at its core, is now a human being, based on learning and his or her own experience, looking at a collection of digital dots and a digital pattern and saying ‘That pattern looks like cancer or looks like tuberculosis or looks like pneumonia,’ ” he says. “Computers are awfully good at seeing patterns.”
Just think about how Facebook software can identify your face in a group photo, or Google’s can recognize a stop sign. Big tech companies are betting the same machine learning process — training a computer by feeding it thousands of images — could make it possible for an algorithm to diagnose heart disease or strokes faster and cheaper than a human can.
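The training process described above can be sketched in miniature. The toy example below is a hypothetical illustration (not any company's actual system): each "image" is a small grid of pixel values, the program "learns" an average pattern for each label from examples, and a new image gets the label whose pattern it most resembles. Real systems use deep neural networks trained on thousands of images, but the core idea, matching a new pattern against learned ones, is the same.

```python
# Minimal illustration of pattern-based classification:
# "train" by averaging labeled example images, then label a new
# image by which learned average pattern it is closest to.
# All data here is synthetic toy data, not real medical imagery.

def train(examples):
    """examples: list of (pixels, label); returns label -> mean pattern."""
    sums, counts = {}, {}
    for pixels, label in examples:
        acc = sums.setdefault(label, [0.0] * len(pixels))
        for i, p in enumerate(pixels):
            acc[i] += p
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(model, pixels):
    """Return the label whose mean pattern is nearest (squared distance)."""
    def dist(proto):
        return sum((a - b) ** 2 for a, b in zip(proto, pixels))
    return min(model, key=lambda lab: dist(model[lab]))

# Toy 2x2 "scans": bright images labeled "abnormal", dark ones "normal".
training_data = [
    ([0.9, 0.8, 0.9, 0.7], "abnormal"),
    ([0.8, 0.9, 0.7, 0.9], "abnormal"),
    ([0.1, 0.2, 0.1, 0.0], "normal"),
    ([0.0, 0.1, 0.2, 0.1], "normal"),
]
model = train(training_data)
print(classify(model, [0.85, 0.9, 0.8, 0.75]))  # a bright new image -> "abnormal"
```

Feeding the trainer more labeled examples refines the learned patterns, which is why the companies in this race need access to huge archives of scans.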
UCSF radiologist Dr. Marc Kohli says there is plenty of angst among radiologists today. “You can’t walk through any of our meetings without hearing people talk about machine learning,” Kohli says.
Both Kohli and his colleague Dr. John Mongan are researching ways to use artificial intelligence in radiology. As part of a UCSF collaboration with GE, Mongan is helping teach machines to distinguish between normal and abnormal chest X-rays so doctors can prioritize patients with life-threatening conditions. He says the people most fearful about AI understand the least about it. From his office just north of Silicon Valley, he compares the climate to that of the dot-com bubble.
“People were sure about the way things were going to go,” Mongan says. “Webvan had billions of dollars and was going to put all the grocery stores out of business. There’s still a Safeway half a mile from my house. But at the same time, it wasn’t all hype.”
“You need them working together”
The reality is this: Dozens of companies, including IBM, Google and GE, are racing to develop formulas that could one day make diagnoses from medical images. It’s not an easy task: to write the complex problem-solving formulas, developers need access to a tremendous amount of health data.
Health care companies like vRad, which has radiologists analyzing 7 million scans a year, provide anonymized data to partners that develop medical algorithms.
The data has been used to “create algorithms to detect the risk of acute strokes and hemorrhages” and help off-site radiologists prioritize their work, says Dr. Benjamin Strong, chief medical officer at vRad.
Zebra Medical Vision, an Israeli company, provides algorithms to hospitals across the U.S. that help radiologists predict disease. Chief Medical Officer Eldad Elnekave says computers can detect diseases from images better than humans because they can multitask — say, look for appendicitis while also checking for low bone density.
“The radiologist can’t make 30 diagnoses for every study. But the evidence is there, the information is in the pixels,” Elnekave says.
Still, UCSF’s Mongan isn’t worried about losing his job.
“When we’re talking about the machines doing things radiologists can’t do, we’re not talking about a machine where you can just drop an MRI in it and walk away and the answer gets spit out better than a radiologist,” he says. “A CT does things better than a radiologist. But that CT scanner by itself doesn’t do much good. You need them working together.”
In the short term, Mongan is excited that algorithms could help him prioritize patients and make sure he doesn’t miss something. Long term, he says radiologists will spend less time looking at images and more time selecting algorithms and interpreting results.
Kohli says in addition to embracing artificial intelligence, radiologists need to make themselves more visible by coming out of those dimly lit reading rooms.
“We’re largely hidden from the patients,” Kohli says. “We’re nearly completely invisible, with the exception of my name shows up on a bill, which is a problem.”
Wachter believes increasing collaboration between radiologists and doctors is also critical.
“At UCSF, we’re having conversations about [radiologists] coming out of their room and working with us. The more they can become real consultants, I think that will help,” he says.
Kelley, the radiology fellow, says young radiologists who don’t shy away from AI will have a far more certain future. His analogy? Uber and the taxi business.
“If the taxi industry had invested in ride-hailing apps maybe they wouldn’t be going out of business and Uber wouldn’t be taking them over,” Kelley says. “So if we can actually own [AI], then we can maybe benefit from it and not be wiped out by it.”
At least for now, Kelley offers what a computer can’t — a diagnosis with a face-to-face explanation.