Jordan Perchik started his radiology residency at the University of Alabama at Birmingham in 2018, during what he calls the field’s “AI scare”. Just two years earlier, computer scientist Geoffrey Hinton, often dubbed the godfather of artificial intelligence (AI), had boldly predicted that machine-learning tools would soon render radiologists obsolete, surpassing humans at reading and interpreting medical scans and X-rays. Applications to radiology programs dropped significantly, and trainees like Perchik worried about the future of their profession.

Fast forward seven years and radiologists remain very much in demand. While AI-based tools have become integral to medical care, surveys indicate that only a small proportion of physicians, somewhere between 10% and 30%, have actually used them. Attitudes range from cautious optimism to outright mistrust, rooted in concerns about the quality and safety of AI applications, and that skepticism means some of the latest AI approaches simply go unused.

Even when AI tools work as intended, it remains uncertain whether they translate into better care for patients; rigorous evaluation in real clinical settings is needed to determine their true impact.

There is growing excitement, however, around a concept known as generalist medical AI: models trained on extensive datasets, much like those powering AI chatbots such as ChatGPT. After ingesting vast amounts of medical images and text, these models can be adapted to many different tasks. Unlike currently approved tools, each of which serves a specific function, such as detecting anomalies on a chest CT scan, generalist models act more like a physician, comprehensively assessing every anomaly in a scan and providing a diagnosis.

While bold claims of machines replacing doctors have receded, many researchers believe these generalist models could overcome the current limitations of medical AI and, in certain scenarios, even surpass human physicians. The goal is for AI to take on tasks that humans are not naturally good at.

Nonetheless, there is a long road ahead before these latest AI tools can be integrated into real-world clinical care.

Current Limitations: AI tools in medicine today primarily act as support for practitioners, helping them review scans quickly and flagging potential issues. At their best they work seamlessly, but their errors can have serious consequences, and radiologists must fall back on their own judgment to catch an AI’s mistakes, which can slow their workflow.

Additionally, existing AI tools tend to address narrow, specific tasks rather than interpreting a medical examination comprehensively, which limits their ability to weigh all the relevant context, including the patient’s clinical history and previous results.

To address these limitations, researchers have explored medical AI with broader capabilities, inspired by large language models such as ChatGPT. These foundation models are trained on large, diverse datasets using self-supervised learning, in which a model learns from unlabeled data by, for example, reconstructing portions of its input that have been hidden from it. That makes them far more flexible and could let them identify patterns that humans cannot.
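To make the idea concrete, the sketch below shows a toy masked-reconstruction objective of the kind used in self-supervised pretraining. It is a minimal illustration in PyTorch, not any particular group’s model: the architecture, the dimensions, and the `MaskedPatchAutoencoder` name are invented for the example, and random tensors stand in for real imaging data.

```python
import torch
import torch.nn as nn

class MaskedPatchAutoencoder(nn.Module):
    """Toy masked-image-modeling objective: hide random patches of a scan
    and train the model to reconstruct them from the visible context."""

    def __init__(self, patch_dim: int = 256, hidden_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(patch_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.decoder = nn.Linear(hidden_dim, patch_dim)

    def forward(self, patches: torch.Tensor, mask_ratio: float = 0.5):
        # patches: (batch, num_patches, patch_dim), e.g. flattened image tiles
        mask = torch.rand(patches.shape[:2], device=patches.device) < mask_ratio
        corrupted = patches.masked_fill(mask.unsqueeze(-1), 0.0)
        recon = self.decoder(self.encoder(corrupted))
        # Loss is computed only on the hidden patches: the model must infer
        # them from the surrounding context, with no human labels needed.
        loss = ((recon - patches) ** 2)[mask].mean()
        return loss

model = MaskedPatchAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scans = torch.randn(8, 196, 256)  # stand-in for a batch of patched X-rays
loss = model(scans)
loss.backward()
optimizer.step()
print(f"pretraining loss: {loss.item():.4f}")
```

The key property is in the loss: no radiologist annotations appear anywhere. The supervision signal comes entirely from the images themselves, which is what lets such models learn from enormous unlabeled archives.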

Big tech companies are investing in medical-imaging foundation models that combine multiple image types and fold in electronic health records and genomics data. Such models have shown promise in improving diagnostic accuracy, often after being adapted with only minimal labeled data.
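That last point, adapting a pretrained model with only a handful of labels, is worth illustrating. The sketch below shows one common recipe, a linear probe: the pretrained encoder is frozen and only a small classification head is trained on the scarce labeled examples. Again, the encoder, dimensions, task, and data are stand-ins, assuming an encoder pretrained as in the previous sketch.

```python
import torch
import torch.nn as nn

# Stand-in for an encoder pretrained with self-supervision, as sketched
# above; in practice this would be a large frozen foundation model.
encoder = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 128))
for param in encoder.parameters():
    param.requires_grad = False  # freeze the pretrained weights

# A lightweight classification head is the only part that gets trained.
head = nn.Linear(128, 2)  # hypothetical task: normal vs. abnormal scan
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A deliberately tiny labeled set: 32 expert-labeled scans (random here).
scans = torch.randn(32, 196, 256)   # (batch, patches, patch features)
labels = torch.randint(0, 2, (32,))

for step in range(20):
    features = encoder(scans).mean(dim=1)  # pool patch features per scan
    loss = loss_fn(head(features), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.4f}")
```

Because only the small head is updated, a few dozen labeled examples can suffice, which is why pretrained foundation models are attractive in medicine, where expert labels are expensive.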

Much of the excitement surrounds the diagnostic potential of AI tools, but diagnosis is also where they must clear the highest bar. Other medical applications, such as matching participants to clinical trials, may have a more immediate impact.

Despite these advances, radiologists like Perchik expect AI to augment their role rather than replace it. The challenge now is training medical professionals to work effectively with the technology, and efforts are underway to demystify AI and to manage expectations about what it can and cannot do.

The future of AI in medicine holds promise, but responsible integration and ongoing research are vital to ensure its safe and effective use in clinical care.

By Impact Lab