Katie Sarvela was in her bedroom in Nikiski, Alaska, perched on a moose-and-bear-themed bedspread, when she typed her first symptoms into ChatGPT. She vividly described sensations like half of her face feeling as if it were on fire, intermittent numbness, unexplained damp skin, and night blindness, seeking insight from the AI chatbot.

Despite ChatGPT’s initial disclaimer that it couldn’t provide medical diagnoses, its suggestion was astonishing: multiple sclerosis. This autoimmune disease, which targets the central nervous system, had first manifested in Sarvela in her early twenties. While the chatbot’s conclusion wasn’t an official diagnosis, its accuracy surprised both Sarvela and her neurologist, prompting further medical investigation.

ChatGPT, an AI-powered chatbot trained on vast amounts of text from the internet, garnered widespread attention in 2023. Based on the GPT-3.5 model, it offered a free, accessible tool for users to receive personalized information in a conversational tone. Its ability to quickly synthesize information and tailor results sparked comparisons to the widely practiced habit of “self-diagnosing” or consulting “Dr. Google” before seeing a healthcare professional.

For people like Sarvela who have spent years with undiagnosed symptoms, ChatGPT offered a potential lifeline: a chance to ask personalized questions and save crucial time in a healthcare system marked by long wait times, medical gaslighting, bias, and communication gaps.

However, entrusting any tool, including ChatGPT, with influence over one’s health carries inherent risks. The AI’s most notable limitation is its tendency to generate inaccurate information, known in AI circles as “hallucinations.” Relying on such output without professional consultation could have dangerous consequences. Dr. Karim Hanna, chief of family medicine at Tampa General Hospital, acknowledged ChatGPT’s diagnostic potential but emphasized that it is a complementary tool, not a substitute for medical professionals.

Despite the caveats, ChatGPT offers a distinct advantage over traditional internet searches. Dr. Hanna compared it favorably to Google, considering it more than a mere search engine. Even so, users still need to distinguish reliable sources of medical information, avoid “cyberchondria,” and guard against false reassurance that might lead them to overlook serious health issues.

A 2022 survey by PocketHealth found that “informed patients” draw on a variety of sources, combining insights from doctors, the internet, articles, and online communities. While the internet democratizes medical information, it also brings risks of anxiety and misinformation, so individuals must navigate the abundance of available data carefully.

Research examining ChatGPT’s accuracy in self-diagnosing orthopedic conditions found its performance inconsistent, suggesting it works best as a preliminary step in care rather than a final answer. Doctors like Dr. Kaushal Kulkarni see AI’s strength not just in diagnosis but in helping patients build a comprehensive understanding of a condition and research it after diagnosis.

Despite its promise, ChatGPT has run into trouble, such as hallucinating fake references, as reported in a study published in JAMA Ophthalmology. The risk of misinformation underscores the need for users to critically evaluate what the chatbot presents.

In the quest for informed patient care, experts suggest using ChatGPT to frame discussions with healthcare professionals, especially for people with chronic illnesses. Patients can prepare for doctor’s appointments with the ICE method: identifying ideas, expressing concerns, and aligning expectations.

As these AI tools evolve, individuals like Katie Sarvela and countless others grappling with undiagnosed conditions hope for more informed, efficient, and collaborative healthcare solutions. While AI presents a promising avenue, it remains imperative to tread carefully, acknowledging the limitations and ensuring professional oversight in matters of health.

By Impact Lab