The Discomfort Overlooked by Doctors, Caught by AI: A Woman Misdiagnosed for Years Finds Her "True Diagnosis" Through AI

"AI Discovers Woman's Rare Disease." At first glance, such a headline might suggest a story where a chatbot outperformed a doctor. However, the essence of this event lies not in the omnipotence of AI, but in the fact that the patient's complaints were not addressed head-on for years. Phoebe Tesoriere, a 23-year-old living in Cardiff, Wales, suffered from symptoms like walking difficulties, sensory abnormalities, incontinence, and abnormal reflexes, while being given multiple explanations such as anxiety, depression, and epilepsy. Even after experiencing days of coma following a seizure, she couldn't reach a satisfactory answer.

According to reports, after repeated visits to the emergency department, she was told that if she kept coming back she might be treated as a psychiatric patient. What shocked many people was not that AI arrived at the diagnosis, but that a patient with such complex and severe symptoms was nearly dismissed as having "imaginary" or "mental" problems. Rare diseases are genuinely hard to detect in clinical settings. Failing to detect them, however, is not the same as brushing off a patient's complaints. That distinction is why this story is more than a technological success story.

The turning point came when she entered her symptoms into an AI chatbot. The chatbot listed Hereditary Spastic Paraplegia (HSP) among the possible conditions. She brought this possibility to her general practitioner, and subsequent genetic testing confirmed the diagnosis. In other words, it was a medical institution, not AI, that ultimately confirmed the disease. What AI provided was the footing the patient needed to raise the possibility with her doctor.

HSP is a rare, progressive group of hereditary disorders characterized by muscle weakness and stiffness in the legs, walking difficulties, and related problems. It comes in many types, broadly divided into pure and complicated (complex) forms; the complicated form can also involve the upper limbs, sensory and urinary function, and other neurological symptoms. Both the NHS and the US NINDS describe HSP as a "rare and progressive hereditary neurological disorder," noting that it is difficult to diagnose and hard to distinguish from other diseases. This case, then, is not a simple dichotomy of "incompetent doctors versus competent AI," but a question of how patients can protect themselves given the constraints of limited consultation time and the inherent difficulty of rare diseases.

What matters here is that the AI did not "diagnose"; it "organized hypotheses." People cannot always describe bodily changes in medical terms, and when multiple symptoms occur at once, each may be treated as a separate issue, obscuring the overall picture. An AI chatbot can compile these fragments and return a list of candidate conditions. The value in this case was not merely that a rare disease appeared on the list, but that the patient could then articulate, "I want to be tested in this direction." The AI did not replace medical care; it functioned as a supplementary route into it.

The reason this story spread so widely on social media is that its pattern resonated with many people's experiences. On LinkedIn there were cautious affirmations like "AI can be a lot of nonsense, but sometimes useful," along with accounts from people who felt AI had given them more accurate answers than doctors for conditions like ME/CFS. Others stressed that "AI can offer a lot... when used with care," emphasizing careful use. The center of the discussion was not AI worship but empathy among people who had themselves felt unheard in medical settings.

On X, the emotional reactions were even stronger. One post expressed anger at the medical response itself, suggesting that "psychiatric labels are being overused." Another noted that what was once called "Dr. Google" is now being replaced by "Dr. AI." In other words, social media read this event not merely as tech news but in the broader context of distrust in medicine, dissatisfaction with the tendency to downplay women's symptoms, and the need for patients to advocate for themselves.

It is dangerous, however, to be swept up in the enthusiasm. A study published by Oxford University in February 2026 found that participants who used AI for health consultations showed no clear advantage in real-world decision-making over those who used traditional search engines or the NHS website. One reason is that users often fail to give the AI the information it needs. Another is that AI responses mix good and bad information, which users struggle to tell apart. The study also flagged that answers can change significantly with slight changes in how a question is phrased. In short, while AI can sometimes provide useful hints, it is not, on its own, a safe decision-making tool.

OpenAI itself explains that AI in the health field is "a support for medicine, not a substitute for diagnosis or treatment." That is a cautious corporate stance, and an important lens for interpreting this case. Phoebe's story should be read not as AI standing above medicine, but as a patient finally finding a "detour" into medical care. The question this event raises, then, is not only "Should AI be used more for diagnosis?" but also "Why did the patient have to go to such lengths to be heard?"

With AI entering the medical field, patients have gained a new tool: it helps them organize symptoms, learn technical terms, and decide what to ask their doctor next. At the same time, it is a double-edged sword that can reinforce misinformation and misconceptions. If this case is consumed merely as a "miracle where ChatGPT detected a disease," we lose sight of what actually needs to be learned. What is needed is neither blind faith in AI nor its wholesale rejection, but a relationship in which AI helps patients' voices be heard sooner and more attentively.

In the end, the protagonist of this story is not AI. It is a patient who kept suffering without answers and refused to give up on the changes happening in her body. AI merely gave words to that persistence. Yet sometimes, simply arriving at the right question is what finally brings a person closer to appropriate care. In that sense, this event should be remembered not as a victory for AI, but as the moment an "unheard voice" was finally made visible.


Sources

  1. G1 (Globo, Brazil) report: how an AI chatbot discovered a woman's rare condition after years of wrong diagnoses.
    https://g1.globo.com/tecnologia/noticia/2026/04/14/como-chatbot-de-ia-descobriu-condicao-rara-de-mulher-apos-anos-de-diagnosticos-errados.ghtml
  2. Article summarizing and reprinting content from the BBC. Used to confirm Phoebe Tesoriere's background, the process of inputting symptoms into AI, and confirmation through genetic testing.
    https://www.pslhub.org/blogs/entry/9724-chatgpt-uncovered-womans-rare-condition-after-years-of-misdiagnosis/
  3. Used to confirm specific examples of symptoms and the supplementary information that she was diagnosed with complex limb-type HSP in August 2025.
    https://www.ladbible.com/news/health/anxiety-chatgpt-health-diagnosis-phoebe-tesoriere-hsp-410723-20260407
  4. Basic explanation of HSP (Hereditary Spastic Paraplegia). General explanation by NHS.
    https://www.nhs.uk/conditions/hereditary-spastic-paraplegia/
  5. Confirmation of HSP's definition and its status as a progressive neurological disorder. Explanation by the US NINDS.
    https://www.ninds.nih.gov/health-information/disorders/hereditary-spastic-paraplegia
  6. Supplementary confirmation that HSP is a hereditary and progressive group of neurological disorders. Explanation by the US GARD.
    https://rarediseases.info.nih.gov/diseases/6637/hereditary-spastic-paraplegia
  7. Oxford University's announcement regarding the risks of AI in medical consultations. Used to confirm the point that "good and bad information are mixed, making it difficult for users to distinguish."
    https://www.ox.ac.uk/news/2026-02-10-new-study-warns-risks-ai-chatbots-giving-medical-advice
  8. Supplement to the same Oxford study. Used to confirm the point that answers can change significantly with slight differences in the phrasing of questions.
    https://www.oii.ox.ac.uk/news-events/new-study-warns-of-risks-in-ai-chatbots-giving-medical-advice/
  9. Reuters article reporting on the Oxford study. Used to confirm that AI users did not show a significant advantage in real-world decision-making.
    https://www.reuters.com/business/healthcare-pharmaceuticals/ai-no-better-than-other-methods-patients-seeking-medical-advice-study-shows-2026-02-09/
  10. Official explanation by OpenAI. Used to confirm the policy that AI in the health domain is a support for medicine, not a substitute for diagnosis or treatment.
    https://openai.com/index/introducing-chatgpt-health/
  11. LinkedIn post used to confirm social media reactions. Source of cautious affirmations and empathetic reactions in the comments section.
    https://www.linkedin.com/posts/anilvanderzee_chatgpt-diagnoses-cardiff-womans-rare-condition-activity-7448414384688861184-JEaN
  12. X post used to confirm social media reactions: anger and skepticism toward the medical response.
    https://x.com/Miroandrej/status/2042668305190948993
  13. Used to confirm critical reactions towards the handling of psychiatric labels on X.
    https://x.com/senmum05