"Chatbot Over Doctors?" The Real Reason Tired Patients Turn to AI: The Pros and Cons of Chatbot Medical Consultations

"Chatbot Over Doctors?" The Real Reason Tired Patients Turn to AI: The Pros and Cons of Chatbot Medical Consultations

1. The Choice of "Turning to Chatbots Instead of Hospitals"

An article published in the Well section of the New York Times ran under the headline "Frustrated by the medical system, patients turn to chatbots."


The article depicts how ordinary people in America are starting to turn to AI first for health advice, a trend driven by dissatisfaction with long waits for appointments, rushed 15-minute consultations, and exorbitant medical costs. (Reddit)


On the other hand, with AI chatbots,

  • there's no need for waiting rooms or appointments

  • you can ask questions almost for free, 24/7

  • and they listen attentively without interrupting.

From the patient's perspective, "asking AI" now presents a lower hurdle, both psychologically and financially, than "going to the doctor."

2. The Mood on Social Media: Welcome and Confusion

The NYT article quickly spread on social media, sparking intense debate, especially in r/medicine, the Reddit community where medical professionals gather. (Reddit)


Voices Saying "It's Just Too Convenient"

One poster lamented, "Clients can call for free, yet they still choose to consult chatbots for legal advice." Although this is about law, the same dynamic applies to healthcare: the allure of convenience and immediacy is so great that people turn to AI even when they could ask an expert for free. (Reddit)


A doctor also commented that hospitalized patients are using ChatGPT to understand their conditions. Even with real doctors and nurses at their bedside, patients direct their questions to their smartphone screens. (Reddit)


This reflects a mindset of "I hesitate to take up a busy doctor's time, but I can ask AI as many questions as I want."


Voices Raising the Alarm: "Is That Really Safe?"

On the other hand, strong concerns were also voiced in the same thread. One post highlighted two problems:

  • the privacy risks of patients entering their health information directly into commercial LLMs

  • the potential for models' sycophancy, their tendency to cater to users, to produce dangerous advice. (Reddit)


In fact, an article on the safety of medical AI chatbots published on Windows Forum explains that sycophancy often leads chatbots to "not correct patients' misconceptions and proceed with conversations based on incorrect assumptions." (Windows Forum)


The false sense of security that comes from "the AI agreed with me" is precisely what makes it dangerous.


Cases Where AI Became a "Weapon"

In another thread, a case was shared in which a family used Anthropic's chatbot Claude to comb through a $195,000 bill (about 30 million yen) for a four-hour ICU stay; by identifying duplicate charges and incorrect billing codes, they got it reduced to about $33,000. (Reddit)


This is an extreme example, but it shows that **AI is beginning to serve as a "weapon" for patients facing an overly complex medical system.**

3. Research Data Demonstrating the Competence of "AI Consultation"

The potential of AI chatbots is supported not only by anecdotes but also by research data.


A Study That Rated AI "Higher in Quality and Empathy" Than Doctors

In a study published in 2023, responses to health questions posted on an online forum were written by both doctors and ChatGPT, then blindly rated by experts for quality and empathy. The results showed that

  • responses rated "good" or "very good" in quality: 22% for doctors versus 78% for the chatbot

  • responses rated "empathetic": about 5% for doctors versus 45% for the chatbot

with the AI significantly outperforming the doctors. (ResearchGate)


Of course, this result applies only to text-based exchanges, but the pattern of AI being rated highly for "taking the time to explain carefully" matches the impressions seen on social media.


The "Tag Team" of Doctors + AI Might Be the Strongest?

A study from Stanford University compared standalone chatbots with doctors who made decisions after reviewing chatbot suggestions, and found that doctors who referenced the AI's options made better judgments than doctors working alone. (Stanford Medicine)


In other words, AI chatbots are considered to perform best not as a "replacement for doctors" but as a "partner that reinforces doctors' decisions."

4. Significant Risks Still Remain

However, this doesn't mean "AI alone is enough." In fact, this is where the problems begin.


① Hallucinations and Sycophancy

A review of the safety of medical AI chatbots identifies

  • plausible but misleading information (hallucinations)

  • the tendency to go along with users' misconceptions (sycophancy)

as the most dangerous concerns. (Windows Forum)


Even with no malicious intent, AI's inclination to "answer whatever it is asked" can produce misinformation and dangerous advice.


② Lack of Context

A Washington Post editorial warns that "chatbots cannot access an individual patient's full medical history or social background, and so risk missing important context." (The Washington Post)


For example, the same abdominal pain can call for very different judgments depending on

  • recent surgical history

  • whether the patient is taking anticoagulant medication

  • the possibility of pregnancy

and other factors. Writing out such complex context in a chat box is unrealistic, and patients often do not realize which details matter.


③ Privacy and Data Ownership

Furthermore, The New Yorker points out the privacy risks: "The moment you input medical information into a chatbot, it becomes unclear whose data it is." At some companies, chat logs have reportedly been visible from search engines. (The New Yorker)

The WHO also states that while AI can save healthcare providers' time, it is essential to consider personal data protection, bias, and equitable design and governance. (cens.cl)


5. A Practical Guide to "Smart Usage"

So, how should we interact with AI chatbots? UW Medicine at the University of Washington lists the following points as "tips for using ChatGPT for health consultations." (Right as Rain by UW Medicine)


Drawing on those tips, here is a practical guide organized for everyday use.


Situations Where AI Chatbots Are Suitable

  1. Preparing Questions Before a Consultation

    • Explain the symptoms you're concerned about and ask, "List the questions I should ask the doctor."

    • This reduces the chance of missing important questions during a short consultation (see the example prompt after this list).

  2. Basic Understanding of Test Results and Diagnoses

    • Based on what the doctor has already explained, ask, "Explain it in simpler terms" or "Make it understandable for a 10-year-old."

    • Think of it as using AI as a "medical interpreter" to translate difficult medical jargon into understandable language.

  3. Generating Ideas for Lifestyle Improvements

    • In relatively low-risk areas like sleep, exercise, and diet, AI can be a helpful partner in thinking about action plans.


Situations Where AI Chatbots Should Not Be Relied Upon

  1. When an Emergency Is Suspected, Such as Chest Pain, Difficulty Breathing, or Suddenly Worsening Symptoms
    → Go to an emergency room or call an ambulance first; the time spent consulting AI could itself be dangerous.

  2. Decisions on Changing, Stopping, or Adjusting Medication