Can AI Enter the Examination Room? Doctors Discuss "Where It Should Be Used / Where It Should Be Avoided"


1) Doctors are not against AI. The issue is "AI speaking directly to patients"

The TechCrunch piece does not frame this as a "doctors vs. AI" conflict. Both doctors and investors agree that AI can reduce healthcare's inefficiencies. The caution is about the moment AI becomes the **interface with patients**: at that point, questions of responsibility and safety become far heavier.


In the discussion around TechCrunch's LinkedIn post, the concern was not AI itself but the point that once AI starts handling patient interactions, it becomes unclear where accountability sits. In healthcare, when an outcome is bad, someone must bear legal, ethical, and clinical responsibility.



2) How a striking figure like "45% pulmonary embolism" distorts medical practice

Dr. Sina Bari, a surgeon and a leader in the medical AI field, described being shocked by a generative AI answer that a patient brought to him. It claimed the recommended medication carried a "45% probability of pulmonary embolism," but on investigation the figure came from a paper about a narrow group of tuberculosis patients and did not apply to that patient.


This is exactly what makes medical chatbots frightening.
In medicine, the meaning of a number changes with the target population, the conditions, and the assumptions behind it. Yet **numbers carry weight**: patients grow anxious, and doctors have to spend the start of the consultation correcting misunderstandings. The more confidently an error is stated (the so-called hallucination problem), the higher the communication cost and the risk.
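
As a minimal, purely hypothetical sketch of why a figure cannot be carried from a study cohort to an unrelated patient, the toy Bayes calculation below uses made-up numbers (none come from the article or the paper in question) to show how the same finding implies very different risks once the baseline population changes.

```python
# Hypothetical illustration only: all numbers are invented for demonstration.
# The point is that a risk estimate measured in a narrow, high-risk study
# cohort does not transfer to a patient drawn from a different population.

def posterior_risk(prior: float, sensitivity: float, false_positive_rate: float) -> float:
    """Bayes' rule: P(condition | positive finding) for a given baseline prevalence."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Same finding characteristics, very different baseline risk (both made up):
populations = {
    "narrow study cohort": 0.30,  # hypothetical high-risk group like the paper's subjects
    "general outpatient": 0.01,   # hypothetical typical patient
}

for label, prior in populations.items():
    risk = posterior_risk(prior, sensitivity=0.90, false_positive_rate=0.20)
    print(f"{label}: post-finding risk of about {risk:.0%}")

# narrow study cohort: post-finding risk of about 66%
# general outpatient: post-finding risk of about 4%
```

The arithmetic is trivial, but it is exactly the step a confident-sounding chatbot answer skips when it quotes a cohort-specific percentage to an individual patient.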



3) Why Dr. Bari is still optimistic about "ChatGPT Health"

Surprisingly, even after this experience, Dr. Bari says he feels "more anticipation than concern" about OpenAI's ChatGPT Health. The reason is simple: patients consulting AI about their health is already a reality. Better, then, to provide a more private space with protective guardrails and fold the practice safely into an institutional framework that matches how people actually behave.


ChatGPT Health is designed to separate health-related conversations into a dedicated space, and, according to OpenAI, those conversations are not used to train its foundation models. There are also plans to deepen personalization through medical-record uploads and integrations with Apple Health and MyFitnessPal.



4) "Don't use it because it's dangerous" doesn't work—the reality of a 3 to 6-month wait

Dr. Nigam Shah of Stanford argues that the debate over medical AI cannot be reduced to accuracy alone. With waits of three to six months for primary care not uncommon, he poses the question: "Do you wait half a year for a real doctor, or talk to someone who is not a doctor but can do something?"


This perspective matters. Where access to care is adequate, people can afford to decide "it's dangerous, so don't use it." But for those who cannot get an appointment, or for whom an appointment would come too late, AI becomes **better than nothing**. The focus of the discussion therefore shifts from "can AI be banned?" to how to design products and systems that reduce harm.



5) The shortcut for medical AI is giving doctors back their time, not standing in for them with patients

Dr. Shah suggests that the realistic route is deployment on the **healthcare provider side** rather than patient-facing chat. In primary care, administrative work is said to consume roughly half of a doctor's time; automating it would let doctors see more patients and ease the pressure that pushes patients toward a "stand-in doctor" AI in the first place.


ChatEHR, under development at Stanford, aims to make searching electronic health records conversational, cutting the time doctors spend hunting for information and increasing the time they spend talking with patients.


In the same vein, Anthropic's Claude for Healthcare is described as potentially shortening "tedious but essential" tasks such as prior authorization, with savings of 20 to 30 minutes per case cited as adding up to substantial time.



6) Privacy: What happens the moment it steps outside HIPAA

When patients hand medical records to a chatbot, the data leaves the hospital's control, and the protections that apply to it can change. Itai Schwartz, co-founder of a data loss prevention company, points to the problem of medical data moving from HIPAA-covered organizations to vendors that are not covered, and says he is watching closely how regulators respond.


The same anxiety runs strong on social media. In Hacker News discussions, people warn that health data sitting outside HIPAA could feed future monetization (insurance, employment, and the like) and complain that complicated settings lead to unintentional sharing.


OpenAI, for its part, explains that Health is a support for medical care rather than a substitute for it, and that it adopts a dedicated protection design. The condition for trust will be making those claims verifiable through audits, regulation, and transparency, not just through company promises.



7) Reactions on social media: Opinions are divided, but the issues are mostly the same

On **LinkedIn (in the thread around the TechCrunch post)**, the view that "AI supporting doctors with diagnostic assistance and documentation is welcome, but AI that talks directly to patients blurs the boundary of responsibility" drew support. In short, the concern is less about convenience than about where responsibility sits.


On Hacker News, opinions diverge even more sharply.
Some share experiences of AI helping them understand test results or prepare for conversations with their doctors, arguing that regulation is needed but the value should not be thrown away.


Others worry about AI's tendency toward agreement-seeking (users rerunning conversations until they get the answer they want) and the risk of amplifying self-diagnosis, excessive supplement use, and distrust of medicine. There are also strong voices insisting that if such tools are offered to the public, independent safety certification should come first.


This conflict ultimately converges on the following three points.

  1. The worse medical access is, the stronger the demand for AI (and the harder it is to stop)

  2. The harm from misinformation concentrates on the most vulnerable (the more desperate people are, the more they rely on it)

  3. The worst outcome is spread with responsibility and privacy left ambiguous (hence the need for rules and systems)



8) So how should patient-facing medical chatbots be used to "reduce accidents"?

In conclusion, medical AI should be used not as a substitute for diagnosis but as an aid that improves the quality of the consultation itself, for example:

  • Organizing symptoms and progress, and noting key points to convey during consultations

  • Understanding general explanations of test results, medications, and disease names (with the final judgment left to a doctor)

  • Listing questions to ask at the next consultation


At the societal level, it makes more sense to first build out **provider-side AI (chart search, administrative work, prior authorization, and so on)** and to invest in shrinking the "six-month wait" itself.


As Dr. Bari mentioned, protecting patients requires healthcare providers to be "conservative and cautious." Medical AI should be a tool that supports, not disrupts, that caution.



Reference

Doctors think AI has a place in healthcare, but maybe not as a chatbot (TechCrunch)
Source: https://techcrunch.com/2026/01/13/doctors-think-ai-has-a-place-in-healthcare-but-maybe-not-as-a-chatbot/