
OpenAI Reaffirms ChatGPT Policy: "Prohibition on Legal and Medical Advice" Was Not a "New Rule"—What Has Changed and What Hasn't


November 16, 2025, 19:50

1. What Became a Hot Topic: The Misunderstanding of "ChatGPT Banning Medical and Legal Consultations?"

In early November 2025, a post claiming that "ChatGPT has completely banned medical, legal, and financial advice" spread globally on social media. One trigger was reports and posts cited by news sites stating, "Chatbots will no longer provide specific advice on treatment, law, or money" (Khaleej Times and others).

Particularly viral was a message posted on X by the betting platform Kalshi stating, "ChatGPT no longer provides health or legal advice" (later deleted), as reported by The Verge.

As a result,

  • "Will it no longer answer questions about illnesses?"

  • "Will it be impossible to check contracts or consult about taxes?"

  • "Has ChatGPT been downgraded to just a 'chat tool' due to AI regulations?"

Such anxieties and speculations spread among users worldwide, including in Japan.


However, OpenAI's official explanation contrasts with this. According to a report by NDTV Profit, OpenAI stated, "The behavior of ChatGPT has not changed," and "The rules regarding legal and medical advice have existed for some time and were not newly introduced."



2. OpenAI's Official Explanation: "The Model's Behavior Has Not Changed"

Karan Singhal, OpenAI's Head of Health AI, directly refuted these rumors on social media. According to The Verge, Singhal posted the following on X:


"This is not true. Speculation is spreading, but this is not a new change in the terms of use. The model's behavior has not changed. ChatGPT has never been a substitute for professionals, but it remains an excellent resource for understanding legal and health information."


There are two key points.

  1. There are no "new prohibitions" added
    The rule prohibiting "individual specific advice" in high-risk areas such as law, medicine, and finance without the involvement of professionals has existed for some time (The Times of India and others).

  2. General information provision will continue
    General explanations of diseases, overviews of laws and systems, explanations of precedents and news, and general health information will continue to be provided as "educational information" or "explanations to aid understanding." OpenAI also positions ChatGPT as an "Educational Tool" (NDTV Profit and others).


In other words, ChatGPT has not "stopped answering altogether." Rather, OpenAI has restated what the service was never supposed to do in the first place, emphasizing that users should always consult a human expert in those areas.



3. What Actually Happened with the Usage Policy Revision on October 29

So, what changed with the update on October 29, 2025? OpenAI updated its Usage Policies on its official site, consolidating documents that were previously divided (OpenAI and others).


3-1. Integration and Clarification of Policy Documents

Previously, documents were divided according to use:

  • a "Universal Policy" common to all users

  • a policy specific to ChatGPT

  • a policy for developers using the API

In this revision, these have been unified and organized as "Usage Policies common to all OpenAI services" (The Times of India).


Furthermore, major pillars were established:

  • "Protect people"

  • "Respect privacy"

Examples of prohibited uses are listed under these categories.



3-2. Specific Descriptions Regarding Legal and Medical Advice

In the "Protect people" section of the Usage Policies, one of the prohibited actions includes the following wording (OpenAI):

"Providing 'tailored advice' in licensed fields (such as law and medicine) without the involvement of appropriately qualified professionals."


This is almost the same in intent as the previous policy wording:

"Prohibiting the provision of individual advice on law, medicine, and finance without expert review, as actions that could significantly harm people's safety, rights, and welfare."

In other words, the content is a "continuation" and a "clarification of wording"; no new prohibitions were suddenly added (The Times of India and others).



3-3. Restrictions on Automation in High-Risk Areas

The current policy also restricts automating decision-making solely by AI in areas that significantly impact people's lives, such as law, medicine, finance, housing, employment, and insurance (Khaleej Times and others).


  • Making hiring or firing decisions solely by AI

  • Deciding insurance payouts solely by AI

  • Determining treatment plans solely by AI


Such actions carry high risks. OpenAI states that human expert review and decision-making should always be included in such uses.



4. Why Restrict "Legal and Medical Advice"?

There are several reasons why OpenAI is cautious about legal and medical advice.


4-1. Risks Due to Incorrect or Incomplete Information

Generative AI can produce plausible text but may also cause factual errors (so-called "hallucinations"). Especially in medicine and law,

  • Slight differences in facts can change conclusions

  • Rules vary by country, region, and time

  • Optimal solutions change based on individual symptoms and circumstances

These characteristics make it dangerous to apply "generalizations" directly to individual cases.


For example,

  • Even with the same disease name, safe treatment methods change based on pre-existing conditions and medications being taken.

  • In contract disputes, the necessary actions likewise depend on the contract's wording and the details of interactions with the other party.

In such situations, if AI provides incorrect advice and users misunderstand it as "expert opinion," real health damage or legal troubles can occur.

4-2. Ambiguity in Responsibility

If significant damage results from actions taken based on AI advice, the question of "who is responsible" arises.

  • Is it the company providing the AI?

  • Is it the developer using the model?

  • Is it the user who adopted the AI advice?

  • Or is it the organization that decided to implement AI?


Currently, discussions are ongoing in various countries, and many areas still lack clear rules. By requiring human involvement rather than "AI-only decisions" in high-risk areas, OpenAI aims to avoid unnecessarily increasing risk in these gray areas.



4-3. Consistency with Professional Ethical Standards

Doctors, lawyers, tax accountants, and other professionals have their own ethical standards and legal obligations.

  • Doctors: Medical Practitioners Act, medical advertising guidelines, duty to explain medical treatments, etc.

  • Lawyers: Lawyers Act, basic regulations on lawyer duties, confidentiality obligations, etc.


AI is not directly subject to these "obligations imposed on qualified professionals," so the same level of responsibility and ethical standards cannot be expected. OpenAI's repeated emphasis that "ChatGPT is not a substitute for professionals" is rooted in this reality (NDTV Profit and others).



5. ChatGPT Still Useful: As a "Navigator" for Understanding Information

If "individual advice" is restricted, does that mean ChatGPT is no longer useful? OpenAI does not think so. The key is to correctly position its role.


5-1. Translating Complex Information into "Understandable" Terms

The worlds of law and medicine often involve specialized terminology and lengthy documents, which can be difficult for the general public to understand. ChatGPT can

  • explain the meanings of technical terms in simple language


© Copyright ukiyo journal - 日本と世界をつなぐ新しいニュースメディア All rights reserved.