
Private AI chat leaked to Google? The ChatGPT "leak scandal" reexamines privacy in the AI era

August 3, 2025, 09:41

1. The Beginning: Fast Company Unveils "Conversations That Shouldn't Be Seen"

On July 31 (U.S. time), Fast Company broke the story that Google was indexing ChatGPT conversations. Running the search operator "site:chatgpt.com/share" returned a list of URLs for conversations whose sharing had been set to public. These included counseling sessions related to PTSD, discussions about layoffs, and even unpublished research ideas. (Fast Company)


2. The Pitfall of "Make this chat discoverable"

The root of the problem was a sharing feature rolled out experimentally to some users since late June. When sending a chat to a friend, two buttons appeared: ① "Create a shareable link" and ② "Make discoverable in search." Turning on the latter allowed crawlers from Google and others to access the conversation. The cautionary note below the button, however, was rendered in light gray text, and UX researchers criticized the design as one that invites accidental taps. (Windows Central, Search Engine Land)
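Mechanically, nothing exotic was involved: a shared conversation is just a public web page, and search engines will index it unless the page or site explicitly opts out. As a rough sketch of the two standard opt-out signals a crawler respects (the URL and markup here are assumptions for illustration, not OpenAI's actual implementation):

```python
# Minimal sketch: check whether a page opts out of search indexing.
# The URL is hypothetical; this is not OpenAI's actual markup or headers.
import re
import requests

def is_indexable(url: str) -> bool:
    """Return False if the page carries a standard noindex signal,
    via the X-Robots-Tag header or a robots meta tag."""
    resp = requests.get(url, timeout=10)
    # Header-level directive, e.g. "X-Robots-Tag: noindex"
    if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        return False
    # Crude meta-tag check; a real crawler parses the HTML properly.
    if re.search(r'<meta[^>]+name=["\']robots["\'][^>]*noindex', resp.text, re.I):
        return False
    return True

if __name__ == "__main__":
    print(is_indexable("https://example.com/"))
```

If neither signal is present, a public URL that Google discovers is, by design, fair game for indexing.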


3. 4,500 Exposed: More Serious Than the Number

Crawls conducted by Fast Company and Tom's Guide revealed at least 4,500 public links. About 30% of these contained information such as email addresses, internal project names, and medical histories, posing GDPR violation risks. On Reddit's /r/privacy thread, users reported, "My self-introduction for a job interview was fully visible," and "I had shared NDA materials publicly." (Tom's Guide, Reddit)


4. OpenAI's Emergency Response and Accountability

Late on the 31st, OpenAI CSO Dane Stuckey announced on X that "although it was a short-term experiment, it led to accidental sharing" and that the feature would be removed immediately. He added that OpenAI is working with search companies to de-index existing links and would remove the button for all users within 24 hours.


5. Social Media Uproar: "#ChatGPTLeak" and "#PrivacyFail" Trending

  • X

    • "My mental health consultation is in search results. Is this the 'future of AI'?"

    • "Google and OpenAI, another 'opt-in' disguised as an 'opt-out hell.'"

  • Reddit (r/privacy)

    • "Surprised... or maybe not again."

    • "Rename the share button. 'Publish to the world' is more accurate."

  • Instagram Reels saw short videos spread rapidly, with comment sections in a panic: "Even if links are deleted, caches remain," and "Trade secrets are leaking too." (Reddit, Instagram)


6. Expert Perspective: "Double-Check Before Sending, Just Like Email"

Business Insider pointed out that "shareable links are essentially the same as cloud public URLs." TechCrunch analyzed that "because the structure is the same as Google Drive's 'Anyone with link,' it is no surprise they were picked up by search." Many information security researchers agree that the principle of "minimum exposure" should be strictly observed when using generative AI. (Business Insider, TechCrunch)
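The "Anyone with link" comparison is worth unpacking: such links are capability URLs, where an unguessable token in the path is the only access control. A minimal sketch of that pattern, using a hypothetical share store (not OpenAI's implementation):

```python
# Minimal sketch of "anyone with the link" (capability URL) semantics.
# The share store and domain are hypothetical, not OpenAI's implementation.
import secrets

SHARED: dict[str, str] = {}  # token -> conversation text

def create_share_link(conversation: str) -> str:
    # The token is unguessable, but it stops being a secret the moment
    # the URL is posted, crawled, or indexed: holding the link IS the permission.
    token = secrets.token_urlsafe(16)
    SHARED[token] = conversation
    return f"https://example.com/share/{token}"

def fetch_shared(url: str) -> str | None:
    # No login, no access check: a friend, a crawler, and a search
    # index are all indistinguishable to the server.
    token = url.rsplit("/", 1)[-1]
    return SHARED.get(token)

if __name__ == "__main__":
    link = create_share_link("draft of an unpublished research idea")
    print(fetch_shared(link))  # anyone holding `link` gets the same result
```

Once such a URL has been crawled, deleting it server-side does not delete search caches, which is why requesting de-indexing (measure 2 below) is a separate step.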


7. Legal and Ethical Impact: GDPR and Japan's Personal Information Protection Law

In Europe, some argue that as long as the individual explicitly chose to "publish," there is no GDPR violation, while others counter heatedly that consent induced by a misleading UI cannot be considered valid. Japan's Personal Information Protection Commission also commented that "regardless of intent or negligence, if a leak occurs, the business operator cannot escape responsibility" (based on interviews with stakeholders). As of publication, there have been no major damage reports from domestic corporate users, but investigations are ongoing.


8. Five Immediate Measures Users Can Take

  1. Inventory of Shared URLs – Check the list in ChatGPT's left menu › Shared Links and delete unnecessary links.

  2. Request Removal of Search Caches – Submit a removal request using Google's "Remove Outdated Content" tool.

  3. Regularly Delete History – Turn off "Data Settings › Chat History & Training."

  4. Establish Internal Policies – Formalize a ban on the sharing feature and restrictions on entering NDA-covered information (a minimal automated check is sketched after this list).

  5. "Don't Write" Before "Let AI Write" – Do not input information intended to remain undisclosed from the start.


9. Conclusion: Reaffirming "Non-Public by Default" for Generative AI

This incident has once again highlighted the obvious reality that AI chats, like emails and cloud documents, obey the rule "once it's out, it can't be taken back." While OpenAI acted swiftly, the entire industry needs to internalize the importance of opt-in design and clear UI. Users, for their part, should cultivate the imagination that AI may be less a secretary than a bulletin board, and make double-checking before sending a habit.


References

Private Conversations with ChatGPT Appear on Google, Causing Global Alert
Source: https://www.infomoney.com.br/mundo/conversas-privadas-com-o-chatgpt-aparecem-no-google-e-provocam-alerta-global/
