
Information Security in the AI Era: 5 Types of Information You Shouldn't Share with ChatGPT

June 15, 2025, 11:22

――Deciphering Risks and Countermeasures from Reports in Brazil and Voices on Japanese Social Media――

Introduction: The Irreversible Cost Hidden Behind Convenience

Generative AI has become as integral to daily life as search engines and social media. With tools like ChatGPT, Copilot, Gemini, and Notion AI, we can ask AI questions, plan projects, and manage our finances 24/7. However, it is surprisingly little known that the content we input may remain in the cloud and be repurposed for model improvement or ad targeting.

An article published on June 14 by Brazil's InfoMoney, titled "Gemini, ChatGPT e mais: 5 informações para nunca compartilhar com IAs," lists **"five types of information you should never share with AI"** and has garnered significant attention (infomoney.com.br). This article introduces the key points of the original piece, surveys the mixed reactions on social media both in Japan and abroad, and provides a comprehensive analysis of the practical measures Japanese users should take immediately.



Chapter 1: The "5 Major No-Go Data" Flagged by InfoMoney

The InfoMoney article categorically lists the following five items as "absolute no-gos."

| # | Category | Examples | Main Risks |
|---|----------|----------|------------|
| 1 | Personal information | Name, address, phone number, CPF/RG (Brazilian equivalents of a social security number) | Identity theft |
| 2 | Login information | IDs, passwords, one-time codes | Account hijacking |
| 3 | Workplace secrets | Full internal chat logs, contract drafts | Leakage of trade secrets and patents |
| 4 | Financial and medical data | Card numbers, investment balances, medical records | Financial fraud, employment discrimination |
| 5 | Thoughts and emotions | Deep concerns such as depression symptoms or romantic advice | Psychological profiling |

The article emphasizes that **"AI is neither a friend nor a therapist,"** pointing out the danger that even vivid texts about emotions can become learning material (infomoney.com.br).



Chapter 2: Japanese Legal Regulations and Cultural Background

1. Personal Information Protection Law and AI

In Japan, the Act on the Protection of Personal Information was amended, and since 2022 it has regulated the provision of "personal-related information" to third parties. When customer data is pasted into a prompt, this may count as "external provision," so companies must specify the purpose of use and implement safety management measures.


2. The Other Pitfall of a "Society that Reads the Air"

While Japanese people are cautious about privacy, there is also a tendency to vent concerns "only" to AI. The cultural pressure of "not wanting to bother others" can ironically lead to the massive input of sensitive emotional data.



Chapter 3: Delving into the 5 Major No-Go Data

1. Personal Information: Not Safe Even with an Anonymous Account

There have been numerous reports on domestic social media that "when a name is entered into generative AI, it appears directly in search results." Note contributor @safe_lynx suggests that simply abstracting names and business partners can significantly reduce the risk of leakage (note.com).


Points

  • Replace names, e.g., "Taro Yamada" → "the Company T representative."

  • Blur dates, e.g., "June 14, 2025" → "mid-last month" (a minimal masking sketch follows below).
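To make the above concrete, here is a minimal Python sketch of this kind of pre-masking. The name map, date pattern, and replacement wording are hypothetical examples, not part of any cited tool; a real workflow would need project-specific dictionaries and human review.

```python
import re

# Hypothetical masking rules: map concrete identifiers to neutral placeholders.
NAME_MAP = {
    "Taro Yamada": "the Company T representative",
    "Acme Trading Co.": "a business partner",
}

# Matches dates written like "June 14, 2025".
DATE_PATTERN = re.compile(
    r"\b(?:January|February|March|April|May|June|July|August|"
    r"September|October|November|December) \d{1,2}, \d{4}\b"
)

def mask_prompt(text: str) -> str:
    """Replace known names and exact dates before the text is pasted into an AI chat."""
    for real, placeholder in NAME_MAP.items():
        text = text.replace(real, placeholder)
    return DATE_PATTERN.sub("around the middle of last month", text)

if __name__ == "__main__":
    raw = "Taro Yamada met Acme Trading Co. on June 14, 2025 to discuss the draft contract."
    print(mask_prompt(raw))
    # -> the Company T representative met a business partner around the middle of
    #    last month to discuss the draft contract.
```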


2. Login Information: Do Not Hand Over to Anything Other Than a Password Manager

While it is understandable to want to ask AI to "manage passwords," be aware that unless the storage is encrypted, the data may be exposed to third parties. InfoMoney emphasizes: "Use only memorization or trusted dedicated management tools" (infomoney.com.br).


3. Workplace Secrets: External Consultation Equals "External Leakage"

There are cases where entire internal libraries are pasted into a chat for code review. When Tottori Prefecture banned the use of ChatGPT for work, it sparked a debate on social media with comments like **"That's outdated"** and **"But information leakage is scary"** (nlab.itmedia.co.jp). The discussion converged on two options: set up a dedicated internal GPT, or provide only summaries to external models.


4. Financial and Medical Data: The Distance Between AI and "Money & Health"

In the fintech industry there is a growing movement to "let AI read household accounts automatically," but InfoMoney asserts that directly pasting card numbers or medical records is suicidal (infomoney.com.br). In Japan, bank APIs pair encrypted communication with access control, something a consumer AI chat cannot guarantee.


5. Thoughts and Emotions: Mental Health-related "AI Friends" are a Double-edged Sword

Generative AI vectorizes "thoughts and emotions" and profiles users. The Brazilian article points out the "possibility of psychological damage from incorrect advice" and calls for emotional consultations to be directed to professionals or trusted individuals (infomoney.com.br).



Chapter 4: What SNS Said—The Real Voices of Japan

| Representative Posts | Direction of Reactions | Source |
|---|---|---|
| "I want to use ChatGPT at work, but I don't know what not to paste." | Anxiety, lack of information | Comments section of a note article (note.com) |
| "Is Tottori Prefecture stuck in the Showa era?" "No, it's a wise decision." | Pros and cons of the usage ban | ITmedia report (nlab.itmedia.co.jp) |
| "When I consulted AI about romance, it gave me a response just like my ex-boyfriend." | "Scary": fear of handling emotional data | note post (note.com) |


Main Trends

  1. Demand for Rules: "We want a clear 'Input Prohibition List'"

  2. Conflict Between AI Utilization and Competitiveness: "I understand the prohibition, but if we don't use it, we'll be left behind"

  3. Fear of Psychological Data: "If I show my weaknesses, won't ads target me all too precisely?"



Chapter 5: Latest Trends in Corporate and Municipal Guidelines

  • **Internal prompt guidelines** are being created, and more companies explicitly mark "items not to be shared" in red (note.com).

  • Examples are emerging of **"prompt gateways"** that automatically mask sensitive data before input reaches the model (see the sketch after this list).

  • **Data Processing Addendum** contracts are being concluded with cloud AI providers to prevent unintended retraining on submitted data.
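As an illustration of the "prompt gateway" idea, the sketch below shows one plausible shape such a layer could take: outgoing prompts are scanned for card-number-like, email-like, and phone-like strings and masked before anything leaves the organization. The patterns, function names, and the send_to_model stub are assumptions made for illustration, not a description of any actual product.

```python
import re

# Illustrative detection rules; a production gateway would use far more robust
# detectors (checksum validation, NER models, allow-lists) and log every redaction.
PATTERNS = {
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b0\d{1,4}-\d{1,4}-\d{3,4}\b"),
}

def redact(prompt: str) -> str:
    """Mask anything that looks like a card number, email address, or phone number."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

def send_to_model(prompt: str) -> str:
    # Placeholder for the actual call to an external AI API.
    return f"(model would receive) {prompt}"

def gateway(prompt: str) -> str:
    """Every prompt passes through redact() before leaving the organization."""
    return send_to_model(redact(prompt))

if __name__ == "__main__":
    print(gateway(
        "Summarize: card 4111 1111 1111 1111, contact taro@example.co.jp, tel 03-1234-5678."
    ))
```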



Chapter 6: Deep Risks—Deepfakes and National-Level Information Warfare

The Wall Street Journal warns that deepfake fraud has increased by 700% year-on-year (jp.wsj.com). Additionally, there are reports that China has attempted to manipulate voters in Taiwan and the United States with AI-generated content, indicating that advanced disinformation attacks are not someone else's problem (jp.wsj.com).



Chapter 7: "AI Safety Checklist" for Japanese Users

  1. Recite the "5 Major No-Gos" before pasting anything

  2. If input is absolutely necessary, substitute:

    • Personal information → initials

    • Confidential material → summaries only

    • Financial/medical data → dummy numbers

  3. Take emotional consultations to humans: rely on experts, friends, and municipal hotlines

  4. Delete prompt history regularly: always check each service's settings

  5. Be cautious with beta features and third-party extensions: check whether a learning opt-out is offered



Conclusion: To remain on the "user side" of AI

While generative AI delivers astonishing outputs, it also datafies our every move and retains it indefinitely. The **"5 types of information you should never give away"** listed by InfoMoney are "red lines" applicable across borders.

On Japanese social media, users are polarizing between "too scared to use it" and "scared but using it anyway." Rather than that extreme binary, the only way to maintain digital competitiveness is to **understand the risks concretely and build technical and organizational guardrails so the tools can be used properly and thoroughly**.

I hope this article will be of some help in improving your AI literacy and safe utilization.


Reference Articles

"Gemini, ChatGPT, etc.: 5 Types of Information You Should Never Share with AI"
Source: https://www.infomoney.com.br/consumo/informacoes-nunca-compartilhar-ia/
