
Pitfalls of the AI Era: The Birth of Copy-Paste Brain? 55% Reduction in Memory with Long-term Use of ChatGPT

July 20, 2025, 14:21

Introduction: The "Cognitive Cost" Behind Convenience
Over three years have passed since the generative AI boom began, and with text generation models, emails and reports can now be completed astonishingly quickly. But behind that convenience, is the human "ability to think" truly being eroded? On the 18th, the Brazilian outlet AcheiUSA covered new research from the MIT Media Lab in the U.S., reporting that university students who used ChatGPT showed decreased brain activity and signs of declining critical thinking. The article quickly spread on X (formerly Twitter) and Reddit, drawing a flood of both supportive and critical comments, from "It was to be expected" to "Put a brake on its introduction in educational settings." The debate over the right distance between AI and education is once again heating up worldwide. (Source: AcheiUSA)


Chapter 1: The Experimental Design by MIT Media Lab
The research was led by Dr. Nataliya Kosmina's team at the MIT Media Lab, which specializes in brainwave interfaces. The team divided 54 participants aged 18 to 39 into three groups: ① writing with ChatGPT, ② gathering information via Google search and writing by hand, and ③ a "brain-only" group using no tools at all. Each group was tasked with writing three SAT-style essays in 20 minutes.


During writing, participants wore a 32-electrode EEG cap to measure alpha, theta, and delta brainwaves, focusing on the frontal and parietal lobes. Completed essays were blindly graded by linguists and high school teachers. Evaluation criteria spanned 10 indicators, including creativity, memory retention, syntactic diversity, and logical consistency, creating a comprehensive design that cross-referenced behavioral and neural data. (Sources: MIT Media Lab, TIME)
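
For readers wondering what "alpha-band power" actually refers to, the sketch below illustrates one common way such a measure is computed from raw EEG: estimate a power spectral density for each channel, then integrate it over each frequency band. This is a generic illustration under assumed conditions (a 256 Hz sampling rate, synthetic data, and a hypothetical band_power helper), not the MIT team's actual analysis pipeline.

```python
# Generic illustration of EEG band-power estimation (not the study's pipeline).
# Assumptions: 32 channels, 256 Hz sampling rate, data as a NumPy array.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13)}  # Hz

def band_power(eeg, fs=256.0):
    """Return integrated power per band for each channel.

    eeg: array of shape (n_channels, n_samples)
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2), axis=-1)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        # Integrate the power spectral density over the band, per channel.
        powers[name] = np.trapz(psd[:, mask], freqs[mask], axis=-1)
    return powers

# Toy comparison on synthetic data: alpha power at baseline vs. while writing.
rng = np.random.default_rng(0)
baseline = band_power(rng.standard_normal((32, 256 * 60)))
writing = band_power(rng.standard_normal((32, 256 * 60)))
change = (writing["alpha"].mean() - baseline["alpha"].mean()) / baseline["alpha"].mean()
print(f"Mean alpha-band change: {change:+.1%}")
```

A sustained drop in this kind of band-power measure over frontal electrodes is the sort of signal the researchers summarize as "decreased brain activity."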


Chapter 2: "Cognitive Debt" Indicated by Brainwaves and Behavioral Indicators
The results were striking. In the ChatGPT group, alpha-band power in the prefrontal cortex fell by an average of 47% within five minutes of starting to write, and by the third essay the copy-paste ratio exceeded 70%. Language teachers commented that "everything is written from the same template and lacks soul," and the behavioral indicators showed vocabulary diversity at less than half that of the control group. The EEG data likewise showed disrupted synchronization of the attention network and weakened connectivity involving the hippocampus. In contrast, the brain-only group's connectivity density increased with each session, and their recall rate on memory tests was 34% higher. The Google search group fell in between: their creativity index was higher because selecting information still exercised the brain, but they did not match the brain-only group in memory retention. (Sources: AcheiUSA, TIME)


Chapter 3: Tool Dependence Is "Hard to Break Free From"
The research team also ran a crossover trial. In a fourth session, the ChatGPT group was asked to rewrite the same essay without any tools, while the brain-only group used ChatGPT for the first time. Former ChatGPT users could not even recall their own text, and their alpha- and theta-band activity remained low. Conversely, participants using AI for the first time produced more text, yet their brain activity declined only slightly, with no significant difference in memory retention. Dr. Kosmina calls this "the accumulation of cognitive debt" and warns that long-term dependence is the real problem. The finding suggests a mechanism in which the habit, once formed, is hard to break, and it highlights that short-term use and long-term dependence carry qualitatively different risks. (Source: TIME)


Chapter 4: The Reason for Rushing the Preprint Release
Although the paper is still a preprint awaiting peer review, the team rushed its release as a "race against time." "If politicians mandate 'Kinder GPT' while we wait six months, it will be too late," Dr. Kosmina said. The sample is limited to 54 participants, mainly university students in the Boston area, which raises questions about external validity. Even so, the call to "check the neuroscientific evidence before introducing digital teaching materials" is beginning to reach education policymakers and parents. TIME magazine noted that "despite being a small study, the implications are significant," and reactions on social media ranged from extreme positions like "Bring back handwriting phases in schools immediately" to more measured arguments that "strengthening AI literacy education will suffice." (Sources: AcheiUSA, TIME)


Chapter 5: Comparison with Past Research and "Digital Amnesia"
In fact, this is not the first suggestion that "AI dependence makes the brain lazy." A study released in June this year by Queensland University of Technology in Australia found that middle and high school students could reproduce only 40% of the content of reports they had written with AI tools when asked a week later, prompting the coinage "digital amnesia." A Harvard Business Review survey likewise reported that "generative AI increases productivity but lowers intrinsic motivation," while Stanford University found that "creativity does not decline if AI is limited to the idea-generation stage," showing that conclusions remain divided even within academia. The divergence reflects a kind of plasticity: the effects vary greatly depending on how the interaction with AI is designed and on the user's stage of learning. (Source: Herald Sun)


Chapter 6: Voices from the Field—Professors and Teachers' Cries
Voices from classrooms are also growing louder. In the Reddit thread "IfBooksCouldKill," the top-ranked comment came from a university professor: "Students are weaker in critical thinking than they were 10 years ago. There are more 'decent enough' reports written by AI, but some students panic when asked questions orally." High school teachers likewise criticized that "the alliance of grade supremacy and AI is mass-producing unimaginative 'model students,'" while users proposed concrete measures such as a return to handwritten exams and complained that AI detection tools cannot keep up. (Source: Reddit)


Chapter 7: Positive Experiences and the "Depends on How You Use It" Argument
Not everything is pessimistic, however. On Reddit, a fair number of users shared positive experiences, such as "AI is helping disabled people communicate" and "Using the tools frees up resources for deeper research." In the "ArtificialIntelligence" thread, one poster argued that "as long as balance is maintained, learning can be expanded," and the comments developed into constructive discussion, for example that "instead of treating the technology as an enemy, evaluation methods should evolve." In particular, a user with a language disability testifying that "AI proofreading lowered the hurdle for expressing what I mean" suggests that AI can also function as a tool for equal opportunity. (Source: Reddit)


Chapter 8: Corporate Training and the "AI Last Mile"
Corporations are no exception. In first-year training, major consultancies now recommend an "AI Last Mile" approach, in which AI produces the draft and humans perform the final critical check. Some startups, however, are moving to automate even the review process itself.


In an interview with TIME magazine, Dr. Kosmina revealed that "in a follow-up study on software engineering, the decline in brain activity was even more pronounced than in writing," warning that "new engineers may fail to develop, potentially leading to a decline in technical skills over the medium to long term."


Indeed, reports from U.S. tech companies increasingly point to declining code-review quality and rising costs from bugs slipping through, highlighting the risk that AI adoption may raise short-term ROI while inviting the hidden cost of eroding human capital. (Source: TIME)


Chapter 9: Psychological Perspective—The Importance of "Struggling"
Concerns run deep from a clinical psychology perspective as well. Child psychiatrist Dr. Zishan Khan points out that "the developing brain strengthens synapses through the experience of struggling, but if AI provides shortcuts in thinking, those circuits are less likely to form." Having seen many high school students in his clinical practice who depend on AI-generated summaries, he says that "not only memory and recall but also the resilience to recover from failure is weakening." Neuroscientifically, deliberate effort is known to stimulate the anterior cingulate cortex and engage the motivation network, and an environment that is too easy may leave emotional regulation underdeveloped. (Source: TIME)


Chapter 10: Policy Movements and Updating Evaluation Methods
In response to these findings, the European Commission is proposing "AI dependency monitoring" for K-12 education in the draft "AI Literacy Framework" scheduled to take effect next spring. Specifically, it would require students to record their thought processes and submit self-reflection reports when using AI in class, with teachers giving interactive feedback while referring to browser logs. ICT education researcher Ernst Schmidt argues, "Rather than shutting AI out, ensuring transparency and promoting metacognition is more effective."


In Japan, similar demands are rising from schools, and the University Entrance Examination Center is said to have begun considering new evaluation criteria, including the submission of prompts, with 2027 as the target. While implementation raises challenges such as privacy protection and the risk of excessive surveillance, the approach is drawing attention as a more constructive path than an outright ban.


Chapter 11: The "Three-Stage Model" for Educators
For educational practitioners, the experts interviewed by the author commonly recommend the following three-stage model. First, "Analog Drafting": initial idea generation and structuring are done on paper or a whiteboard, warming up the brain by working with the hands. Second, "AI Bonding": ChatGPT and similar tools are used for research assistance and paraphrasing suggestions, with comments always added to the dialogue log to make one's own reasoning visible.


Third, "Human Final": Erase all AI responses and reconstruct in one's own words, passing it through AI detection tools and peer review. This maintains an optimal balance between convenience and cognitive load. Additionally, it is recommended to include "blank time" in lesson design, incorporating "digital fasting" where tasks are completed in environments where AI cannot be used, to recover metacognitive abilities and concentration.


Chapter 12: Corporate Governance and the Move Toward New ISO Standards
In the business domain, the ISO/IEC 9600 series (tentative name), a set of guidelines for introducing generative AI systems, is reportedly being drafted. Its four expected pillars are: ① disclosure of the basis for AI proposals, ② risk review by accountable human owners, ③ regular updates and performance audits of training data, and ④ critical-thinking training for employees.


Item ④ in particular aligns with the idea that "internal education in the AI era should shift from 'proofreading' code and documents to workshops that train falsifiability," and major European banks and Japanese electronics manufacturers are already running trial implementations. On the other hand, departments that prioritize cost and short-term results continue to insist that "if AI automation suffices, training is unnecessary," leaving it unclear how deeply the push for stronger governance will take root.


Chapter 13: The Risk of Widening Disparities and Public Infrastructure
The perspective of social disparity cannot be overlooked either. An "AI learning gap" is widening between urban students with access to expensive AI tools and high-speed internet and rural students with limited equipment. According to estimates by researcher Maria Rogers, tool dependency correlates with a decline in reading comprehension, and because economically disadvantaged groups have fewer alternative resources, they are more exposed to the cumulative negative effects.


Fortunately, some municipalities in Europe have begun initiatives allowing chatbot access from public libraries, with programs that provide learning guidance based on usage history reportedly achieving results.


Chapter 14: Development of "Metacognitive Mode" on the Technology Side
The technology side is also evolving. The open-source LLM community is discussing a "metacognitive mode" that detects signs of mental fatigue in the user and deliberately returns only hints. When a user tries to copy an answer wholesale, a reminder appears: "Try writing your thoughts first."
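
As a rough illustration of how such a "hint-only" gate might work, the sketch below wraps a chat-model call and, when the user's message is nearly identical to the model's previous answer (i.e., the answer is being pasted straight back), returns a nudge instead of new text. This is purely hypothetical: no specific open-source project is being described, and call_llm is a placeholder for whatever model interface is in use.

```python
# Hypothetical sketch of a "metacognitive mode" gate in front of a chat model.
# `call_llm` is a placeholder callable; nothing here mirrors an actual project.
import difflib

HINT_SYSTEM_PROMPT = (
    "Respond only with guiding questions and short hints; never write a full draft."
)

def metacognitive_reply(user_message: str, last_reply: str, call_llm) -> str:
    # If the user pastes the previous AI output back nearly verbatim,
    # nudge them to articulate their own thoughts first.
    if last_reply:
        similarity = difflib.SequenceMatcher(None, user_message, last_reply).ratio()
        if similarity > 0.9:
            return "Try writing your thoughts first, then ask me for a hint."
    # Otherwise answer, but constrained to hints rather than finished text.
    return call_llm(system=HINT_SYSTEM_PROMPT, user=user_message)
```

The sketch covers only the copy-detection part of the idea; detecting fatigue itself would require signals beyond the chat text, which is part of what the community is still debating.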
