Is Individuality Disappearing from Student Writing? How Generative AI is Transforming Learning and Identity

AI Enhances Writing, But Where Does "Your Own Voice" Disappear?

In university and vocational school classrooms, generative AI is no longer a novelty. Students use it to organize assignment themes, outline reports, correct grammar, and polish their phrasing. Tasks that a few years ago relied on friends, tutors, or hours of revision now appear on screen in seconds.

The writing does become easier to read. The logic is better organized, the vocabulary sounds a bit more specialized, the tone is measured, and the result can be handed to an instructor without embarrassment. Behind this convenience, however, students are beginning to feel a new kind of unease.

"It's well-written, but it doesn't feel like my own writing."

The issue at hand is not simply whether using AI for reports is cheating. The deeper problem is that, in the process of refining students' writing, AI may quietly erase their own voices: their hesitations, quirks, rough edges, and the traces of their learning.

The original article highlights that as generative AI spreads among Canadian students, they feel a gap between their own writing and the AI-polished version. Surveys show that many Canadian students use generative AI for school assignments, and it is not uncommon for students to turn to AI as their first step when given an assignment.

This is not simply a matter of students being lazy. Rather, they waver between convenience and anxiety as they use AI. They want better grades. They lack confidence in their English or their writing. They want to learn specialized expressions. They want to polish their submissions as much as possible. And yet the AI-corrected writing becomes so smooth that it somehow feels detached from themselves.

Writing is not merely about conveying information. Especially in university and specialized education, writing is a process of confirming "what one understands" and shaping "how one thinks in this field."

In STEM—science, technology, engineering, and mathematics—fields, writing is often treated as a secondary skill. The focus is on formulas, data, experiments, and design, with writing seen merely as a vessel to explain these. However, in reality, even in scientific and engineering fields, the ability to accurately explain ideas, connect hypotheses with evidence, and articulate uncertainties is indispensable.

A student's report reflects not only the depth of their understanding but also how they are trying to enter their field: how they use technical terms, how they write about areas where they lack confidence, how they connect their own experiences and interests. These small choices accumulate to form their "voice."

However, generative AI polishes this voice away in an instant. Grammatical errors disappear, awkward expressions become natural, and logical connections are reinforced. The result resembles a well-written student report. But at the same time, it tends to become writing that looks the same no matter who wrote it.

This is not to say that AI makes writing worse. Quite the opposite: AI improves it. The problem is that this "improvement" tends to be homogeneous, safe, and close to the writing of an average model student.

For students, this is a complicated experience. AI-refined writing may be easier for instructors to read. Grades might improve. But as the sense of struggling to think in one's own words fades, it becomes unclear whether the submission is the student's achievement or the tool's.

What matters here is that "one's own writing" is not necessarily perfect writing. Immature expressions, slightly roundabout explanations, unresolved questions, personal interests: these may sometimes cost points in grading, but they are important traces of the learning process.

AI is adept at erasing those traces.

For example, if a student writes, "I found this phenomenon interesting because it relates to my practical experience," AI might refine it to, "This phenomenon holds significant implications in practical contexts." The latter certainly appears more academic, but it lacks the student's personal surprise and interest.

Of course, AI support is not inherently bad. For students who struggle with writing, learn in a second language, or are not familiar with technical terms, AI can be a significant aid. For students who have ideas but struggle to articulate them, AI can be a tool that opens the door. Reducing anxiety about grammar and structure allows them to focus on the actual content.

Therefore, the discussion should not be simplified to "students using AI are cheating." The issue is not whether to use AI or not, but at what stage, for what purpose, and to what extent AI is used.

Reactions on social media show just how divided opinion is on this issue.

Some see the refinement of writing by AI as an inevitable trend. Grammar checks, summaries, structure proposals, paraphrasing, and tone adjustments are already common in many workplaces. Since AI will be used in society, students should learn how to use it from their school days. Those with this perspective believe AI should be used as an "editor" or "tutor," not as an "author."

On the other hand, there are strong voices of concern. If students hand report writing over to AI, they receive answers before they have thought. If they avoid the struggle of constructing a text, critical thinking and expressive skills may never develop. On social media and forums, one finds comments such as "AI-refined writing is easy to read, but it all looks the same" and "Student writing is drifting toward a LinkedIn-style, safe AI tone."

Furthermore, distrust of AI detection tools is spreading. Tools that judge whether a text was written by AI suffer from false positives. There is even talk of students deliberately writing worse to avoid being flagged as AI-generated, an ironic situation. Education should aim to improve students' writing, yet the suspicion of AI use may push students to make their writing deliberately unnatural.

Here lies a dual pressure faced by education in the AI era.

On one hand, students feel disadvantaged if they don't use AI. If others are refining their writing with AI and submitting high-quality reports in less time, not using AI puts them at a competitive disadvantage. On the other hand, there's anxiety that using AI might be seen as cheating or that their work might not be considered their own.

In other words, students are caught between "falling behind if they don't use it" and "being suspected if they do."

In this situation, educational institutions face limitations if they rely solely on prohibition or detection. Of course, submitting a report entirely done by AI as one's own achievement is problematic. Submitting writing with non-existent citations or factual inaccuracies is also dangerous. However, treating AI use as completely invisible and only asking students "did you use it or not" does not match reality.

What is needed is not just to regulate the presence or absence of AI use, but to design a way to make visible how students think, where they use AI, and where they make their own judgments.

For example, students could be required to submit not only the final version of a report but also their drafts, the prompts they gave the AI, the AI's suggestions, and the reasons they adopted or rejected them. Or they could compare AI-generated writing with their own and explain what "doesn't feel like them" or which expressions are closer to their own thinking. Assignments like these teach students how to keep a distance from AI rather than simply banning it.

The important thing is to teach students the "technique of preserving their own voice."

In writing education in the AI era, it is not enough to teach correct grammar and readable structure. What is also needed is the ability to pull plausible AI-generated prose back toward one's own thinking: the ability to rewrite the AI's suggestions in light of one's own experiences, questions, positions, and professional judgment.

This may be a more demanding task than traditional writing education, because students must not only revise their own immature writing but also question AI-generated writing that already looks polished.

AI writing often appears confident. It is logically organized and calm in tone. But that confidence is not necessarily backed by understanding. If students cannot see through this, the writing may be neat while the understanding remains shallow.

This issue poses questions not only to students but also to educators. If assignments can be easily completed with AI, are they truly measuring learning? Instead of evaluating only the report as a finished product, how should the process of thinking, questioning, and correcting be evaluated? The advent of AI demands a redesign of educational evaluation itself.

Furthermore, the "standardization" of writing could affect students' sense of belonging. In STEM fields, it has been pointed out that participation opportunities and a sense of belonging can easily differ based on gender, race, immigrant background, and economic status. In such environments, if the fluent and standard writing produced by AI becomes the implicit standard of "good writing," students may increasingly lose confidence in their own words.

"My writing is awkward. The AI's writing sounds smarter. So is my voice even necessary?"

This feeling is not just a matter of writing expression. It is a fundamental issue related to learning, questioning whether one belongs in the field and whether one's thoughts have value.

Even those who endorse AI use on social media are not ignoring this issue. Many believe AI's value depends on how it is used: organizing one's thoughts, fixing grammar mistakes, and improving readability is fine, but handing claims, analysis, and judgment over to AI hollows out learning. The need to draw this line is broadly shared by proponents and skeptics alike.

However, drawing that line is not easy. Is grammar correction permissible? What about paraphrasing? Suggestions for paragraph structure? A draft of the introduction? Is it acceptable to use points suggested by AI after researching them oneself? If students, educators, and universities judge these differently, confusion will persist on the ground.

Educational institutions therefore need clear rules and flexible dialogue at the same time. A blanket "AI prohibited" will only push students to use it secretly. A blanket "AI use encouraged" will blur what counts as the student's own achievement. What is needed is to clarify, for each assignment, what the course is meant to teach, and to define the scope of AI use in light of that purpose.

For example, in courses that teach written expression itself, it may be necessary to limit AI use and protect the time spent struggling to write in one's own words. In courses that emphasize applying specialized knowledge or analyzing data, on the other hand, AI might be partially allowed for organizing the writing, while the basis of the analysis and the process of judgment are examined strictly.

In any case, AI can no longer be expelled from the classroom. Students are already using it, and those who want to use it will do so even if told not to. The educational challenge has therefore shifted from preventing AI use to teaching students how to use AI without losing their own thinking.

The fact that students' writing improves with AI is not inherently bad. For students who were disadvantaged by differences in writing skill, it could even expand access to learning. But if education keeps evaluating only the smoothness of the writing, the homogeneous model-student style produced by AI will become the standard, and students' own voices will become ever harder to hear.

The real question is not just "Was this written by AI?"

"Does this writing still contain the student's own questions?"
"Is there evidence of thought in this writing?"
"Did the student who wrote this understand a little more deeply than before submission?"
"Even after being refined by AI, does that person's voice remain?"

Generative AI can make writing appear strong. However, strong-looking writing does not necessarily mean strong thinking. Rather, even slightly awkward writing that has been wrestled with in one's own words can sometimes be richer evidence of learning.

What is needed in education in the AI era is neither to scare students away from AI nor to let them entrust everything to AI. It is to teach how to write without losing one's own thoughts, even with the help of AI.

How can writing skills be improved while each student's own voice is preserved? That is the challenge students, educators, and universities now face.



Sources

Refer to the Phys.org article: The issue of Canadian students feeling "lack of authenticity" in AI-refined writing, STEM education, student voices, and proposals for educational design.
https://phys.org/news/2026-05-college-students-aismoothed-strong.html

Refer to the KPMG Canada survey: Background data on the usage rate of generative AI among Canadian students, AI use in assignment creation, and concerns about critical thinking.
https://kpmg.com/ca/en/media/2025/10/generative-ai-boom-among-canadian-students-raises-dilemmas.html

Refer to UNESCO's "Guidance for generative AI in education and research": International perspectives on the use of generative AI in education and research, human-centered viewpoints, and learner autonomy.
https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research

Refer to the University of Waterloo's Generative AI Guide for Students: The necessity for students to adhere to course-specific rules and thoughts on academic integrity.
https://uwaterloo.ca/generative-ai/resources/students

Refer to the University of Waterloo's page for educators on Generative AI and Academic Integrity: The necessity to clarify rules and expectations for AI use in classes.
https://uwaterloo.ca/academic-integrity/artificial-intelligence-and-chatgpt

Refer to Reddit r/PhD discussions: Examples of reactions on social media and forums, including voices supporting the use of AI for writing refinement and concerns about outsourcing and academic integrity.
https://www.reddit.com/r/PhD/comments/1pwlbso/everyone_in_my_class_is_writing_with_ai/

Refer to Reddit r/Professors discussions: Reactions concerning distrust of AI detection tools, false positives, and issues of students deliberately writing poorly to avoid AI detection.
https://www.reddit.com/r/Professors/comments/1rjl5u0/students_are_deliberately_writing_worse_to_avoid/

Refer to discussions spread across Reddit r/Professors / r/UniUK / r/language: Examples of educator reactions to student writing appearing AI-like, social media-like, or LinkedIn-like.
https://www.reddit.com/r/Professors/comments/1srw5hg/does_student_writing_sound_more_like_social/
https://www.reddit.com/r/UniUK/comments/1sru6a0/does_student_writing_sound_more_like_social/
https://www.reddit.com/r/language/comments/1srv42u/does_student_writing_sound_more_like_social/