"Youth Experiencing 'AI Fatigue': They Use It but Don't Trust It"

"Youth Experiencing 'AI Fatigue': They Use It but Don't Trust It"

The Era Where Young People Who Dislike AI Have No Choice But to Use It

Discussions around generative AI tend to swing to extremes.
On one hand, there's talk that "people who can't master AI will be left behind," while on the other, warnings are issued that "AI will take jobs, destroy creativity, and diminish learning abilities." However, what's happening among young people today is neither of these extremes.

They are using AI. And quite frequently at that.
For researching assignments, drafting emails, summarizing materials, assisting with coding, brainstorming ideas, adjusting resumes, and drafting social media posts. AI has already infiltrated schools, workplaces, and job hunting activities.

Yet at the same time, they don't really trust AI.
In fact, the more they use it, the more they start to question, "Is this really a tool for me?"

This paradox, highlighted by Kotaku, hits at the core of the current AI boom. The generation closest to AI isn't necessarily its most enthusiastic supporter. Rather, the young people who have no choice but to use AI in their daily lives are becoming more sensitive to its limitations, discomforts, and intrusiveness.


"Using because it's convenient" is different from "believing in it"

In the rhetoric of AI advocates, high usage rates are often treated as if they equate to high approval rates.
"Young people are using AI. Therefore, young people welcome AI."
This logic is straightforward, but reality is more complex.

For example, students summarize lecture notes using AI.
Young employees draft internal documents using AI.
Job seekers refine their motivation statements using AI.
These are indeed "uses." However, they don't necessarily equate to "trust" or "favor."

People use things they dislike.
Even if they don't like crowded trains, they use them to commute.
Even knowing it's unhealthy, they eat fast food late at night.
Even if dissatisfied with the usability, they use the software specified for work.

AI is becoming something similar.
They use it because it's convenient. They use it because it's fast. They use it because everyone around them is using it. They use it because they feel disadvantaged if they don't. But that doesn't mean they unconditionally welcome AI's future.

Young people understand that AI is convenient in the short term.
At the same time, they feel that this convenience might erode their learning abilities, thinking skills, and professional value in the long term.

This conflicted feeling of "convenient but anxious," "usable but untrustworthy," "necessary but disliked" is a crucial point in understanding how AI is actually being received today.


Gen Z isn't dreaming about AI

AI companies and investors speak of AI as "the future itself."
They promise to streamline all operations, reinvent knowledge work, democratize creation, and optimize education individually. While these words are dazzling, young people's experiences aren't as bright.

What they see is a more familiar and realistic scene.

In classes, they're told "don't use AI," while in other classes, they're told "utilize AI."
At company information sessions, recruiters say they're looking for "people who can adapt to the AI era."
Job listings suddenly require experience with AI tools.
On social media, AI-generated images are mocked as "lazy," "fake," and "AI slop."
In workplaces, they're ordered to "just use AI to boost productivity."

In other words, young people are receiving contradictory commands about AI simultaneously.
"Don't use it, but use it."
"Think for yourself, but streamline with AI."
"AI will take your job, but you won't be hired if you can't use AI."

This dilemma isn't just anxiety about technology. It's more like a feeling that their future is being rewritten not by their own will, but by the convenience of companies, universities, and markets.


The true feelings of "AI fatigue" visible on social media

Looking at reactions on social media, this dissatisfaction is quite specific.

In Reddit's technology community, many users commented on The Verge article, saying things like "I'm forced to use it at work, but I don't use it in my personal life." One user mentioned being encouraged by their company to use AI, but found the forced integration of AI into every task annoying.

Another comment from an IT professional noted being reprimanded by a boss for not using AI on a project, with the company process requiring AI use. However, the internal AI often provides irrelevant answers and is practically useless.

Additionally, there's a noticeable sentiment of "AI always requires double-checking. How much time are we really saving?" This isn't so much a backlash against AI as it is a calm questioning of its practicality. Since AI makes mistakes, humans can't escape verification tasks. In fact, it might just increase the work of verifying AI outputs.

On the other hand, there are also reactions that aren't entirely negative.
Some posts liken AI to a shovel, saying, "It's more convenient than digging by hand, but the act of digging itself is necessary." This acknowledges AI as a tool while pointing out that simply having a tool doesn't automatically grant skills.

Summarizing social media reactions, it's clear that young people and frontline workers are more frustrated with the careless attitude of "AI will solve everything" rather than AI itself.
They know what AI can do. And that's precisely why they also know what AI can't do.


Does AI assist or dilute "creativity"?

The discomfort young people feel towards AI is particularly strong in the realm of creation.

Text, images, music, videos, game materials. Generative AI has made expressions that once required specialized skills suddenly easy. While this presents great potential, it also significantly shakes the perception of the value of creative works.

On social media, criticisms of AI-generated images and text as "unnatural," "shallow," "soulless," and "all too familiar" are spreading. Especially among the younger generation, there's an atmosphere where hiding AI use or even just being suspected of using AI can lower evaluations.

What's important here is that young people aren't simply "afraid of new technology."
Rather, they witness daily how the boundaries between real and fake, effort and laziness, citation and plagiarism, homage and theft are becoming blurred in internet culture.

With the influx of AI-generated content, the online space is increasingly filled with "things that look convincing but are shallow."
Even if they initially appear convenient and interesting, if the same expressions, compositions, and aesthetics continue, they eventually become tiresome.
The suspicion is growing among young people that AI doesn't create novelty so much as mass-produce content that regresses toward the existing average.


How should schools handle AI?

Confusion surrounding AI is even more severe in educational settings.

For students, AI is a tool to finish assignments quickly and an aid to understand difficult content. But at the same time, it's a temptation that robs them of opportunities to think.

Having AI write text.
Having AI summarize.
Having AI provide answers.
Having AI structure arguments.

Each time, students can shorten their work time. But within that shortened time is also the time they would have needed to ponder, think, rewrite, and understand on their own.

Of course, not all AI use is bad.
Just as dictionaries, search engines, calculators, and translation tools have supported learning, AI can serve as a powerful set of training wheels depending on how it's used.

The problem is when the training wheels inadvertently replace the legs themselves.
Are students using AI to "understand what they don't know," or are they using it to "submit assignments without understanding"? The difference is significant.

What's more troublesome is the lack of unified policies from universities.
In some classes, AI use is prohibited, while in others, it's encouraged. Some professors strictly enforce rules, while others assign AI-based tasks. From a student's perspective, it's hard to see what's allowed and what's considered cheating.

This ambiguity further strengthens distrust towards AI.
There's anxiety whether they use AI or not. Anxiety if they admit to using it. Anxiety if they hide it.
As a result, AI becomes a source of suspicion among students and between students and teachers, rather than a learning tool.


Is AI in the workplace "efficiency" or "surveillance"?

The introduction of AI in workplaces is also amplifying young people's anxieties.

Companies introduce AI as a tool for "improving productivity."
But from the perspective of young employees, it's not necessarily a welcome development.

They're told to use AI.
They're told to work faster with AI.
They're checked on whether they used AI.
Humans correct what AI produces.
And in the end, their work might be deemed replaceable by AI.

In this flow, young people perceive AI not only as a "tool to help themselves" but also as a "device to measure their value."
If they can't use AI, they're not evaluated. But if their work is deemed doable by AI, their job might become unnecessary.

This is an extremely unstable position.

Especially for young people at the start of their careers, the first few years are a time to gain experience, make mistakes, learn from seniors, and develop expertise. If AI steps in under the logic that "entry-level tasks can be handled by AI," young people lose the opportunity to learn.

It's not just about whether AI takes jobs.
It's about whether AI removes the ladder for growth through work.
This fear is at the root of young people's distrust of AI.


The perception of "people who use AI" is also changing

Another aspect not to be overlooked is the societal perception of AI use.

Once, using AI seemed like an advanced, efficient, and smart choice.
But recently, depending on the context, it can be seen as "lazy," "cheating," or "not thinking for oneself."

On social media, AI-generated text or images can become targets of criticism if suspected.
If a creator is suspected of using AI, it can cause a backlash, and if AI-generated materials are used in corporate advertisements, they're labeled as "cheap." In schools, whether AI was used becomes a matter of trust.

This atmosphere is creating a sense of "AI shame" among young people.
They feel they might fall behind if they don't use it.
They feel they might be looked down upon if they do use it.
There's anxiety about using it openly and guilt about using it secretly.

As a result, AI use is becoming something done secretly rather than being an open subject of learning or discussion.
This isn't a healthy state for education or the workplace.

What's truly needed is not simply judging whether AI was used as good or bad.
It's about sharing the process: where AI was used, what was thought independently, what was entrusted to AI, and how the output was verified.
An environment where such processes can be shared is necessary.

But in reality, the pressure to "use AI" and the shame of "using AI" coexist. Young people are stuck in between.


Are young people anti-AI?

It's important not to misunderstand that young people are simply becoming anti-AI.

Many of them acknowledge the convenience of AI.
They refine short emails.
They summarize complex materials.
They find causes of code errors.
They grasp overviews of unfamiliar fields.
They draft ideas.

In these uses, AI is indeed helpful. Especially when used as an aid by those who already have expertise, AI can speed up tasks.

But that's different from saying "just leave it to AI."
Rather, effectively using AI requires human judgment.
Knowledge to discern if AI's answer is correct.
Experience to judge if the output is mundane.
Ethics to decide where to entrust and where to think independently.
Without these, AI becomes a device that produces plausible mistakes rather than a convenient tool.

Young people are beginning to realize this.
So their criticism of AI isn't a rejection by technophobes.
Rather, it's realism born from everyday use.


The issue isn't AI but the "imposed AI society"

If we were to sum up young people's dissatisfaction with AI in one word, it might be anger at "having no choice."

Those who want to use AI should use it.
Those who don't want to use it shouldn't have to.
When using it, they should understand its purpose and limitations.
Ideally, that should have been enough.

However, in reality, AI is rapidly becoming something "you have to use."
Schools and companies, without fully organizing how to use it or the rules, are proceeding with its introduction.
Under the guise of "preparing for the AI era," students and young employees are being treated like test subjects.

Young people aren't rebelling against the existence of AI itself.
It's that their voices are barely reflected in the decision-making surrounding AI.

Their education is changing.
Their employment is changing.
Their work is changing.
The value of their creations and communications is changing.

Even though young people are at the center of these changes, they have almost no say in how they unfold.