Is Looking Too Good to Be True? The Future of Instagram Where "Awkward is Authentic" Accelerates

January 3, 2026, 10:01

1) What the Declaration "Impossible with the Naked Eye" Means

Judging authenticity by a sense that something looks "AI-like" or "unnatural" will no longer work. Adam Mosseri, who leads Instagram, has effectively conceded defeat, stating that as AI-generated images and videos come to mimic reality precisely, it will become difficult for the platform to keep identifying fakes by "visual inspection." An article from NDTV Profit summarizes his statement as "unable to spot AI slop with the naked eye," reporting it as a turning point that changes the premise of running a social media platform. NDTV Profit


The important point here is that Mosseri did not simply say "it's difficult to distinguish." He anticipates that even if detecting and labeling AI content succeeds at first, the further AI evolves, the more the detection side will be at a disadvantage. He therefore proposed a reversal in thinking: **shifting from "chasing fakes" to "fingerprinting real media."** NDTV Profit


2) "Fingerprinting the Real"—The Concept of "Chain of Custody"

What Mosseri envisions is not a method of labeling content somewhere along its "distribution" as it is generated, edited, and reposted, but fixing authenticity at the moment of capture. In a proposal introduced by Engadget, cameras (including those in smartphones) would attach a cryptographic signature at the time of shooting, creating a "chain of custody" that makes any later tampering traceable; a minimal sketch of what such a signed capture record could look like follows the list below. Engadget


If this becomes a reality, what we will see in the future is not a simple binary label of "Is this AI or not?" but rather:

  • information akin to the content's provenance, such as "when, where, and on which device" the image was taken,

  • if it was edited along the way, which parts are original and what was changed,

  • and who guarantees that history, and by what standards.
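
To make this concrete, here is a minimal sketch, in Python, of what capture-time signing could look like. The field names, the device identifier, and the use of an Ed25519 key via the `cryptography` library are illustrative assumptions rather than any announced Instagram or camera-maker specification; a real scheme would keep the private key in secure hardware and follow an industry provenance standard.

```python
# Minimal sketch of capture-time signing. The camera is assumed to hold an
# Ed25519 private key in secure hardware; it is generated here only for demo.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

camera_key = ed25519.Ed25519PrivateKey.generate()
camera_pub = camera_key.public_key()

def sign_capture(image_bytes: bytes, device_id: str, captured_at: str) -> dict:
    """Bind a hash of the pixels to capture metadata and sign the bundle."""
    record = {
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "device_id": device_id,
        "captured_at": captured_at,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": camera_key.sign(payload).hex()}

def verify_capture(image_bytes: bytes, provenance: dict) -> bool:
    """Check that the image still matches its signed capture record."""
    record = provenance["record"]
    if hashlib.sha256(image_bytes).hexdigest() != record["content_hash"]:
        return False  # pixels no longer match what the camera signed
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        camera_pub.verify(bytes.fromhex(provenance["signature"]), payload)
        return True
    except InvalidSignature:
        return False

photo = b"...raw bytes from the sensor..."
prov = sign_capture(photo, device_id="camera-001",
                    captured_at="2026-01-03T10:01:00Z")
print(verify_capture(photo, prov))         # True: untouched since capture
print(verify_capture(photo + b"x", prov))  # False: chain of custody broken
```

The design point worth noting is that the signature covers a hash of the pixels plus the capture metadata, so a verifier needs only the camera maker's public key; anything that changes the bytes, however slightly, breaks verification.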


However, reality is not so simple. As signed "real" content increases, content without a signature will increasingly be treated as "suspicious." Yet much socially important material carries no signature: old photos, scanned images, existing video assets, anonymous whistleblower footage. While technology can create trust, it can also create a **"trust gap."**


3) An Era Where "The More Perfect, The More Suspicious"? The Argument that "Awkwardness is Authentic"

What stands out in the NDTV Profit article is Mosseri's advice to creators. AI tends to mass-produce "clean and polished" looks. Therefore, for the time being, it reads as more genuine to deliberately lean toward the **"awkward," "unprocessed," and "raw."** Handheld videos, out-of-focus shots, unflattering close angles, and uncrafted expressions, like those in Instagram Stories, may hold value as "evidence of reality." NDTV Profit


Business Insider likewise reported Mosseri saying that the era of the "polished grid" has ended, and that the focus is shifting toward sharing unpolished daily life. Business Insider


However, this "awkwardness is authentic" is only a stopgap. The argument Mosseri made, as quoted by The Verge, looks further ahead: AI will eventually replicate even "flaws" and "imperfections." When that happens, people will have to shift the basis of their judgment from "what is shown" to "who is saying it." The Verge


In other words, **in a world where beauty becomes cheap, the next rarity will be "authenticity" and "relationships."**


4) We Are Transitioning to a Society Where "Doubt Comes First"

What NDTV Profit emphasizes is the change in user attitudes. Until now, we have treated the images and videos the algorithm recommends as "basically genuine." Going forward, this will reverse. Putting skepticism first, checking the motives and background of the person sharing, and reading the context of the information: such "viewing with doubt" will become the norm. NDTV Profit


The Decoder explains this point by touching on the biological and psychological premise that "humans tend to believe what they see," and notes that the shift to default skepticism can become "uncomfortable and stressful." The Decoder


When doubt becomes the default, even light entertainment in our daily scrolling can turn into a "verification task." As the cost of judging authenticity rises, exhausted individuals may drift toward "believing nothing" or "believing only what they want to see." This is fertile ground for social division and the spread of misinformation.


5) To What Extent Is the Platform Responsible?—The Issue of "Who Cleans Up the Mess"

Engadget's take is harsher. While acknowledging that identifying AI-generated content is becoming genuinely difficult, it critically portrays the stance as treating the problem "more as someone else's than Meta's." It points to the inadequacy of traditional identification measures such as watermarks, the lack of clarity in labels, and the history of platforms admitting "undetectability," ultimately sketching a structure in which AI slop prevails. Engadget


The points of contention, then, are these.

  • Assuming the increase in AI-generated content is inevitable, who bears the burden of the damage (fraud, impersonation, misinformation)?

  • If signatures are entrusted to camera manufacturers, how far will operating systems, apps, and social platforms cooperate?

  • Who will pay for cross-border standardization and verification infrastructure?

While "fingerprinting the real" seems technically sound, its implementation and operation are also political and economic issues.


6) Reactions on Social Media: Agreement, Irony, and the "Trust Economy" Debate

The statement was received in a variety of ways as it spread as news. While Threads itself was difficult to view directly in our environment, the overall trend of reactions is visible from surrounding social media posts and quoted summaries.


(1) "Labels Are Impossible, Proof of Origin"—A Journalistic Summary
Engadget journalist Karissa Bell summarized Mosseri's argument on Bluesky as "giving up on preemptively attaching AI labels and moving towards cameras proving 'what is real,'" clearly verbalizing the point. Bluesky Social


(2) From a Creator's Perspective: "Seeking Awkwardness Changes the Rules of Competition"
On LinkedIn, a post condensing Mosseri's long text into "9 key points" circulated, extending the "trust economy" debate with arguments such as "authenticity = trust as currency" and "in an era of infinite generation, 'who said it' becomes the value." Commenters added the view that personal consistency and judgment become the differentiating factors. LinkedIn


(3) Criticism: "Is the Platform Pushing the Flood It Created onto Others?"
In comments on and reshares of the Engadget and The Verge articles, readers noted that the platforms themselves promote AI features, and repeatedly asked how responsibility will be handled if "undetectability" is taken as a given. Engadget


7) Three Changes Likely to Occur (Predictions)

Finally, how will these points change day-to-day practice? Based on the article and surrounding reports, at least the following three shifts seem likely.

  1. The Standardization Competition Over "Authenticity Metadata"
    A battle for leadership will begin over the format in which cryptographic signatures, provenance, and editing history are stored (a toy example of such a manifest follows this list). Engadget

  2. Creators' Strategies Shift from "Beauty" to "Context"
    Simply making things beautiful will differentiate less, and everydayness, relationships, continuity, and narrative, the "accumulation unique to a person," will carry more weight. The Verge

  3. Users Judge by "Source" Rather Than by "Sight"
    Viewing with skepticism becomes the default, and information such as account creation date, past posts, sharing motives, and related communities becomes important. Platforms will face pressure to present "account context" rather than just the "media." NDTV Profit
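
As a thought experiment for point 1, here is one possible shape for such a manifest, sketched in Python. The field names and the idea of chaining each edit back to the previous content hash are invented for illustration; any real standard would define its own schema, signing rules, and trust anchors.

```python
# Hypothetical "authenticity metadata" manifest: a capture record followed by
# an edit history in which every entry links back to the previous content
# hash, so a verifier can replay the chain of custody step by step.
import hashlib
import json

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"raw capture bytes"
manifest = {
    "capture": {
        "device_id": "camera-001",
        "captured_at": "2026-01-03T10:01:00Z",
        "content_hash": content_hash(original),
    },
    "edits": [],
}

def record_edit(manifest: dict, edited: bytes, tool: str, action: str) -> None:
    """Append an edit entry that links the new bytes to the previous hash."""
    previous = (manifest["edits"][-1]["content_hash"]
                if manifest["edits"] else manifest["capture"]["content_hash"])
    manifest["edits"].append({
        "previous_hash": previous,
        "content_hash": content_hash(edited),
        "tool": tool,
        "action": action,
    })

cropped = b"raw capture bytes, cropped"
record_edit(manifest, cropped, tool="editor-app", action="crop")
print(json.dumps(manifest, indent=2))
```

In a full scheme each entry would also be signed (by the camera, then by each editing tool), which is exactly where the standardization and leadership battle described above would play out.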


Reference Articles

"Easier to Fingerprint Real Media"—Instagram Head Admits Naked Eye Not Enough to Spot AI Slop
Source: https://www.ndtvprofit.com/technology/easier-to-fingerprint-real-media-instagram-head-admits-naked-eye-not-enough-to-spot-ai-slop
