To what extent can journalists use AI? - Could transparency backfire? The "Disclosure Dilemma" faced by news in the AI era

AI has become the "invisible co-editor" in newsrooms. Transcribing interview audio, organizing vast troves of documents, summarizing, proposing headlines, assisting with images: tasks that once took people and time can now be completed in minutes. For the financially strained news industry, AI is an attractive way to cut costs and increase speed. But the same convenience blurs accountability and shakes reader trust.


This tension has finally surfaced as a "labor-management issue." Journalists at ProPublica, known for independent investigative reporting, have taken a strong stance in negotiations over the use of AI, drawing attention as a major point of contention in labor disputes within the news industry.



1) What is happening on the ground now: AI is "helpful"—but accidents are also increasing

The reason why those on the ground cannot let go of AI is clear. In data-driven reporting, AI simplifies complex tasks and saves time. Automatic transcription of audio has become standard, and search services themselves have started incorporating AI summaries.


On the other hand, "accidents" caused by hasty adoption are also surfacing: corrections to erroneous AI summaries, posts published under non-existent bylines, and AI-generated content that fabricates facts. A symbolic incident came when Ars Technica published an article containing fabricated quotes generated by an AI tool, then retracted it and apologized. It was a textbook case of AI's characteristic weakness of mixing plausible falsehoods into its output, and even a veteran tech outlet fell victim to it.


In other words, AI can be both an "angel of efficiency" and a "demon of trust erosion." The problem is that news organizations have not been able to decide where to place the guardrails around this dual nature.



2) Why is there conflict: AI governance issues focus more on "authority" than "text"

Reducing the issue to merely "whether to write articles with AI" would be a misjudgment. The actual conflict is much broader.

  • Disclosure: How to communicate the use of AI to readers

  • Human-in-the-loop: Which processes require human judgment

  • Jobs: When AI replaces jobs, who is protected and who is relocated

  • Accountability: When mistakes occur, is the responsibility on the reporter, editor, or tool implementer


ProPublica's management argues that "locking in the operation of rapidly changing technology in contracts spanning several years is risky," while the labor union counters that "without locking it in, there are no safeguards." It is truly a tug-of-war over governance.


What is crucial here is that the scope of AI use is expanding from isolated tasks ("points") to the entire workflow ("a line"). From the start of reporting to publication, AI can touch countless processes. A simple rule like "always label AI use" may therefore fail to capture the reality on the ground.



3) Is "disclosure increases trust" an illusion?—The "disclosure dilemma"

Readers generally say they "want to know about AI usage." Yet when AI usage is explicitly stated, trust tends to decrease. This contradiction troubles newsrooms.


Why is this? Several reasons suggest themselves.
One is that readers directly associate AI with cost-cutting tools and corner-cutting. Another is that generative AI's "hallucinations" are now widely known, reinforcing the preconception that AI is a hotbed of errors. The Ars Technica retraction only deepened this distrust.


Moreover, a certain number of people believe that "AI should not be used in reporting in the first place." For such individuals, disclosure becomes a "warning label."


Ultimately, disclosure can serve as proof of sincerity, but it also risks evoking "quality degradation" or "the absence of a reporter." That is the Catch-22.



4) Rules can't keep up: The speed of change "rots" regulations

The claim that AI evolves too fast to regulate is often heard in the industry. A widely shared recent essay spread the notion that "if you haven't interacted with AI in the past few months, today's AI looks remarkably different," fueling the argument that any rule written now will soon be obsolete.


However, there is also the perspective that precisely because change is rapid, "fundamental principles" are necessary. For example,

  • Responsibility for fact-checking lies with humans

  • Quotes, proper nouns, and figures should be linked to verifiable sources

  • Sections involving AI should leave auditable logs

These are "operational frameworks" that should remain applicable even if the type of tool changes.


In fact, Trusting News is encouraging the creation of guidelines regarding the transparency and explanation of AI utilization.



5) Should "law" intervene: The ripples caused by New York State's NY FAIR News Act

The debate has finally spread to politics. In New York State, a proposed bill (the NY FAIR News Act) would require clear disclosures and labels on news content produced with generative AI and mandate review by human editors, drawing both support and opposition.


Supporters cite "ensuring transparency," "worker protection," and "reader rights" as reasons. Opponents and those with concerns warn that "the government might intrude on editorial decisions" and that it could infringe on the independence of reporting and freedom of expression.


What is visible here is the reality that AI governance is becoming a discourse on the design of social systems, beyond just "in-house operations."



6) Reactions on social media: Voices from the field are beginning to demand "governance" over "transparency"

It is interesting that on social media the focus of this issue is shifting from a simple dichotomy of "is AI good or bad" to "how should it be governed."


(A) Labor unions and journalists: Demanding not just employment but also "trust guardrails"

In communications from the NewsGuild, ProPublica's negotiations are seen as a "breakwater to prevent AI abuse," with posts showing a strong stance, even ready to strike. On Bluesky, there is also a tone of solidarity with ProPublica's actions.


The implication is this: "AI will inevitably be introduced. Therefore, bind transparency, human involvement, and minimization of employment impact with contracts and processes."


(B) Tech/reader communities: Pursuing "audit" and "who did it?" in response to Ars Technica's retraction

Regarding Ars Technica's retraction of "fabricated quotes," there are many reactions on forums and communities demanding a process audit, asking "why wasn't it verified" and "who used the tool and where could it have been stopped."


This is less an emotional critique of AI than a diagnosis of process-management failure. If AI output is treated as raw "material," then someone must be responsible for inspecting that material against standards: a reasonable demand.


(C) Reaction to regulatory bills: "Display" alone is insufficient / Conversely, "overstepping"

Reactions to the NY FAIR News Act are divided. On platforms like LinkedIn, there is a broader governance discourse that "AI is not just about tool implementation or display wording, but 'lifecycle governance' including data access, approval authority, and recordability."


On the other hand, there is strong criticism that the bill could undermine the independence of reporting. Skeptics doubt whether the law can mandate transparency without touching the "content of editing," which is difficult in practice.



7) How should newsrooms "govern": Three practical solutions

In response to the questions raised by this article, the practical solutions are not about "whether to use AI or not," but are summarized in the following three points.

① Define "significant AI use" and strongly bind only those areas

Uniform disclosure across all processes is prone to operational failure. Therefore,

  • Text generation

  • Significant summaries (those that substitute the main points of the news)

  • Generation or alteration of images and videos

  • Generation of quotes (treated almost as a prohibition)

Defining as "important" the areas that influence reader judgment and where errors can cause significant damage, and applying strong audits and disclosure only there, is a sound approach.
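The tiering above can be sketched as a simple policy table. This is a minimal illustration, not any newsroom's actual policy; the use categories and flag names are assumptions made up for this example.

```python
from enum import Enum

class AIUse(Enum):
    """Illustrative categories of AI use in a news workflow."""
    TRANSCRIPTION = "transcription"
    DOCUMENT_TRIAGE = "document_triage"
    TEXT_GENERATION = "text_generation"
    SUBSTANTIVE_SUMMARY = "substantive_summary"
    IMAGE_GENERATION = "image_generation"
    QUOTE_GENERATION = "quote_generation"

# Hypothetical policy: only "significant" uses trigger audit and disclosure,
# and quote generation is effectively prohibited.
POLICY = {
    AIUse.TRANSCRIPTION:       {"disclose": False, "audit": False, "allowed": True},
    AIUse.DOCUMENT_TRIAGE:     {"disclose": False, "audit": False, "allowed": True},
    AIUse.TEXT_GENERATION:     {"disclose": True,  "audit": True,  "allowed": True},
    AIUse.SUBSTANTIVE_SUMMARY: {"disclose": True,  "audit": True,  "allowed": True},
    AIUse.IMAGE_GENERATION:    {"disclose": True,  "audit": True,  "allowed": True},
    AIUse.QUOTE_GENERATION:    {"disclose": True,  "audit": True,  "allowed": False},
}

def check_use(use: AIUse) -> dict:
    """Look up the governance requirements for a given AI use."""
    return POLICY[use]
```

The point of a table like this is that the rule binds categories of use, not specific tools, so it survives tool churn.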

② Design Human-in-the-loop as "responsibility" rather than "editing"

"A human has checked" is weak. Who checked what, and by what standards? The design of logs and authority becomes central to governance. The Ars Technica incident highlighted the fragility of operational design over rule wording.

③ Make explanations to readers "aids to understanding" rather than "get-out-of-jail-free cards"

Disclosure is not a panacea for increasing trust. Therefore, instead of simply labeling "AI was used,"

  • For what purpose it was used (e.g., transcription, organizing materials)

  • Whether it was not used in parts where it should not be (e.g., generation of quotes was not done)

  • Correction procedures in case of errors

The "explanation" needs to be designed to include these elements.



Conclusion: AI has become a "governance subject" rather than a "tool"

AI increases efficiency in newsrooms. However, it simultaneously increases pathways for errors to enter and dissolves the boundaries of responsibility. The demand for AI clauses by labor unions, the push for transparency by legislation, and the call for audits by reader communities indicate that AI has transformed from merely a "convenient tool" to a "governance subject."


The future of news will not be about whether to include AI, but about competing to "maintain trust in a world with AI." The outcome will be determined more by governance design than model performance.


