The Double Standard: Blocking AI While Deploying AI

In an era when artificial intelligence threatens to displace traditional journalism, a glaring contradiction has emerged: news organizations that block AI crawlers from accessing their content are increasingly using AI to generate content of their own. This posture not only undermines the values of transparency and fairness these outlets invoke, but also exposes a troubling hypocrisy in the media’s engagement with AI.

Fortifying the Gates Against AI
Many established news outlets have taken concrete steps to prevent AI from accessing their content. As of early 2024, over 88 percent of top news outlets, including The New York Times, The Washington Post, and The Guardian, were blocking AI data-collection bots such as OpenAI’s GPTBot via their robots.txt files. Echoing these moves, a Reuters Institute report found that nearly 80 percent of prominent U.S. news organizations blocked OpenAI’s crawlers by the end of 2023, while roughly 36 percent blocked Google’s AI crawler.
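For reference, the blocking mechanism itself is just a few lines of plain text. A minimal robots.txt snippet, using the user-agent tokens OpenAI and Google publish for their AI crawlers:

    # Deny OpenAI's training crawler access to the whole site
    User-agent: GPTBot
    Disallow: /

    # Deny Google's AI-training crawler; ordinary search indexing
    # by Googlebot is unaffected
    User-agent: Google-Extended
    Disallow: /

Compliance, however, is voluntary: robots.txt is a convention, not an enforcement mechanism.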

These restrictions are not limited to voluntary technical guidelines. Cloudflare has gone further, blocking known AI crawlers by default and offering publishers a “Pay Per Crawl” model, allowing access to their content only under specific licensing terms. The intent is clear: content creators want to retain control, demand compensation, and prevent unlicensed harvesting of their journalism.
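Cloudflare has said Pay Per Crawl builds on the long-dormant HTTP 402 “Payment Required” status code. A hedged sketch of the exchange follows; the header names reflect Cloudflare’s announcement but should be read as illustrative rather than a stable API, and the hostname and bot name are hypothetical:

    # Crawler requests a page with no payment intent
    GET /article HTTP/1.1
    Host: news.example.com
    User-Agent: ExampleBot/1.0

    # Server declines and quotes a price
    HTTP/1.1 402 Payment Required
    crawler-price: USD 0.01

    # Crawler retries, agreeing to the quoted price
    GET /article HTTP/1.1
    Host: news.example.com
    crawler-exact-price: USD 0.01

    # Server delivers the content and records the charge
    HTTP/1.1 200 OK

The design choice matters: rather than a binary allow/deny, every crawl becomes a priced transaction the publisher controls.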

But Then They Use AI To Generate Their Own Content
While these publishers fortify their content against external AI exploitation, they increasingly turn to AI internally to produce articles, summaries, and other content. The shift has real consequences: jobs are being cut as AI-generated output replaces human-created journalism.
• Reach plc, publisher of the Mirror, the Express, and other titles, recently announced a restructuring that places 600 jobs at risk, including 321 editorial positions, as it pivots toward AI-driven formats like video and live content.
• Business Insider CEO Barbara Peng confirmed that roughly 21 percent of staff were laid off to offset declines in search traffic, while the company shifts resources toward AI-generated features such as automated audio briefings.
• CNET faced backlash after it published numerous AI-generated stories under staff bylines, some containing factual errors. The fallout prompted corrections and pushback from newsroom employees.

The Hypocrisy Unfolds
This dissonance, blocking AI while deploying it, lies at the heart of the hypocrisy. On one hand, publishers argue for content sovereignty: preventing AI from freely ingesting and repurposing their work. On the other hand, they quietly harness AI for their own ends, often reducing staffing under the pretense of innovation or cost-cutting.

This creates a scenario in which:
• Outside AI is denied access to public content, while in-house AI is trusted to produce public-facing content.
• Human labor is dismissed in the name of progress, even as in-house AI draws freely on the cultural and journalistic capital those workers built over years.
• Control and compensation arguments are asserted to keep outside AI out, yet the same technology is deployed strategically to reshape newsroom economics.

This approach fails to reconcile the ethical tensions it embodies. If publishers truly value journalistic integrity, transparency, and compensation, then applying those principles selectively, accepting them only when convenient, is disingenuous. The news media’s simultaneous rejection and embrace of AI reflect a transactional, rather than principled, stance.

A Path Forward – or a Mirage?
Some publishers are demanding fair licensing models, seeking to monetize AI access rather than simply deny it. The emergence of frameworks like the Really Simple Licensing (RSL) standard allows websites to specify terms, such as royalties or pay-per-inference charges, in their robots.txt, aiming for a more equitable exchange between AI firms and content creators.
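RSL is young and its details may evolve, but the published idea is to point crawlers from robots.txt to a machine-readable license file. A hedged sketch under that assumption; the directive, URLs, and XML element names below are illustrative, not authoritative:

    # robots.txt: advertise machine-readable licensing terms
    License: https://news.example.com/license.xml

    <!-- license.xml: illustrative RSL-style terms -->
    <rsl xmlns="https://rslstandard.org/rsl">
      <content url="/">
        <license>
          <!-- hypothetical terms: AI use permitted only
               against a per-inference fee -->
          <payment type="per-inference">
            <amount currency="USD">0.001</amount>
          </payment>
        </license>
      </content>
    </rsl>

In principle, this shifts robots.txt from a blunt yes/no signal to a negotiable price list.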

Still, that measured approach contrasts sharply with using AI to cut costs internally, a strategy that further alienates journalists and erodes trust in media institutions.

Integrity or Expedience?
The juxtaposition of content protection and AI deployment in newsrooms lays bare a cynical calculus: AI is off-limits when others use it, but eminently acceptable when it serves internal profit goals. This selective embrace erodes the moral foundation of journalistic institutions and raises urgent questions:
• Can publishers reconcile the need for revenue with the ethical imperatives of transparency and fairness?
• Will the rapid rise of AI content displace more journalists than it empowers?
• And ultimately, can media institutions craft coherent policies that honor both their creators and the audience’s right to trustworthy news?

Perhaps there is a path toward licensing frameworks and responsible AI use that aligns with journalistic values. But as long as publishers’ message amounts to “not us scraping, just us firing,” the hypocrisy remains undeniable.
