Thursday, April 09, 2026

New York Times Guild Slams Paper’s AI Policies as ‘Woefully Inadequate’

"When the Times instead publishes AI-generated work, intentionally or not, our readers lose trust in what we do. This is unacceptable," the union members write

Corbin Bolies
TheWrap
Wed, April 8, 2026 



New York Times building (Credit: Craig T Fruchtman/Getty Images)

Members of the New York Times’ union slammed the company’s AI policies in a letter to management as “woefully inadequate” on Tuesday, citing TheWrap’s report on how a freelance book critic used AI for a Times book review as evidence that AI-generated content makes “readers lose trust in what we do.”

“Our dedicated human journalists — including and especially the Times Guild’s 1,500 members — make this paper a reliable source for millions of subscribers who want quality reporting and commentary,” the letter, signed by the union’s AI subcommittee members Isaac Aronow, Parker Richards and Lydia DePillis, read. “When the Times instead publishes AI-generated work, intentionally or not, our readers lose trust in what we do. This is unacceptable. At present, the Times’ standards on AI use are woefully inadequate.”

The letter, which was first reported by Axios, was addressed to Times CEO and president Meredith Kopit Levien, publisher A.G. Sulzberger, executive editor Joe Kahn and opinion editor Katie Kingsbury. It was also addressed to managing editors Marc Lacey and Carolyn Ryan, who are the management representatives in contract negotiations.

The staffers highlighted TheWrap’s report from last week, which revealed the paper was cutting ties with freelance book critic Alex Preston after it discovered he used AI to help write a review that incorporated elements of a Guardian piece on the same book. Preston told TheWrap he used the tool “improperly” and failed to catch “overlapping language” with the Guardian review, and the Times called the usage “a serious violation of the Times’s integrity and fundamental journalistic standards.”

The staffers said the Times’ current public guidelines on the technology are “often unclear or open to interpretation,” arguing they place the burden on writers and editors instead of company leaders.

“The company calls on employees to use AI ‘transparently,’ but often fails to disclose how AI is used in stories (and, conversely, has at times claimed that AI did work that was in fact done by human Guild members),” the members wrote. “We are told to use AI ‘ethically,’ but given little guidance on what exactly that means.”

The guild, which represents roughly 1,500 Times staffers, did not specify to which stories it was referring. The guild has also asked for the company to include protections around AI in the performance review process, offer clearer disclosures over how the technology is used in stories and strengthen protections over how AI uses a Times staffer’s name, image and likeness.

Negotiations around AI have stalled talks between the Times and its guild as both sides have tried to hammer out a new agreement following the last contract’s Feb. 28 expiration.

Lacey told Times staffers in a letter on Tuesday that both sides agreed that “having strong AI guidelines and standards” would “ensure the integrity of our work and maintain the trust of our readers,” but noted that the guild’s quest to define those guidelines in the contract could dampen how the paper experiments with the evolving technology.

“Where the company conflicts with guild leadership is whether we write AI restrictions and prohibitions into a contract lasting several years,” he wrote. “AI technology is ceaselessly evolving – quickly – and we believe that this rapid change is precisely why we must remain flexible.”

Lacey also said both sides have tentatively agreed to disability accommodation language, a point the company previously tried to tie to its AI proposal.

AI negotiations have spread across newsrooms. Staffers at the Sacramento Bee and the Charlotte Observer, two news outlets owned by McClatchy, expressed concerns with management over a new AI tool meant to repurpose older stories under new headlines, and unionized ProPublica staffers staged a 24-hour walkout on Wednesday after contract talks — including over AI provisions — broke down.




Gen Z workers are so fearful AI will take their job they’re intentionally sabotaging their company’s AI rollout

Jake Angelo
Wed, April 8, 2026 


Many employees are refusing to use AI tools, with some even admitting to tampering with performance reviews to make AI appear less effective. (Maskot/Getty Images)

AI’s capabilities are growing more sophisticated by the day, and business leaders are rushing to adopt the technology to remain competitive.

But one obstacle to AI adoption is catching companies off guard: their own workers.

A new report published Tuesday from enterprise AI agent firm Writer and research firm Workplace Intelligence finds a significant share of employees are actively trying to sabotage their company’s AI rollout. The report—a survey of 2,400 knowledge workers across the U.S., the U.K., and Europe, including 1,200 C-suite executives—found 29% of employees admit to sabotaging their company’s AI strategy. That number jumps to 44% among Gen Z workers.

The sabotage entails entering proprietary information into public AI tools, or using unapproved AI tools. Some employees report outright refusing to use AI tools. Others have even admitted to tampering with performance reviews or intentionally generating low-output work to make AI appear less effective.

As AI becomes ubiquitous across society, many people are growing to hate it. A recent NBC News poll found just 26% of registered U.S. voters have a positive view of AI, while 46% hold a negative view.

Meanwhile, business leaders and AI experts have issued successive warnings about the threat AI poses to human workers. Anthropic CEO Dario Amodei said AI could snatch half of entry-level, white-collar jobs, roles many Gen Z workers hold today. Microsoft AI chief Mustafa Suleyman issued a similar warning earlier this year, saying all white-collar work could be automated in 18 months.

An Anthropic study released last month found AI is already theoretically capable of completing the majority of tasks associated with computer science, law, business, finance, and other major white-collar fields. As the fear of AI automation slowly materializes into reality, many workers, including a sizable chunk of Gen Z employees, are pushing back against the assumed doomed fate of their careers.
Why employees are sabotaging AI—and why it’s backfiring

Of those workers who admitted to sabotaging their company’s AI technology, 30% cited fear AI would take their job, 28% cited concerns about the technology’s security risks, 26% said the technology diminishes their creativity or value within the company, and another 26% cited a poorly executed company AI strategy. “FOBO”—fear of becoming obsolete—is widespread: KPMG similarly found in November that four in 10 workers fear AI could take their job. But ironically, the survey found workers who refuse to adopt AI are actually more vulnerable to layoffs than those embracing the technology, with 60% of executives saying they’re considering cutting employees who refuse to adopt AI.

Even as some companies rush to implement AI agents, an MIT report released last year found that 95% of generative AI pilots at companies are failing not because of the quality of the technology, but because of the learning gap between tools and organizations.

Yet as some employees drag their feet, researchers found the workers actively implementing AI into their workflows are getting ahead. Dan Schawbel, managing partner at Workplace Intelligence, said AI “super-users,” workers who have mastered generative AI to a high degree of proficiency, are being rewarded for their work more so than laggards.

“The super-users we surveyed were around 3x more likely to have received both a promotion and pay raise in the past year, compared to employees who have been slow to adopt these tools,” Schawbel said in a statement. “Top AI users are also saving nearly nine hours per week using AI—4.5x more than the two hours a week reported by AI laggards.”

A staggering 77% of executives said those employees who refuse to become proficient in AI won’t be considered for promotions or leadership roles as business leaders aim to steer their companies into the future with AI, according to the Writer and Workplace Intelligence report. And 69% are planning AI-related layoffs. But May Habib, CEO and cofounder of Writer, said the most successful companies are not relying on layoffs: They’re optimizing the balance between agentic AI and human capabilities.

“The leaders who are putting in the work to radically redesign operations with human-agent collaboration at the center are the ones compounding their advantage in ways competitors can’t replicate,” Habib said in a statement.


