Monday, October 27, 2025

 

People don’t worry about losing jobs to AI, even when told it could happen soon




University of California - Merced


As debates about artificial intelligence and employment intensify, new research suggests that even warnings about near-term job automation do little to shake public confidence.

In a survey-based study, political scientists Anil Menon of the University of California, Merced, and Baobao Zhang of Syracuse University examined how people respond to forecasts of the arrival of “transformative AI,” ranging from as early as 2026 to as distant as 2060.

The study will appear in The Journal of Politics.

The researchers found that shorter timelines made respondents slightly more anxious about losing their jobs to automation, but did not meaningfully alter their views on when job losses would occur or their support for government responses such as retraining workers or providing a universal basic income.

In the survey of 2,440 U.S. adults, respondents who read about the rapid development of large language models and other generative systems, similar to those driving ChatGPT or text-to-image programs, did predict that automation might come somewhat sooner. Yet their policy preferences and economic outlooks remained essentially unchanged. When all informational treatments were combined, respondents showed only modest increases in concern about technological unemployment.

“These results suggest that Americans’ beliefs about automation risks are stubborn,” the authors said. “Even when told that human-level AI could arrive within just a few years, people don’t dramatically revise their expectations or demand new policies.”

Menon and Zhang say their findings challenge the assumption that making technological threats feel more immediate will mobilize public support for regulation or safety nets.

The study draws on construal level theory, which examines how people’s sense of time shapes their risk judgments. Participants who were told that AI breakthroughs were imminent (arriving by 2026) were not significantly more alarmed than those given the most distant timeline (2060).

The survey, fielded in March 2024, was quota-representative by age, gender, and political affiliation. Respondents were randomly assigned to one of four groups: a control group or one of three groups shown short-term (2026), medium-term (2030), or long-term (2060) automation forecasts. Each vignette described experts predicting that advances in machine learning and robotics could replace human workers in a wide range of professions, from software engineers and legal clerks to teachers and nurses.

After reading the vignette, participants estimated when their jobs and others’ jobs would be automated, reported confidence in those predictions, rated their worry about job loss, and indicated support for several policy responses, including limits on automation and increased AI research funding.
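
This is a standard between-subjects experiment. As a rough illustration only (the sketch below is not the authors’ code, and the column names and values are hypothetical), one might assign respondents to the four conditions at random and compare average worry across groups:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical respondent pool; the real study surveyed 2,440 quota-sampled U.S. adults.
n = 2440
conditions = ["control", "short_2026", "medium_2030", "long_2060"]

df = pd.DataFrame({
    "respondent_id": np.arange(n),
    # Random assignment to one of the four experimental groups.
    "condition": rng.choice(conditions, size=n),
    # Placeholder outcome: worry about job loss on a 1-to-5 scale.
    "worry_job_loss": rng.integers(1, 6, size=n),
})

# Compare average worry by condition (a simple difference-in-means check).
print(df.groupby("condition")["worry_job_loss"].mean())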

While exposure to any timeline increased awareness of automation risks, only the 2060 condition significantly raised worry about job loss within 10 years, perhaps because that forecast seemed more credible than claims of imminent disruption.

These results arrive amid widespread debate over how large language models and other generative systems will reshape work. Tech leaders have predicted human-level AI may emerge within the decade, while critics argue that such forecasts exaggerate current capabilities.

The study by Menon and Zhang shows that the public remains cautious but not panicked, an insight that may help policymakers gauge when and how citizens will support interventions such as retraining programs or universal income proposals.

The authors noted several caveats. Their design focused on how timeline cues influence attitudes but did not test other psychological pathways, such as beliefs about AI’s economic trade-offs or the credibility of expert forecasts. The researchers also acknowledged that their single-wave survey cannot track changes in individuals’ perceptions over time. Future research, they suggested, could use multi-wave panels or examine reactions to specific types of AI systems.

“The public’s expectations about automation appear remarkably stable,” they said. “Understanding why they are so resistant to change is crucial for anticipating how societies will navigate the labor disruptions of the AI era.”

 

Computer scientists build AI tool to spot risky and unenforceable contract terms



“ContractNerd” designed to identify both illegal and unfair clauses in leases and employment contracts



New York University





Contracts written by employers and landlords often contain unreasonable or ambiguous clauses, leaving the other parties (employees and tenants) vulnerable to unfair terms, unjust expenses, or constraints.

For example, “Tenant must provide written notice of intent to vacate at a reasonable time,” commonly used phrasing in leases, is ambiguous because “reasonable” is undefined. Similarly, “Employee agrees not to work for any business in the United States for two years following termination,” often included in employment contracts, is unenforceable in many states, which prohibit broad non-compete agreements.

To better spot these problematic passages, a team of New York University researchers has created a tool that uses large language models (LLMs) to analyze contractual agreements and sort clauses into four categories: missing clauses, unenforceable clauses, legally sound clauses, and legal but risky clauses. Clauses in the last category are further rated as “high risk,” “medium risk,” or “low risk.”
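
The article does not describe ContractNerd’s internals, but the general pattern of asking an LLM to label a single clause might look like the sketch below. The prompt wording, the classify_clause helper, the model name, and the use of the OpenAI Python client are illustrative assumptions, not the published implementation:

from openai import OpenAI  # assumes the standard OpenAI Python client is available

client = OpenAI()

CATEGORIES = ["missing", "unenforceable", "legally sound", "legal but risky"]
RISK_LEVELS = ["high risk", "medium risk", "low risk"]

def classify_clause(clause: str, jurisdiction: str = "New York City") -> str:
    """Ask an LLM to label one clause; illustrative only, not ContractNerd's prompt."""
    prompt = (
        f"You are reviewing a residential lease governed by {jurisdiction} law.\n"
        f'Clause: "{clause}"\n'
        f"Classify the clause as one of: {', '.join(CATEGORIES)}. "
        f"If it is legal but risky, also rate it as one of: {', '.join(RISK_LEVELS)}. "
        "Answer with the label(s) only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

print(classify_clause(
    "Tenant must provide written notice of intent to vacate at a reasonable time."
))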

The creators of the tool, ContractNerd, see it as a useful platform that helps both drafters and signing parties navigate complex contracts by spotting potential legal risks before they turn into disputes.

“Many of us have to read and decide whether or not to sign contracts, but few of us have the legal training to understand them properly,” says Dennis Shasha, Silver Professor of Computer Science at New York University’s Courant Institute of Mathematical Sciences and the senior author of the research, which appears in the journal MDPI Electronics. “ContractNerd is an AI system that analyzes contracts for clauses that are missing, are extremely biased, are often illegal, or are ambiguous—and will suggest improvements to them.” 

ContractNerd, which analyzes leases and employment contracts in New York City and Chicago, draws on several sources when rating contractual risk as “high,” “medium,” or “low.” These sources include Thomson Reuters Westlaw; Justia, a reference for standard rental-agreement language; and Agile Legal, a comprehensive library of legal clauses. It also takes into account state regulations.

To evaluate the tool’s effectiveness, the creators ran a series of comparisons between ContractNerd and existing AI systems that analyze contracts. In the first, ContractNerd scored highest among these systems on how accurately each predicted which clauses would be deemed unenforceable in legal cases.
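
The release does not give the scoring details, but the underlying measure is essentially accuracy over clauses whose enforceability was later settled by courts. A toy version of that calculation, with made-up labels, looks like this:

# Hypothetical ground truth: 1 if a court deemed the clause unenforceable, 0 otherwise.
truth     = [1, 0, 1, 1, 0]
predicted = [1, 0, 0, 1, 0]  # one system's predictions for the same clauses

accuracy = sum(t == p for t, p in zip(truth, predicted)) / len(truth)
print(f"Accuracy: {accuracy:.0%}")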

In the second, an independent panel of laypersons evaluated the output of ContractNerd and the best-performing of the other systems from the first comparison, GoHeather, based on the following criteria:

  • Relevance: How directly the analysis addressed the content and intent of the clause

  • Accuracy: Whether the legal references and interpretations were factually and legally correct

  • Completeness: Whether the analysis covered all significant legal and contextual aspects

For each clause, the reviewers, who were blinded to the tools’ actual names in order to control for bias, indicated which system (“System A” or “System B”) produced the better output. Overall, ContractNerd received the better ratings.
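
As a rough sketch of how such blind pairwise judgments could be tallied (the data and code below are hypothetical, not the study’s materials):

from collections import Counter

# Each record: for one clause and one criterion, which anonymized system the
# reviewer preferred. "A" and "B" stand in for the hidden tool names.
judgments = [
    {"clause": 1, "criterion": "relevance", "winner": "A"},
    {"clause": 1, "criterion": "accuracy", "winner": "B"},
    {"clause": 2, "criterion": "completeness", "winner": "A"},
]

wins = Counter(j["winner"] for j in judgments)
total = sum(wins.values())
for system, count in sorted(wins.items()):
    print(f"System {system}: {count}/{total} judgments preferred ({count / total:.0%})")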

In the third, the creators enlisted NYU School of Law Professor Clayton Gillette, an expert in contract law, to offer qualitative assessments of both systems using the same criteria. These assessments ranged from simple contract clauses, such as “No pets allowed,” to more complicated ones, such as “Tenant shall be responsible for all attorney fees incurred due to breach of this lease agreement.”

In general, Gillette found ContractNerd to be more thorough in its outputs, while GoHeather’s analysis was easier to comprehend.

“Contracts are of course about law, but they should also be fair to both parties,” says Shasha, who plans on expanding the geographic reach of the tool. “We see ContractNerd as an aid that can help guide users in determining if a contract is both legal and fair, potentially heading off both risky agreements and future legal disputes.”

The paper’s other authors were Musonda Sinkala and Yuge Duan, NYU graduate students at the time the prototype was built, and Haowen Yuan, an NYU undergraduate.

# # #
