People don’t worry about losing jobs to AI, even when told it could happen soon
As debates about artificial intelligence and employment intensify, new research suggests that even warnings about near-term job automation do little to shake public confidence.
In a survey-based study, political scientists Anil Menon of the University of California, Merced, and Baobao Zhang of Syracuse University examined how people respond to forecasts of the arrival of “transformative AI,” ranging from as early as 2026 to as distant as 2060.
The study will appear in The Journal of Politics.
The researchers found that shorter timelines made respondents slightly more anxious about losing their jobs to automation, but did not meaningfully alter their views on when job losses would occur or their support for government responses such as retraining workers or providing a universal basic income.
In the survey of 2,440 U.S. adults, respondents who read about the rapid development of large language models and other generative systems — similar to those driving ChatGPT or text-to-image programs — did predict that automation might arrive somewhat sooner. Yet their policy preferences and economic outlooks remained essentially unchanged. When all informational treatments were combined, respondents showed only modest increases in concern about technological unemployment.
“These results suggest that Americans’ beliefs about automation risks are stubborn,” the authors said. “Even when told that human-level AI could arrive within just a few years, people don’t dramatically revise their expectations or demand new policies.”
Menon and Zhang say their findings challenge the assumption that making technological threats feel more immediate will mobilize public support for regulation or safety nets.
The study draws on construal level theory, which examines how people’s sense of temporal distance shapes their risk judgments. Participants who were told that AI breakthroughs were imminent (arriving in 2026) were not significantly more alarmed than those given a distant timeline (2060).
The survey, fielded in March 2024, was quota-representative by age, gender and political affiliation. Respondents were randomly assigned to one of four groups: a control group or one of three groups exposed to short-term (2026), medium-term (2030), or long-term (2060) automation forecasts. Each vignette described experts predicting that advances in machine learning and robotics could replace human workers in a wide range of professions, from software engineers and legal clerks to teachers and nurses.
After reading the vignette, participants estimated when their jobs and others’ jobs would be automated, reported confidence in those predictions, rated their worry about job loss, and indicated support for several policy responses, including limits on automation and increased AI research funding.
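As a rough illustration of this four-arm design, the sketch below simulates the random assignment; the condition labels and sample size come from the article, while the code itself is a hypothetical stand-in, not the researchers' survey instrument.

```python
import random

# The four experimental arms described above: a control group plus three
# timeline treatments (short term: 2026, medium term: 2030, long term: 2060).
CONDITIONS = ["control", "forecast_2026", "forecast_2030", "forecast_2060"]

def assign_conditions(n_respondents, seed=0):
    """Randomly assign respondent indices to the four survey arms."""
    rng = random.Random(seed)
    groups = {condition: [] for condition in CONDITIONS}
    for respondent in range(n_respondents):
        groups[rng.choice(CONDITIONS)].append(respondent)
    return groups

# 2,440 respondents, as reported in the article.
groups = assign_conditions(2440)
print({condition: len(ids) for condition, ids in groups.items()})
```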
While exposure to any timeline increased awareness of automation risks, only the 2060 condition significantly raised worry about job loss within 10 years, perhaps because that forecast seemed more credible than claims of imminent disruption.
These results arrive amid widespread debate over how large language models and other generative systems will reshape work. Tech leaders have predicted human-level AI may emerge within the decade, while critics argue that such forecasts exaggerate current capabilities.
The study by Menon and Zhang shows that the public remains cautious but not panicked, an insight that may help policymakers gauge when and how citizens will support interventions such as retraining programs or universal income proposals.
The authors noted several caveats. Their design focused on how timeline cues influence attitudes but did not test other psychological pathways, such as beliefs about AI’s economic trade-offs or the credibility of expert forecasts. The researchers also acknowledge that their single-wave survey cannot track changes in individuals’ perceptions over time. Future research, they suggested, could use multi-wave panels or examine reactions to specific types of AI systems.
“The public’s expectations about automation appear remarkably stable,” they said. “Understanding why they are so resistant to change is crucial for anticipating how societies will navigate the labor disruptions of the AI era.”
Journal
The Journal of Politics
Method of Research
Survey
Subject of Research
People
Article Title
Future Shock or Future Shrug? Public Responses to Varied Artificial Intelligence Development Timelines
Computer scientists build AI tool to spot risky and unenforceable contract terms
“ContractNerd” designed to identify both illegal and unfair clauses in leases and employment contracts
New York University
Contracts written by employers and landlords often contain unreasonable or ambiguous clauses, leaving the second parties (employees and tenants) facing unfair terms and vulnerable to unjust expenses or constraints.
For example, “Tenant must provide written notice of intent to vacate at a reasonable time”—commonly used phrasing in leases—is ambiguous because “reasonable” is undefined. Also, “Employee agrees not to work for any business in the United States for two years following termination,” often included in employee contracts, is unenforceable because many states prohibit broad non-compete agreements.
To better spot these problematic passages, a team of New York University researchers has created a tool that deploys large language models (LLMs) to analyze contractual agreements and characterize clauses across four categories: missing clauses, unenforceable clauses, legally sound clauses, and legal but risky clauses, identifying the latter as “high risk,” “medium risk,” or “low risk.”
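The paper's actual pipeline isn't reproduced here, but a minimal sketch of such an LLM-based clause-labeling step, assuming a generic `llm` completion function and invented prompt wording, might look like this:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# The four clause categories named in the article, plus the risk grades
# applied to clauses that are legal but risky.
CATEGORIES = ["missing", "unenforceable", "legally_sound", "legal_but_risky"]
RISK_LEVELS = ["high", "medium", "low"]

@dataclass
class ClauseAnalysis:
    clause: str
    category: str
    risk: Optional[str]  # set only when category == "legal_but_risky"

def classify_clause(clause: str, llm: Callable[[str], str]) -> ClauseAnalysis:
    """Ask an LLM to label a clause; `llm` is any prompt-to-text function."""
    prompt = (
        "Classify this contract clause as one of: "
        + ", ".join(CATEGORIES)
        + ". If legal_but_risky, also grade the risk as one of: "
        + ", ".join(RISK_LEVELS)
        + ". Answer in the form 'category,risk'.\n\nClause: " + clause
    )
    category, _, risk = llm(prompt).strip().partition(",")
    return ClauseAnalysis(clause, category.strip(), risk.strip() or None)
```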
The creators of the tool, ContractNerd, see it as a useful platform for both drafters and signing parties to navigate complex contracts by spotting potential legal risks and disputes.
“Many of us have to read and decide whether or not to sign contracts, but few of us have the legal training to understand them properly,” says Dennis Shasha, Silver Professor of Computer Science at New York University’s Courant Institute of Mathematical Sciences and the senior author of the research, which appears in the journal MDPI Electronics. “ContractNerd is an AI system that analyzes contracts for clauses that are missing, are extremely biased, are often illegal, or are ambiguous—and will suggest improvements to them.”
ContractNerd, which analyzes leases and employment contracts in New York City and Chicago, draws from several sources in spotting contractual risk that is “high,” “medium,” and “low.” These sources include Thomson Reuters Westlaw; Justia, a reference for standard rental-agreement language; and Agile Legal, a comprehensive library of legal clauses. It also takes into account state regulations.
To evaluate the tool’s effectiveness, the creators used a series of methods that compared ContractNerd with existing AI systems that analyze contracts. The first comparison showed that ContractNerd yielded the highest scores among these systems based on how accurately each predicted which clauses would be deemed unenforceable in legal cases.
In the second, an independent panel of laypersons evaluated the output of ContractNerd and the best of the other systems from the first comparison—goHeather—based on the following criteria:
Relevance: How directly the analysis addressed the content and intent of the clause
Accuracy: Whether the legal references and interpretations were factually and legally correct
Completeness: Whether the analysis covered all significant legal and contextual aspects
For each clause, the reviewers, who were blinded to the actual names of the tools in order to control for bias, indicated which system (“System A” or “System B”) produced the better output. Overall, ContractNerd received the better ratings.
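A small sketch of how such blinded pairwise votes could be tallied into per-criterion win rates follows; the vote data is invented for illustration.

```python
from collections import Counter

# Each vote records the criterion and which anonymized system the reviewer
# preferred for a given clause; the entries here are invented examples.
votes = [
    ("relevance", "System A"), ("relevance", "System B"),
    ("accuracy", "System A"), ("completeness", "System A"),
]

def win_rates(votes):
    """Share of votes each anonymized system won, per criterion."""
    totals, wins = Counter(), Counter()
    for criterion, system in votes:
        totals[criterion] += 1
        wins[criterion, system] += 1
    return {key: wins[key] / totals[key[0]] for key in wins}

print(win_rates(votes))  # e.g. {('relevance', 'System A'): 0.5, ...}
```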
In the third, the creators enlisted the help of NYU School of Law Professor Clayton Gillette, an expert in contracts law, to offer qualitative assessments of both systems using the same criteria. These assessments ranged from simple contract clauses, such as “No pets allowed,” to more complicated ones, such as “Tenant shall be responsible for all attorney fees incurred due to breach of this lease agreement.”
In general, Gillette found ContractNerd to be more thorough in its outputs but found goHeather’s analysis easier to comprehend.
“Contracts are of course about law, but they should also be fair to both parties,” says Shasha, who plans on expanding the geographic reach of the tool. “We see ContractNerd as an aid that can help guide users in determining if a contract is both legal and fair, potentially heading off both risky agreements and future legal disputes.”
The paper’s other authors were Musonda Sinkala and Yuge Duan, NYU graduate students at the time the prototype was built, and Haowen Yuan, an NYU undergraduate.
# # #
Journal
Electronics
Method of Research
Experimental study
Article Title
ContractNerd: An AI Tool to Find Unenforceable, Ambiguous, and Prejudicial Clauses in Contracts
Article Publication Date
27-Oct-2025
Generative AI can help athletes avoid injuries
Researchers developed an AI model that generates the best motions for athletes to use in training and in rehabilitation after injury
University of California - San Diego
Video caption: Comparison of generated samples from baseline models and BIGE. The yellow curve represents the movement of the hip joint over the entire squat cycle. BIGE generates a more realistic squat motion compared to other models.
Credit: University of California San Diego
Researchers at the University of California San Diego have created a model driven by generative AI that can help prevent injuries in athletes, aid in rehabilitation after an injury, and help athletes train better.
The model, called BIGE (for Biomechanics-informed GenAI for Exercise Science), was trained on athlete movements together with information about the biomechanical constraints on the human body, such as how much force a muscle can develop. It can generate videos of the best motions for athletes to mimic to avoid injury and improve performance during training, as well as motions that injured athletes can execute to keep exercising through rehabilitation.
“This approach is going to be the future,” predicts Andrew McCulloch, distinguished professor in the Shu Chien-Gene Lay Department of Bioengineering at UC San Diego and one of the paper’s senior authors.
To the best of the researchers’ knowledge, BIGE is the only model that brings together generative AI and realistic biomechanics. Most generative AI models tasked with generating movements such as squats produce results that are not consistent with the anatomical and mechanical constraints that limit real human movements. Meanwhile, methods that do not rely on generative AI to generate these movements require a prohibitive amount of computation.
To train the model, researchers used data from motion-capture videos of people performing squats. They then translated the motions onto 3D-skeletal models and used the computed forces to generate more physically realistic motions.
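The paper's exact method isn't detailed here, but the general pattern of biomechanics-constrained generation can be sketched as sampling candidate motions and filtering them by a physical-plausibility score. Everything below (the generator interface, the toy force estimator, and the force cap) is an illustrative assumption, not the BIGE implementation.

```python
import numpy as np

MAX_PEAK_FORCE = 5000.0  # hypothetical plausibility cap, in newtons

def estimate_peak_force(motion):
    """Toy stand-in for inverse-dynamics force estimation on a skeletal
    model; `motion` is a (frames, joints, 3) array of joint positions."""
    velocity = np.diff(motion, axis=0)
    acceleration = np.diff(velocity, axis=0)
    return float(np.abs(acceleration).max() * 70.0)  # toy body-mass scaling

def generate_feasible_motion(generator, n_candidates=32):
    """Sample candidate motions and return the one with the lowest peak
    force, preferring candidates that stay under the biomechanical cap."""
    candidates = [generator() for _ in range(n_candidates)]
    feasible = [m for m in candidates
                if estimate_peak_force(m) <= MAX_PEAK_FORCE]
    return min(feasible or candidates, key=estimate_peak_force)
```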
Next steps include using the model for movements beyond squats and personalizing the models for specific individuals.
“This methodology could be used by anyone,” said Rose Yu, a professor in the UC San Diego Department of Computer Science and Engineering and one of the paper’s senior authors as well.
For example, the model could be used to determine fall risks in the elderly.
The research team recently presented their work at the Learning for Dynamics & Control Conference at the University of Michigan in Ann Arbor.
BIGE: Biomechanics-informed GenAI for Exercise Science
Shubh Maheshwari, Anwesh Mohanty, Yadi Cao, and Rose Yu, UC San Diego Department of Computer Science and Engineering
Swithin Razu and Andrew McCulloch, UC San Diego Shu Chien-Gene Lay Department of Bioengineering
Method of Research
Observational study
Subject of Research
People
Article Title
BIGE: Biomechanics-informed GenAI for Exercise Science
Research alert: Rebalancing the gut: how AI solved a 25-year Crohn’s disease mystery
UC San Diego researchers have settled a decades-long debate surrounding the role of the first Crohn’s disease gene to be associated with a heightened risk for developing the autoimmune condition
Image caption: Co-first author Mahitha Shree Anandachar (center), a Ph.D. student in Biomedical Sciences at UC San Diego, with student research assistant Jasmin Salem (left) and Pradipta Ghosh, M.D. (right).
Credit: UC San Diego Health Sciences
The human gut contains two types of macrophages, or specialized white blood cells, that have very different but equally important roles in maintaining balance in the digestive system. Inflammatory macrophages fight microbial infections, while non-inflammatory macrophages repair damaged tissue. In Crohn’s disease — a form of inflammatory bowel disease (IBD) — an imbalance between these two types of macrophages can result in chronic gut inflammation, damaging the intestinal wall and causing pain and other symptoms.
Researchers at University of California San Diego School of Medicine have developed a new approach that integrates artificial intelligence (AI) with advanced molecular biology techniques to decode what determines whether a macrophage will become inflammatory or non-inflammatory.
The study also resolves a longstanding mystery surrounding the role of a gene called NOD2 in this decision-making process. NOD2 was discovered in 2001 and is the first gene linked to a heightened risk for Crohn’s disease.
Using a powerful machine learning tool, the researchers analyzed thousands of macrophage gene expression patterns from colon tissue affected by IBD and from healthy colon tissue. They identified a macrophage gene signature consisting of 53 genes that reliably separates reactive, inflammatory macrophages from tissue-healing macrophages.
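As an illustration of signature discovery of this kind, an L1-penalized classifier can be trained on expression profiles so that only a small set of discriminative genes keeps nonzero weight. The data below is randomly generated placeholder input, not the study's expression matrix or its method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))   # placeholder expression matrix
y = rng.integers(0, 2, size=200)   # 1 = inflammatory, 0 = non-inflammatory

# An L1 penalty drives most gene weights to zero, leaving a compact
# signature of genes that separate the two macrophage states.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
signature = np.flatnonzero(clf.coef_[0])
print(f"{signature.size} genes selected for the signature")
```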
One of these 53 genes encodes a protein called girdin. Further analysis revealed that in non-inflammatory macrophages, a specific region of the NOD2 protein binds to girdin. This suppresses runaway inflammation, clears harmful microbes and allows for the repair of tissues damaged by IBD. But the most common Crohn’s disease mutation in the NOD2 gene deletes the region of the protein that girdin would normally bind to, resulting in a dangerous imbalance between inflammatory and non-inflammatory macrophages.
“NOD2 functions as the body’s infection surveillance system,” said senior author Pradipta Ghosh, M.D., professor of cellular and molecular medicine at UC San Diego School of Medicine. “When bound to girdin, it detects invading pathogens and maintains gut immune balance by swiftly neutralizing them. Without this partnership, the NOD2 surveillance system collapses.”
The researchers then confirmed the importance of the interaction between NOD2 and girdin by comparing mouse models of Crohn’s disease lacking the girdin protein to those with girdin intact. They found that mice without girdin suffered an imbalance in their gut microbiome and developed inflammation of the small intestine. They often died of sepsis, a condition in which the immune system mounts an excessive response to an infection, causing inflammation throughout the body and damage to vital organs.
“The gut is a battlefield, and macrophages are the peacekeepers,” said co-first author Gajanan D. Katkar, Ph.D., assistant project scientist at UC San Diego School of Medicine. “For the first time, AI has allowed us to clearly define and track the players on two opposing teams.”
By uniting AI-driven classification, mechanistic biochemistry, and mouse models, the study resolves one of the longest-running debates in Crohn’s disease. The findings not only explain how a key genetic mutation drives the disease but could also contribute to the development of treatments aimed at restoring the relationship between girdin and NOD2.
The study was published on October 2 in the Journal of Clinical Investigation.
Electron micrographs show how macrophages expressing girdin neutralize pathogens by fusing phagosomes (P) with the cell’s lysosomes (L) to form phagolysosomes (PL), compartments where pathogens and cellular debris are broken down (left). This process is crucial for maintaining cellular homeostasis. In the absence of girdin, this fusion fails, allowing pathogens to evade degradation and escape neutralization (right).
Credit: UC San Diego Health Sciences
# # #
Journal
Journal of Clinical Investigation
AI-powered diabetes prevention program shows similar benefits to those led by people
Researchers from Johns Hopkins Medicine and the Johns Hopkins Bloomberg School of Public Health report that an AI-powered lifestyle intervention app for prediabetes reduced the risk of diabetes similarly to traditional, human-led programs in adults.
Funded by the National Institutes of Health and published in JAMA Oct. 27, the study is believed to be the first phase III randomized controlled clinical trial to demonstrate that an AI-powered diabetes prevention program (DPP) app helps patients meet diabetes risk-reduction benchmarks established by the Centers for Disease Control and Prevention (CDC) at rates comparable to those in human-led programs.
An estimated 97.6 million adults in the United States have prediabetes, a condition in which blood sugar levels are above normal but below the threshold for type 2 diabetes, putting them at increased risk of developing type 2 diabetes within the next five years. Adults with prediabetes who complete a human-led DPP, which helps participants make lifestyle changes to diet and exercise, are 58% less likely to develop type 2 diabetes, as shown in the CDC’s original Diabetes Prevention Program (DPP) clinical study. However, access barriers, such as scheduling conflicts and availability, have limited the reach of these programs.
Of the approximately 100 CDC-recognized digital DPPs available, AI-DPPs represent only a minor subset, and data demonstrating their effectiveness compared with human-led programs is lacking.
In the study, the researchers tested whether a fully AI-driven program could provide adults with prediabetes similar health benefits as yearlong, group-based programs led by human coaches.
“Even beyond diabetes prevention research, there have been very few randomized controlled trials that directly compare AI-based, patient-directed interventions to traditional human standards of care,” says Nestoras Mathioudakis, M.D., M.H.S., co-medical director of the Johns Hopkins Medicine Diabetes Prevention & Education Program and study principal investigator, regarding the absence of medical literature on health benefits of AI-based DPPs.
During the COVID-19 pandemic, 368 middle-aged (median age 58 years) participants volunteered to be referred to either one of four remote, 12-month, human-led programs or a reinforcement learning algorithm app that delivered personalized push notifications guiding weight management behaviors, physical activity and nutrition. Overall, participants were 71% female, 61% white, 27% Black, and 6% Hispanic. All participants met race-specific overweight or obese body mass index cutoffs, and had a diagnosis of prediabetes prior to starting the study.
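The Sweetch app's specific algorithm isn't described in detail here; as a loose illustration of reinforcement learning for personalized notifications, an epsilon-greedy bandit over message types might look like the following sketch, with all arm names and rewards hypothetical.

```python
import random

class NotificationBandit:
    """Epsilon-greedy choice among notification types, rewarded by whether
    the user acted on a message. Illustrative only; not the study's app."""

    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {arm: 0 for arm in arms}
        self.values = {arm: 0.0 for arm in arms}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.values))   # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        # Running mean of observed rewards for this notification type.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = NotificationBandit(["walk_prompt", "meal_tip", "weigh_in_reminder"])
arm = bandit.choose()
bandit.update(arm, reward=1.0)  # user engaged with the notification
```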
In both groups, a wrist activity monitor was used to track participant physical activity for seven consecutive days each month during the 12-month study.
While participating, study volunteers continued to receive medical care from their primary care providers, but could not participate in other structured diabetes programs or use medications that would affect glucose levels or body weight, such as metformin or GLP-1 agonists.
Once participants were referred, the researchers did not promote engagement in the program and followed up with both groups only at the 6- and 12-month marks.
“The greatest barrier to DPP completion is often initiation, hindered by logistical challenges like scheduling. So, in addition to clinical outcomes, we were interested in learning whether participants were more likely to start the asynchronous digital program after referral,” says study co-first author Benjamin Lalani, currently a medical student at Harvard Medical School and research associate working in the Mathioudakis Lab.
After 12 months, the study team found 31.7% of AI-DPP participants and 31.9% of human-led DPP participants met the CDC-defined composite benchmark for diabetes risk reduction (at least 5% weight loss, at least 4% weight loss plus 150 minutes of physical activity per week, or an absolute A1C reduction of at least 0.2%).
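The composite benchmark combines three criteria with a logical OR; a minimal sketch of the check follows (the function and argument names are ours, while the thresholds are the CDC figures quoted above).

```python
def meets_cdc_benchmark(weight_loss_pct, activity_min_per_week, a1c_reduction):
    """CDC composite diabetes risk-reduction benchmark, as described above:
    >= 5% weight loss, OR >= 4% weight loss plus >= 150 minutes of weekly
    physical activity, OR an absolute A1C reduction of >= 0.2 points."""
    return (weight_loss_pct >= 5.0
            or (weight_loss_pct >= 4.0 and activity_min_per_week >= 150)
            or a1c_reduction >= 0.2)

print(meets_cdc_benchmark(4.5, 160, 0.0))  # True: 4% loss plus activity
```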
Results demonstrated that similar outcomes can be achieved by a human coach-based program and an AI-DPP. Moreover, the AI-DPP group had higher rates of program initiation (93.4% vs 82.7%) and completion (63.9% vs 50.3%) in comparison to the traditional programs.
Researchers believe ease of access increased participant engagement in the AI group, showing that AI interventions could be an effective alternative to existing human-coached programs. As such, primary care providers may consider AI-led DPPs for patients in need of a lifestyle change program, especially those with considerable logistical constraints.
“Unlike human-coached programs, AI-DPPs can be fully automated and always available, extending their reach and making them resistant to factors that may limit access to human DPPs, like staffing shortages,” says Lalani. “So, while the black-box nature of AI is a commonly cited barrier to clinical adoption, our study shows that the AI-DPP can provide reliable personalized interventions.”
Looking ahead, the study team is interested in exploring how the AI app outcomes they observed translate to broader, underserved, real-world patient populations who may not have the time or resources to engage in traditional lifestyle intervention programs.
Additionally, several secondary analyses are underway, which intend to explore patient preference with AI vs. human modality, the impact of engagement on outcomes in each intervention and costs associated with AI-led DPPs.
As a part of the study, Sweetch Health, Ltd. and the participating DPPs received financial compensation for providing services to participants. The DPPs did not have access to the overall cohort results, did not analyze data from the study, and did not provide interpretations of the results.
Maruthur and The Johns Hopkins University receive royalty distributions related to an online diabetes prevention program not discussed in the publication. The arrangement terms have been reviewed and approved by The Johns Hopkins University in accordance with its conflict-of-interest policies.
The study was funded by the National Institute of Diabetes and Digestive and Kidney Diseases (R01DK125780) and the National Institute on Aging (K01AG076967). Support was also provided by the Johns Hopkins Institute for Clinical and Translational Research, which was partially funded by the National Center for Advancing Translational Sciences (UL1TR001079).
Additional researchers who contributed to this study include Mohammed S. Abusamaan, Defne Alver, Adrian Dobs, John McGready, Kristin Riekert, Benjamin Ringham, Aliyah Shehadeh, Fatmata Vandi, Amal A. Wanigatunga, Daniel Zade, and Nisa M. Maruthur from Johns Hopkins, Brian Kane from Tower Health Medical Group Family Medicine and Mary Alderfer from Reading Hospital Tower Health.
DOI: 10.1001/jama.2025.19563
Journal
JAMA
