It’s possible that I shall make an ass of myself. But in that case one can always get out of it with a little dialectic. I have, of course, so worded my proposition as to be right either way (K. Marx, Letter to F. Engels on the Indian Mutiny)
Saturday, January 24, 2026
Malicious AI swarms pose emergent threats to democracy
Summary author: Walter Beckwith
American Association for the Advancement of Science (AAAS)
In a Policy Forum, Daniel Schroeder and colleagues discuss the risks of malicious “Artificial Intelligence (AI) swarms”, which enable a new class of large-scale, coordinated disinformation campaigns that pose significant risks to democracy. Manipulation of public opinion has long relied on rhetoric and propaganda. However, modern AI systems have created powerful new tools for shaping human beliefs and behavior on a societal scale. Large language models (LLMs) and autonomous agents can now generate vast amounts of persuasive, human-like content. When combined into collaborative AI swarms – collections of AI-driven personas that retain memory and identity – these systems can mimic social dynamics and easily infiltrate online communities, making false narratives appear credible and widely shared. According to the authors, unlike earlier labor-intensive influence operations run by humans, AI systems can operate cheaply, consistently, and at tremendous scale, transforming once-isolated disinformation efforts into persistent, adaptive campaigns that pose serious risks to democratic processes worldwide.

Here, Schroeder et al. discuss the technology underpinning these malicious systems and identify pathways through which they can harm democratic discourse through widely used digital platforms. The authors argue that defense against these systems must be layered and pragmatic, aiming not for total prevention of their use, which is highly unlikely, but for raising the cost, risk, and visibility of manipulation. Because such efforts would require global coordination outside of corporate and governmental interests, Schroeder et al. propose a distributed “AI Influence Observatory,” consisting of a network of academic groups, nongovernmental organizations, and other civil institutions to guide independent oversight and action.

“Success depends on fostering collaborative action without hindering scientific research while ensuring that the public sphere remains both resilient and accountable,” write the authors. “By committing now to rigorous measurement, proportionate safeguards, and shared oversight, upcoming elections could even become a proving ground for, rather than a setback to, democratic AI governance.”
Figure: Left: The share of AI-written Python functions (2019-2024) grows rapidly, but countries differ in their adoption rates. The U.S. leads the early adoption of generative AI, followed by European nations such as France and Germany. From 2023 onward, India rapidly catches up, whereas adoption in China and Russia progresses more slowly. Right: Comparing usage rates for the same programmers at different points in time, generative AI adoption is associated with increased productivity (commits), breadth of functionality (library use), and exploration of new functionality (library entry), but only for senior developers; early-career developers do not derive any statistically significant benefits from using generative AI.
Generative AI is reshaping software development – and fast. A new study published in Science shows that AI-assisted coding is spreading rapidly, though unevenly: in the U.S., the share of new code relying on AI rose from 5% in 2022 to 29% in early 2025, compared with just 12% in China. AI usage is highest among less experienced programmers, but productivity gains go to seasoned developers.
The software industry is enormous. In the U.S. economy alone, firms spend an estimated $600 billion a year in wages on coding-related work. Every day, billions of lines of code keep the global economy running. How is AI changing this backbone of modern life?
In a study published in Science, a research team led by the Complexity Science Hub (CSH) found that by the end of 2024, around one-third of all newly written software functions – self-contained subroutines in a computer program – in the United States were already being created with the support of AI systems.
“We analyzed more than 30 million Python contributions from roughly 160,000 developers on GitHub, the world’s largest collaborative programming platform,” says Simone Daniotti of CSH and Utrecht University. GitHub records every step of coding – additions, edits, improvements – allowing researchers to track programming work across the globe in real time. Python is one of the most widely used programming languages in the world.
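The article does not spell out the data pipeline, but as a rough sketch, commit-level activity of the kind described can be pulled from GitHub’s public REST API; the repository below is a placeholder, and the study’s actual collection method at this scale will certainly differ.

```python
import requests

# Minimal sketch: pull recent commit activity for one repository from GitHub's
# public REST API. The repository is a placeholder; this is not the study's
# actual pipeline, which covered millions of contributions.
OWNER, REPO = "example-org", "example-repo"
base = f"https://api.github.com/repos/{OWNER}/{REPO}/commits"
headers = {"Accept": "application/vnd.github+json"}  # add a token for higher rate limits

commits = requests.get(base, headers=headers, params={"per_page": 30}).json()

for c in commits:
    sha = c["sha"]
    # The detail view of each commit lists the files it touched and their patches,
    # which is what makes function-level tracking of Python code possible.
    detail = requests.get(f"{base}/{sha}", headers=headers).json()
    py_files = [f["filename"] for f in detail.get("files", []) if f["filename"].endswith(".py")]
    print(sha[:7], c["commit"]["author"]["date"], py_files)
```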
REGIONAL GAPS ARE LARGE
The team used a specially trained AI model to identify whether blocks of code were AI-generated, for instance via ChatGPT or GitHub Copilot.
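The press release does not describe that classifier’s architecture or training data. Purely as an illustration of the general idea, the toy sketch below trains a simple text classifier (scikit-learn, character n-grams) on labeled code snippets; the study’s actual detector is almost certainly far more sophisticated.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = AI-generated, 0 = human-written. A real detector
# would need a large labeled corpus and a much stronger model.
snippets = [
    'def add(a, b):\n    """Return the sum of a and b."""\n    return a + b',
    "def f(x,y): return x+y  # quick hack",
    'def mean(values):\n    """Return the arithmetic mean of values."""\n    return sum(values) / len(values)',
    "tmp=[]\nfor i in range(10): tmp.append(i*i)",
]
labels = [1, 0, 1, 0]

# Character n-grams pick up stylistic cues such as docstrings, spacing, and naming.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(snippets, labels)

new_block = 'def multiply(a, b):\n    """Return the product of a and b."""\n    return a * b'
print("Predicted label:", detector.predict([new_block])[0])
```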
“The results show extremely rapid diffusion,” explains Frank Neffke, who leads the Transforming Economies group at CSH. “In the U.S., AI-assisted coding jumped from around 5% in 2022 to nearly 30% in the last quarter of 2024.”
At the same time, the study found wide differences across countries. “While the share of AI-supported code is highest in the U.S. at 29%, Germany reaches 23% and France 24%, followed by India at 20%, which has been catching up fast,” he says. Russia (15%) and China (12%) still lagged behind at the end of the study period.
“It's no surprise the U.S. leads – that's where the leading LLMs come from. Users in China and Russia have faced barriers to accessing these models, blocked by their own governments or by the providers themselves, though VPN workarounds exist. Recent domestic Chinese breakthroughs like DeepSeek, released after our data ends in early 2025, suggest this gap may close quickly,” says Johannes Wachs, a faculty member at CSH and associate professor at Corvinus University of Budapest.
EXPERIENCED DEVELOPERS BENEFIT MOST
The study shows that the use of generative AI increased programmers’ productivity by 3.6% by the end of 2024. “That may sound modest, but at the scale of the global software industry it represents a sizeable gain,” says Neffke, who is also a professor at Interdisciplinary Transformation University Austria (IT:U).
The study finds no differences in AI usage between women and men. By contrast, experience levels matter: less experienced programmers use generative AI in 37% of their code, compared to just 27% for experienced programmers. Despite this, the productivity gains the study documents are driven exclusively by experienced users. "Beginners hardly benefit at all," says Daniotti. Generative AI therefore does not automatically level the playing field; it can widen existing gaps.
In addition, experienced software developers experiment more with new libraries and unusual combinations of existing software tools. "This suggests that AI does not only accelerate routine tasks, but also speeds up learning, helping experienced programmers widen their capabilities and more easily venture into new domains of software development," says Wachs.
ECONOMIC GAINS
What does all of this mean for the economy? “The U.S. spends an estimated $637 billion to $1.06 trillion annually in wages on programming tasks, according to an analysis of about 900 different occupations,” says co-author Xiangnan Feng from CSH. If 29% of code is AI-assisted and productivity rises by 3.6%, that adds between $23 billion and $38 billion in value each year. “This is likely a conservative estimate,” Neffke points out. “The economic impact of generative AI in software development was already substantial at the end of 2024 and is likely to have increased further since our analysis.”
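A quick back-of-the-envelope check shows how those figures fit together: applying the 3.6% gain directly to the quoted wage bill reproduces the $23 billion to $38 billion range.

```python
# Back-of-the-envelope check of the quoted figures (values in billions of USD).
wage_bill_low, wage_bill_high = 637, 1060  # estimated annual U.S. wages on programming tasks
productivity_gain = 0.036                  # 3.6% productivity increase attributed to generative AI

low = wage_bill_low * productivity_gain    # ~22.9
high = wage_bill_high * productivity_gain  # ~38.2
print(f"Estimated annual value added: ${low:.0f}B to ${high:.0f}B")
```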
LOOKING AHEAD
Software development is undergoing profound transformation. AI is becoming central to digital infrastructure, boosting productivity and fostering innovation – but mainly for people who already have substantial work experience.
“For businesses, policymakers, and educational institutes, the key question is not whether AI will be used, but how to make its benefits accessible without reinforcing inequalities,” says Wachs. “When even a car has essentially become a software product, we need to understand the hurdles to AI adoption – at the company, regional, and national levels – as quickly as possible,” Neffke adds.
In April 2025, OpenAI’s popular ChatGPT hit a milestone of a billion weekly active users, as artificial intelligence continued its explosion in popularity.
But that popularity has a dark side. Biases in AI models and algorithms can actively harm users and promote social injustice. Documented biases have led to patients receiving different medical treatments based on their demographics, and to corporate hiring tools that discriminate against female and Black candidates.
New research from Texas McCombs suggests a previously unexplored source of AI bias: complexity. It also points to some ways to correct for it.
“There’s a complex set of issues that the algorithm has to deal with, and it’s infeasible to deal with those issues well,” says Hüseyin Tanriverdi, associate professor of information, risk, and operations management. “Bias could be an artifact of that complexity rather than other explanations that people have offered.”
With John-Patrick Akinyemi, a McCombs Ph.D. candidate in IROM, Tanriverdi studied a set of 363 algorithms that researchers and journalists had identified as biased. The algorithms came from a repository called AI, Algorithmic, and Automation Incidents and Controversies.
The researchers compared each problematic algorithm with one that was similar in nature but had not been called out for bias. They examined not only the algorithms but also the organizations that created and used them.
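The article does not give the authors’ statistical models, but a matched case-control comparison of this kind might look roughly like the sketch below; the file name and column names are hypothetical, not from the study.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical dataset: one row per algorithm, flagged as a bias incident (1)
# or a matched, non-flagged counterpart (0), with coded attributes of the task
# and the organization behind it. File and column names are illustrative only.
df = pd.read_csv("algorithms.csv")

# Example question: do bias incidents occur more often when the task lacks an
# established ground truth?
table = pd.crosstab(df["bias_incident"], df["has_ground_truth"])
chi2, p, _, _ = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```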
Prior research has assumed that bias can be reduced by making algorithms more accurate. But that assumption, Tanriverdi found, did not tell the whole story. He found three additional factors, all related to a similar problem: not properly modeling for complexity.
Ground truth. Some algorithms are asked to make decisions when there’s no established ground truth: the reference against which the algorithm’s outcomes are evaluated. An algorithm might be asked to guess the age of a bone from an X-ray image, even though in medical practice, there’s no established way for doctors to do so.
In other cases, AI may mistakenly treat opinions as objective truths — for example, when social media users are evenly split on whether a post constitutes hate speech or protected free speech.
AI should only automate decisions for which ground truth is clear, Tanriverdi says. “If there is not a well-established ground truth, then the likelihood that bias will emerge significantly increases.”
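One rough way to operationalize that advice, sketched below with made-up data and an arbitrary threshold, is to measure how well human annotators agree before treating their labels as ground truth.

```python
from collections import Counter

# Made-up annotations: three human raters label each post. If raters disagree
# widely, the labels are contested opinions rather than ground truth, and
# automating the decision invites bias. The 0.8 threshold is arbitrary.
annotations = {
    "post_1": ["hate_speech", "hate_speech", "hate_speech"],
    "post_2": ["hate_speech", "free_speech", "free_speech"],
    "post_3": ["free_speech", "hate_speech", "hate_speech"],
}

def majority_agreement(labels):
    # Fraction of raters who chose the most common label for this item.
    return Counter(labels).most_common(1)[0][1] / len(labels)

scores = {item: majority_agreement(labels) for item, labels in annotations.items()}
mean_agreement = sum(scores.values()) / len(scores)
print(scores)
verdict = "labels look usable as ground truth" if mean_agreement >= 0.8 else "ground truth is contested"
print(f"Mean agreement: {mean_agreement:.2f} -> {verdict}")
```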
Real-world complexity. AI models inevitably simplify the situations they describe. Problems can arise when they miss important components of reality.
Tanriverdi points to a case in which Arkansas replaced home visits by nurses with automated rulings on Medicaid benefits. It had the effect of cutting off disabled people from assistance with eating and showering.
“If a nurse goes and walks around to the house, they will be able to understand more about what kind of support this person needs,” he says. “But algorithms were using only a subset of those variables, because data was not available on everything.
“Because of omission of the relevant variables in the model, that model was no longer a good enough representation of reality.”
Stakeholder involvement. When a model serving a diverse population is designed mostly by members of a single demographic, it becomes more susceptible to bias. One way to counter this risk is to ensure that all stakeholder groups have a voice in the development process.
By involving stakeholders who may have conflicting goals and expectations, an organization can determine whether it’s possible to meet them all. If it’s not, Tanriverdi says, “It may be feasible to reach compromise solutions that everyone is OK with.”
The research concludes that taming AI bias involves much more than making algorithms more accurate. Developers need to open up their black boxes to account for real-world complexities, input from diverse groups, and ground truths.
“The factors we focus on have a direct effect on the fairness outcome,” Tanriverdi says. “These are the missing pieces that data scientists seem to be ignoring.”
This survey study found that artificial intelligence (AI) use was significantly associated with greater depressive symptoms, with the magnitude of differences varying by age group. Further work is needed to understand whether these associations are causal and to explain the heterogeneous effects.
Corresponding Author: To contact the corresponding author, Roy H. Perlis, MD, MSc, email rperlis@mgb.org.
Editor’s Note: Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.