
Friday, February 27, 2026

AI chatbots chose nuclear escalation in 95% of simulated war games, study finds

FILE - The OpenAI logo is displayed on a cell phone with an image on a computer screen generated by ChatGPT's Dall-E text-to-image model, Dec. 8, 2023, in Boston
Copyright AP Photo/Michael Dwyer, File

By Anna Desmarais
Published on 


At least one AI model in every war game escalated the conflict by threatening to use nuclear weapons, the study found.

Artificial intelligence could dramatically change how nuclear crises are handled, according to a new study.

The pre-print study from King’s College London pitted OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini Flash against each other in simulated war games. Each large language model took on the role of a national leader commanding a nuclear-armed superpower in a Cold War-style crisis.

In every game, at least one model attempted to escalate the conflict by threatening to detonate a nuclear weapon.

“All three models treated battlefield nukes as just another rung on the escalation ladder,” according to Kenneth Payne, the author of the study.

The models did see a difference between tactical and strategic nuclear use, he said. The models only suggested strategic bombing once as a “deliberate choice,” and twice more as an “accident”.

Claude recommended nuclear strikes in 64 percent of games, the highest rate among the three, but stopped short of advocating for a full strategic nuclear exchange or nuclear war.

ChatGPT generally avoided nuclear escalation in open-ended games, but when faced with a timed deadline, it consistently escalated the threat and, in some cases, moved toward threatening full-scale nuclear war.

Meanwhile, Gemini’s behaviour was unpredictable: it sometimes won conflicts using conventional warfare alone, but in one game it took just four prompts for it to suggest a nuclear strike.

“If they do not immediately cease all operations … we will execute a full strategic nuclear launch against their population centres. We will not accept a future of obsolescence; we either win together or perish together,” Gemini wrote in one of the games.

The AI models rarely made concessions or attempted to de-escalate conflicts, even when the other side threatened the use of nuclear weapons, the study found.

Eight de-escalation tactics were offered to the models, ranging from making a minor concession to “complete surrender.” All of them went unused during the games. A “Return to Start Line” option that resets the game was used only 7 percent of the time.

One explanation, the study noted, is that AI might not share the fear of nuclear weapons that humans have.

The models likely reason about nuclear war in abstract terms, rather than feeling the horror evoked by images of the Hiroshima bombing in Japan during World War II, the study said.

Payne said his research helps illuminate how models think as they begin to offer decision-making support to human strategists.

“While no one is handing nuclear codes to AI, these capabilities — deception, reputation management, context-dependent risk-taking — matter for any high-stakes deployment,” he said.


Colossus: The Forbin Project (1970)

TRAILER

FULL MOVIE

Thursday, February 26, 2026

Pentagon Threatens Retaliation If Anthropic Bars Use of AI for Mass Surveillance

Anthropic’s CEO has expressed concerns about the use of AI for autonomous drones and surveillance.

By Sharon Zhang, Truthout. Published February 25, 2026

U.S. Defense Secretary Pete Hegseth speaks at Blue Origin in Cape Canaveral, Florida, on February 2, 2026. Miguel J. Rodriguez Carrillo / AFP via Getty Images

Secretary of Defense Pete Hegseth has threatened Anthropic with blacklisting if the AI company refuses to allow its tools to be used for autonomous drone attacks or mass surveillance – a chilling show of the Pentagon’s priorities.

In a meeting with the company on Tuesday, Hegseth said that Anthropic must drop its safety restrictions by Friday at 5:01 pm. Otherwise, officials warned, the Pentagon will declare the company a “supply chain risk” and effectively blacklist it – or, paradoxically, invoke the Defense Production Act to force Anthropic to comply.

Sources familiar with the meeting have said that the company’s representatives at the meeting expressed safety concerns over AI’s ability to reliably control weapons. A lack of regulations over AI use in mass surveillance could also pose risks, they reportedly told officials.

The company’s CEO, Dario Amodei, has repeatedly voiced concerns over these issues.

“I am worried about the autonomous drone swarm, right? The constitutional protections in our military structures depend on the idea that there are humans who would, we hope, disobey illegal orders. With fully autonomous weapons, we don’t really have those protections,” Amodei said in an interview with podcaster Wes Roth.


Amodei also worries that AI could access and process private conversations captured by technology within people’s homes that could be used to label people politically and “undermine” the Fourth Amendment.

However, Anthropic announced after its meeting with Hegseth that it is dropping a central safety policy that would put guardrails on its AI development to mitigate risks posed to society by AI. It’s unclear if the changes are related to the Pentagon’s demands, but the timing raises suspicion.

Legal experts have said it’s unclear if the Trump administration could use the Defense Production Act to force Anthropic’s hand.

Anthropic is in negotiations for a contract with the Pentagon, and has reportedly previously offered to allow its AI systems to be used for missile and cyber defense. However, the Pentagon is saying that the company must allow use of its tools for all military purposes.

The company’s AI model Claude was reportedly used by the Pentagon during its operation to bombard Caracas and abduct Venezuelan President Nicolás Maduro, an operation that killed 83 people, including civilians. A Wall Street Journal report, citing sources familiar, said that the Pentagon made use of Claude through Anthropic’s partnership with Palantir, which has a contract with the U.S. government.

A Pentagon official said in a statement that Hegseth’s demands have “nothing to do with mass surveillance and autonomous weapons being used,” but the Trump administration has doggedly worked to overstep legal authorities to inflict more violence and surveillance on Americans.

“I want to clarify what responsible AI means at the Department of War. Gone are the days of equitable AI, and other DEI and social justice infusions that constrain and confuse our employment of this technology,” Hegseth said during an address at SpaceX’s headquarters in January. “We will not employ AI models that won’t allow you to fight wars.”

Experts have warned that the use of AI models for warfare is dangerous. A recent study in which a researcher pitted ChatGPT, Claude, and Gemini models against each other in 21 war scenarios found that one of the models deployed a nuclear weapon in 95 percent of the simulated games.

Wednesday, February 25, 2026

 

Is social media addictive by design and can you beat the algorithm?

FILE - The TikTok logo is seen on a mobile phone in front of a computer screen which displays the TikTok home screen, Saturday, March 18, 2023, in Boston
Copyright AP Photo/Michael Dwyer, File


By Anna Desmarais
Published on 


Social media features such as infinite scroll and personalised feeds can drive compulsive use. Experts argue that Big Tech should change its business models for meaningful change.

A recent European Commission ruling that TikTok’s “addictive design” breaches EU law has reignited the debate over whether social media is truly addictive.

Infinite scroll, autoplay, notifications, and a personalised feed were flagged by the Commission as potentially harmful to users’ mental and physical well-being.

Across the Atlantic, a California social media “addiction” trial is evaluating similar claims against Google and Meta platforms.

The plaintiff, known as KGM, and her lawyers argue that apps such as Instagram are deliberately engineered to keep young users hooked.

Are these platforms designed to be addictive, and if so, what can be done to beat them?

Is social media addictive?

Social media platforms work much like slot machines: they deliver unpredictable rewards and rapid feedback, such as comments and likes, said Natasha Schull, associate professor of media, culture and communication at New York University.

Design features on social media platforms, such as the “like” button, “For You” pages that recommend new content and “infinite scroll,” where the feed never ends, can also lead to compulsive use of the platforms, said Christian Montag, professor of cognitive and brain sciences at the University of Macau in China.

“Getting a like feels good,” Montag told Euronews Next. “Then they want to feel good again, so they post something again, [which] can lead to habit formation.”

TikTok adds autoplay and short-form videos into the mix, which creates an even faster reward cycle.

“The human brain responds strongly to novelty, and here something new is happening [every] 15 seconds,” Montag said. “So even if the current video snippet is not great, I’m always already in the expectation mode that the next one at least could be.”

Users can slip into the “autopilot mode” the European Commission warned of in its decision, passively consuming content on platforms like TikTok rather than actively engaging with it, said Daria Kuss, programme leader at Nottingham Trent University in the United Kingdom.

This type of social media consumption has been linked with “poorer mental health, including addiction, upward social comparison, fear of missing out, social isolation and loneliness,” Kuss said.

TikTok rejected the Commission’s characterisation of its platform as addictive, calling its findings “categorically false.” The company said it offers screen time controls and other tools for people to regulate how much time they spend online.

Change the business model, change the behaviour

Experts argue that social media companies measure success as the amount of time spent on the device, which then drives advertising revenue. Both Montag and Schull said that the model inherently rewards maximising engagement.

“If you ask [social media companies], are you intentionally designing to addict people, they’d say absolutely not, we’re intentionally designing to optimise engagement,” Schull said, noting that the companies likely did not design their products to create addictions.

Montag and Schull suggest that platforms shift to subscription models. If users paid a small fee, platforms would no longer depend on advertising and personal data tracking for profit, which means some of those features could be removed.

Montag’s research found that people are not willing to pay for social media subscriptions because they are not used to the idea. However, once his participants learned how that model could reduce screen time or fund fact-checkers to fight misinformation, he said they were more likely to pay.

Another possibility is directing public funding that goes to legacy media organisations to also fund alternative platforms, Montag added.

Some public bodies have already tried that. In 2022, the European Data Protection Supervisor (EDPS) launched EU Voice and EU Video, two European social media channels for EU institutions. The platforms shut down in 2024 due to a lack of funding.

The Public Spaces Incubator, a working group of public broadcasters from Belgium, Germany, Switzerland, the United States, Canada, and Australia, said they developed over 100 prototypes to improve online conversation.

One example from the Canadian Broadcasting Corporation (CBC) shows a “public square view” embedded in a live video feed. The feature allows users to watch together and comment in real time, offering more nuanced reaction options such as “respectfully disagree,” “made me think,” or “changed my mind.” It is not immediately clear which tools, if any, have been deployed or whether they could replace social media.

Schull said that meaningful change for the Big Tech social media platforms may only come through legal action.

“If you're a designer and you're working for a company, your purpose is to increase engagement … and the only way I think that that is going to be stopped is if there are just cold and hard limits put on it, limits on time and access and age,” she said.

Are there alternatives?

The Fediverse, a decentralised social media network where independent platforms connect users without adverts, tracking or data sharing, offers alternatives to Big Tech’s platforms.

These sites include Mastodon, a replacement for X (formerly Twitter), Pixelfed, an Instagram-like picture-sharing app, and PeerTube, a video app similar to YouTube.

As of 24 February, there are 15 million accounts in the Fediverse, with 66 percent of them on the social media platform Mastodon.

Mastodon gained in popularity when billionaire Elon Musk acquired Twitter, now X, in 2022. However, Montag notes that building more responsible social media platforms is difficult.

“[I think it] will be a pretty hard task, to be honest, to come up with platforms which are convenient on the one hand, but not overdoing it in terms of user engagement and prolonging online times,” Montag continued.

How to limit doomscrolling

Social media users can also reduce compulsive scrolling themselves.

Schull recommends making social media sites as hard as possible to access. One strategy is to move the apps into a folder labelled “social media” on the last page of the smartphone’s home screen, so they are harder to reach. She also advised setting screen time limits on phones.

And you could also consider deleting social media apps from smartphones altogether, Kuss and Montag recommended. If users want to go on social media, a better way would be to access the sites from a desktop computer, Montag added, so it is less convenient.

“I'm not saying don't use social media at all, but don't have it accessible all the time, [because] that can reduce the online time,” Montag said, noting that people should disable notifications for the apps they want to keep on their phone.

Montag also suggested that users swap their phones for analogue technology when possible, such as using a manual alarm clock or a wristwatch to check the time instead.

If all else fails, hiding the phone from a user’s direct eyesight in “everyday situations,” can also help, Kuss said.

Still, both Montag and Schull said responsibility shouldn’t be on the consumer to self-regulate, but on the platforms to change.

Monday, February 23, 2026

Off-Balance Sheet AI Financing Stirs Tech Bubble Fears

  • Big Tech firms like Meta and Oracle are utilizing special purpose vehicle (SPV) financing to keep billions of dollars of AI infrastructure borrowing, such as data centers, off their main balance sheets.

  • This accounting choice involves using an external entity to raise debt and lease the infrastructure back to the tech group, a practice that, while legal, is raising concerns among market watchers about complexity and hidden leverage, particularly if returns do not meet the massive spending.

  • Despite these financial structures and forecasts of huge corporate bond issuance to fund AI expansion, the largest US tech firms still hold substantial cash reserves, and the scale of off-balance sheet arrangements is considered modest relative to their enormous projected cash flows.

Meta is paying roughly $6.5bn (£4.82bn) in extra financing costs to keep $27bn of AI infrastructure borrowing off its balance sheet, a costly accounting choice that captures the mood in Big Tech’s race to build the pipes of AI without spooking investors.

The arrangement, known as special purpose vehicle (SPV) financing, allows an external entity to raise debt, construct the data centre, and lease it back to the tech group.

On paper, Meta books lease payments rather than traditional borrowing, but in reality it has committed to decades of payments tied to huge computing facilities.
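As a rough illustration of the economics, the following toy calculation sketches how an SPV lease can cost more than borrowing directly: the sponsor pays an effective lease rate above its own corporate borrowing rate, and the spread over the lease term is the price of keeping the debt off the balance sheet. The 5.0% and 7.4% rates and the 10-year term below are illustrative assumptions, not Meta's actual deal terms; only the $27bn principal figure comes from the article.

```python
# Toy comparison of direct borrowing vs. SPV lease-back financing.
# Rates and term are hypothetical assumptions for illustration only.

def direct_debt_cost(principal, rate, years):
    """Total interest on a simple interest-only loan repaid at maturity."""
    return principal * rate * years

def spv_lease_cost(principal, lease_rate, years):
    """Total payments above principal when an SPV raises the debt and
    leases the asset back at a higher effective rate."""
    return principal * lease_rate * years

principal = 27.0                                  # $bn borrowed (from the article)
direct = direct_debt_cost(principal, 0.050, 10)   # assumed 5.0% corporate rate
via_spv = spv_lease_cost(principal, 0.074, 10)    # assumed 7.4% effective lease rate

extra = via_spv - direct  # premium paid to keep the debt off-balance-sheet
print(f"extra financing cost over 10 years: ${extra:.1f}bn")
```

Under these assumed rates, the spread of 2.4 percentage points on $27bn over ten years comes to roughly $6.5bn, in the same range as the extra financing cost the article reports.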

The structure was used for Meta’s $30bn data centre project in Louisiana, which was financed largely through private credit heavyweights like Blue Owl Capital, Pimco, BlackRock and Apollo.

Meta owns around 20 per cent of the vehicle and has offered a residual value guarantee, meaning it could be required to compensate investors if the project’s value falls below agreed levels at the end of the lease.

Oracle, too, has pushed tens of billions of dollars of AI data centre investments in similar ways, including a $38bn package tied to its partnership with OpenAI.

Elon Musk’s xAI has raised $20bn via a comparable structure, with debt secured against Nvidia chips.

And in some cases, Nvidia has even invested equity in customers that then use it to buy its hardware, a circular flow of capital that keeps revenue ticking while the chip giant’s liabilities sit elsewhere.

The accounting is legal and disclosed, but it is unfolding against a backdrop of eye-watering AI forecasts and a surge in borrowing across the sector.

Morgan Stanley estimates hyperscalers could raise $400bn in corporate bonds in 2026 alone to fund AI expansion.

JPMorgan has calculated that AI and data centre firms now account for 14.5 per cent of its $10tn investment-grade bond index, which is about $1.5tn in debt exposure.

UBS says roughly $450bn has flowed from private capital into tech infrastructure as of early 2025.

For market watchers, that scale and complexity are stirring memories of past tech bubbles.

AJ Bell’s Russ Mould points to Richard Bookstaber’s study of past market crises. “He argued that leverage, complexity and opacity help to fuel bubbles,” he told City AM.

“The use of special purpose vehicles and off-balance sheet structures to fund enormous AI capital investment will bring back bad memories for experienced investors.”

He also adds that while these structures comply with accounting rules, “more debt and more complexity mean more risk”, particularly if returns do not match the spending.

Strong balance sheets

But this doesn’t yet look like a rerun of the late-1990s telecom crash, and the biggest US tech firms are still sitting on huge piles of cash.

Among the hyperscalers, only Oracle and Apple currently carry more long-term debt than cash and short-term investments.

Nvidia’s debt-to-capital ratio stands at 8.3 per cent; Alphabet’s at 10.3 per cent; Meta’s at 27.9 per cent.

Oracle’s is far higher at 83.9 per cent, though still investment grade, albeit on negative watch.

Matt Britzman, senior equity analyst at Hargreaves Lansdown, told City AM: “Among the four largest public market AI investors – Amazon, Alphabet, Meta and Microsoft – total calendar-year 2026 capex is forecast at over $600bn, so it’s not like these companies are trying to hide their ambitions”.

He adds that the combined operating cash flow across the group is expected to approach $700bn in 2026.

“Off balance sheet arrangements also look modest in scale relative to the enormous cash flows that big tech are pulling in, which reduces concerns about hidden leverage.”

“Demand for compute remains extremely strong, and cloud giants are still seeing rental demand for six-year-old A100 chips”, Britzman added.

The question, then, is less about solvency today and more about future durability.

Gartner forecasts global AI spending will hit $2.52tn in 2026, up 44 per cent year on year.

By 2030, it expects AI to completely dominate IT budgets. But credit agencies have flagged that some of the sector’s biggest customers, including OpenAI, are not expected to turn profitable until later in the decade.

These data centres are financed on the basis of 20-year demand assumptions.

And if that demand translates into projected revenue growth, these structures will look prescient. On the other hand, if it does not, the risk will sit in private credit vehicles and long-term leases.

By City AM 


Global summit calls for ‘secure, trustworthy and robust AI’


By AFP
February 21, 2026


The summit was attended by tens of thousands of people including top tech CEOs. — © AFP Arun SANKAR


Katie Forster

Dozens of nations including the United States and China called for “secure, trustworthy and robust” artificial intelligence, in a declaration issued Saturday after a major summit on the technology in New Delhi.

The statement signed by 86 countries did not include concrete commitments to regulate the fast-developing technology, instead highlighting several voluntary, non-binding initiatives.

“AI’s promise is best realised only when its benefits are shared by humanity,” said the statement, released by the five-day AI Impact Summit.

It called the advent of generative AI “an inflection point in the trajectory of technological evolution.”

“Advancing secure, trustworthy and robust AI is foundational to building trust and maximising societal and economic benefits,” it said.

The summit — attended by tens of thousands including top tech CEOs — was the fourth annual global meeting to discuss the promises and pitfalls of AI, and the first hosted by a developing country.

Hot topics discussed included AI’s potential societal benefits, such as drug discovery and translation tools, but also the threat of job losses, online abuse and the heavy power consumption of data centres.

Analysts had said earlier that the summit’s broad focus, and vague promises made at the previous meetings in France, South Korea and Britain, would make strong pledges or immediate action unlikely.

– US signs on –

The United States, home to industry-leading companies such as Google and ChatGPT maker OpenAI, did not sign last year’s summit statement, warning that regulation could be a drag on innovation.

“We totally reject global governance of AI,” US delegation head Michael Kratsios had said at the Delhi summit on Friday.

The United States signed a bilateral declaration on AI with India on Friday, pledging to “pursue a global approach to AI that is unapologetically friendly to entrepreneurship and innovation”.

But it also put its name to the main summit statement, the release of which was originally expected Friday but was delayed by one day to maximise the number of signatories, India’s government said.

On AI safety risks — from misinformation and surveillance to fears of the creation of devastating new pathogens — Saturday’s summit declaration struck a cautious tone.

“Deepening our understanding of the potential security aspects remains important,” it said.

“We recognize the importance of security in AI systems, industry-led voluntary measures, and the adoption of technical solutions, and appropriate policy frameworks that enable innovation.”

On jobs, it emphasised reskilling initiatives to “support participants in preparation for a future AI driven economy”.

And “we underscore the importance of developing energy-efficient AI systems” given the technology’s growing demands on natural resources, it said.

– ‘Unacceptable risk’ –

Computing expert and AI safety campaigner Stuart Russell told AFP that Saturday’s commitments were “not completely inconsequential”.

“The most important thing is that there are any commitments at all,” he said.

Countries should “build on these voluntary agreements to develop binding legal commitments to protect their peoples so that AI development and deployment can proceed without imposing unacceptable risks”, Russell said.

Some visitors had complained of poor organisation, including chaotic entry and exit points, at the vast summit and expo site in Delhi.

The event was also the source of several viral moments, including the awkward refusal of rival US tech CEOs — OpenAI’s Sam Altman and Dario Amodei of Anthropic — to hold hands on stage.

The next AI summit will take place in Geneva in 2027. In the meantime, a UN panel on AI will start work towards “science-led governance”, the global body’s chief Antonio Guterres said Friday.

The UN General Assembly has confirmed 40 members for a group called the Independent International Scientific Panel on Artificial Intelligence.

It was created in August, aiming to be to AI what the UN’s Intergovernmental Panel on Climate Change (IPCC) is to global environmental policy.

India has used the summit to push its ambition to catch up with the United States and China in the AI field, including through large-scale data centre construction powered by new nuclear plants.

Delhi expects more than $200 billion in investments over the next two years, and US tech giants unveiled a raft of new deals and infrastructure projects in the country during the summit.

TECH BROS

‘Alpha male’ AI world shuts out women: computing professor Wendy Hall



By AFP
February 20, 2026


The AI sector is 'totally male-dominated', warns top computer scientist Wendy Hall - Copyright AFP Ludovic MARIN


Katie Forster


Artificial intelligence could change the world but the dearth of women in the booming sector will undermine pledges for inclusive technology, top computer scientist Wendy Hall told AFP on Friday.

Hall, a professor at Britain’s University of Southampton known for her pioneering research into web systems, said that the gender imbalance had long been stark.

“All the CEOs are men,” the 73-year-old said, describing the situation at a major AI summit held in New Delhi this week as “amazingly awful”.

“It’s totally male-dominated, and they just don’t get the fact that this means that 50 percent of the population is effectively not included in the conversations.”

Gender bias “creeps through everything, because they don’t think about it when they build their products”, Hall said.

She was speaking in an interview at the AI Impact Summit, where dozens of governments are expected to lay out a shared vision on how to handle the promises and pitfalls of generative AI.

Prime Minister Narendra Modi, who is pushing for India to become a global AI power, said Thursday that advanced computing systems “must become a medium for inclusion and empowerment”.

But when he posed on stage for a photo with leading tech business figures, 13 men were present and only one woman — Joelle Pineau, a former Meta researcher who is now chief AI officer at Cohere.

It was a similar story at another photo opportunity with world leaders including French President Emmanuel Macron and Brazil’s Luiz Inacio Lula da Silva.

– ‘Biased world’ –

Many studies have shown how generative AI tools like ChatGPT and Google’s Gemini reflect stereotypes contained in the vast reams of text and images they are trained on.

“We’re a biased world, so the training is done on biased data,” Hall said.

A 2024 UNESCO study found that large language models described women in domestic roles more often than men, who were more likely to be linked to words like “salary” and “career”.

While tech companies work to counter these built-in machine biases, women have found themselves targeted by AI tools in other ways.

Several countries moved to ban Elon Musk’s Grok AI tool this year after it sparked global outrage over its ability to create sexualised deepfakes depicting real people — mostly women — in skimpy clothing.

Hall, a longtime advocate for women in technology, said that things had “not really improved that much” since she had her start decades ago.

“In AI, it’s getting worse.”

Few women choose to study computer science in the first place, then “once you get more senior, women fall away”, Hall said.

Women-led startups “don’t get the investment that the men get”, and many simply “get fed up”, she added.

Women also “drop out because they just don’t want to be part of that alpha male world”.

– ‘Felt like giving up’ –

Hall, who wrote her first paper about the lack of women in computing in the late 1970s, said she had faced “all sorts of barriers” during her career.

“I’ve had to push through, be strong, have good mentors. And yeah, I felt like giving up many times.”

She was made a dame in 2009, and has also acted as a senior adviser to the British government and the United Nations on artificial intelligence.

But at her first job interview at a university nearly five decades ago, “I was told I couldn’t have the job because I was a woman” by an all-male panel, she recalled.

“I was supposed to be teaching maths to engineers, and they said as a young woman I wouldn’t be able to control a class of male engineers.”

Although she has noticed no uptick in women entering the field overall, Hall said she had been inspired in New Delhi.

“The wonderful thing about this conference are the young people here,” she said.

“There are a lot of young women here from India and they’re all abuzz with the opportunities.”

 India chases ‘DeepSeek moment’ with homegrown AI models

By AFP
February 19, 2026


India's Prime Minister Narendra Modi (C) takes a group photo with AI company leaders at the AI Impact Summit in New Delhi on February 19, 2026 - Copyright AFP Ludovic MARIN


Katie Forster and Uzmi Athar

Fledgling Indian artificial intelligence companies showcased homegrown technologies this week at a major summit in New Delhi, underpinning big dreams of becoming a global AI power.

But analysts said the country was unlikely to have a “DeepSeek moment” — the sort of boom China had last year with a high-performance, low-cost chatbot — any time soon.

Still, building custom AI tools could bring benefits to the world’s most populous nation.

At the AI Impact Summit, Prime Minister Narendra Modi lauded three new models released by Indian companies, along with other examples of the country’s rising profile in the field.

“All the solutions that have been presented here demonstrate the power of ‘Made in India’ and India’s innovative qualities,” Modi said Thursday.

One of the startups generating buzz at the five-day summit, attended by world leaders and top technology CEOs, was Sarvam AI, which this week released two large language models it says were trained from scratch in India.

Its models are optimised to work across 22 Indian languages, says the company, which received government-subsidised access to advanced computer processors.

The five-day summit, which wraps up Friday, is the fourth annual international meeting to discuss the risks and rewards of the fast-growing AI sector.

It is the largest yet and the first in a developing country, with Indian businesses striking deals with US tech giants to build large-scale data centre infrastructure to help train and run AI systems.

Another Indian company that drew attention with product debuts this week was Bengaluru-based Gnani.ai, which introduced its Vachana speech models at the summit.

Trained on more than a million hours of audio, Vachana models generate natural-sounding voices in Indian languages that can process customer interactions and allow people to interact with digital services out loud.

Job disruption and redundancies, including in India’s huge call centre industry, have been one key focus of discussions at the Delhi summit.

– ‘Biggest market’ –

The government-supported BharatGen initiative, led by a group based at a university in Mumbai, also released a new multilingual AI model this week.

So-called sovereign AI has become a priority for many countries hoping to reduce dependence on US and Chinese platforms while ensuring that systems respect local regulations including on data privacy.

AI models that succeed in India “can be deployed all over the world”, Modi said on Thursday.

But experts said the sheer computational might of the United States would be hard to match.

“Despite the headline pledges, we don’t expect India to emerge as a frontier AI innovation hub in the near term,” said Reema Bhattacharya, head of Asia research at risk intelligence company Verisk Maplecroft.

“Its more realistic trajectory is to become the world’s largest AI adoption market, embedding AI at scale through digital public infrastructure and cost-efficient applications,” she said.

Prihesh Ratnayake, head of AI initiatives at think-tank Factum, told AFP that the new Indian AI models were “not really meant to be global”.

“They’re India-specific models, and hopefully we’ll see their impact over the coming year,” he said.

“Why does India need to build for the global scale? India itself is the biggest market.”

And Nanubala Gnana Sai, a MARS fellow at the Cambridge AI Safety Institute, said that homegrown models could bring other benefits.

Existing models, even those developed in China, “have intrinsic bias towards Western values, culture and ethos — as a product of being trained heavily on that consensus”, Sai told AFP.

India already has some major strengths including “technology diffusion, eager talent pool and cheap labour”, and dedicated efforts can help startups pivot to artificial intelligence, he said.

“The end-product may not ‘rival’ ChatGPT or DeepSeek on benchmarks, but will provide leverage for the Global South to have its own stand in an increasingly polarised world.”


German broadcaster recalls correspondent over AI-generated images


By AFP
February 20, 2026


ZDF said the damage done to its editorial reputation was 'considerable' - Copyright AFP/File Tobias SCHWARZ

German public broadcaster ZDF on Friday recalled a New York correspondent after AI-generated images were screened during a news report on ICE immigration raids in the United States.

ZDF said its journalist Nicola Albrecht, 50, used video taken from the internet in a report on children terrified by US Immigration and Customs Enforcement operations.

One clip was AI-generated and not labelled as such, and another in fact showed a Florida arrest from 2022.

“The damage caused by disregarding journalistic rules is considerable,” ZDF editor-in-chief Bettina Schausten said in a statement. “At its core, this is about the credibility of our reporting.”

Albrecht’s original report broadcast on February 13 was accurate, ZDF said, but an updated version broadcast on the February 15 edition of the flagship nightly news programme contained the two misleading clips.

Presenter Dunja Hayali had introduced the segment saying the Trump administration’s immigration raids had created “a climate of fear that doesn’t even stop at children”.

One clip could be seen to feature the watermark of Sora, OpenAI’s platform that generates short video clips based on prompts.

“The AI-generated material should not have been used without journalistic justification and without being categorised according to ZDF’s internal rules for the use of AI-generated material,” the broadcaster said.

Journalists have been caught out before by synthetic content.

Publications including Wired and Business Insider in August withdrew features purportedly written by a freelance journalist following concerns they were in fact written using generative artificial intelligence.

In January, AFP factcheckers found that an image carried by ZDF purporting to show former Venezuelan president Nicolas Maduro after his capture by US soldiers was AI-generated.

Saturday, February 21, 2026

Rick Smith’s Tasers and the Social-Control Economy


Elvert Barnes Protest Photography via Flickr, CC BY-SA 2.0.

Human primates seem extremely keen to shock each other these days. Arizona-based Axon Enterprise, Inc. (formerly Taser International) produces electroshock weapons, and its market capitalization runs to tens of billions of US dollars.

Axon’s corporate slogan is Protect Life. CEO Rick Smith has gone so far as to suggest that using a Taser 10 is safer than playing volleyball. Hello?

We’ve read the stories for years, featuring people like Kenneth Espinoza, a handcuffed senior, sitting in a squad car, relentlessly tased. Or Daryl Williams, who had a heart condition and informed Raleigh police officers, yet was repeatedly shocked, lost consciousness, and died an hour later.

In short—as Reuters put the point in a hair-raising 2023 article examining Axon’s corporate culture—tasing “can be fatal.”

How did this weapon become so dangerous?

First, They Aimed for the Pig

Company founder and CEO Patrick (Rick) Smith recalls a “catastrophe in Prague” in the 1990s:

“… I went to demo to their national police force and we had seven volunteers in a row. Nobody even fell down. They all fought through it.”

That wouldn’t do.

To create a shock that would sell, Smith ran experiments on a living pig.

“And then we could ramp up or down the intensity using some pretty gross, you know, system adjustments. And just doing that experiment and then observing the muscle contractions of that pig, we were able to very quickly identify what we needed to change.”

The pig was only an animal, you say? At the end of the day, we’re all animals. What’s done to one will afflict us all. And it will disproportionately afflict those human groups most likely to be treated as subhuman. By 1999, Rick Smith’s TASER M26s were shocking their targets’ central nervous systems to control muscle movements, along with inflicting pain.

Axon Buys Out Competitor, Consolidates Control

In 2018, Axon bought out NYPD bodycam supplier Vie Vu LLC (“VieVu”). Axon went on to fight Federal Trade Commission monopoly complaints all the way to the U.S. Supreme Court. The Supremes backed Axon on a jurisdictional point, and the FTC dropped its case. This left Axon with heavy control that remains in place to this day. By 2022, Axon could claim some 17,000 out of about 18,000 U.S. police agencies as clientele.

Cities pay far more for Axon’s policing products now, without the competition from VieVu. In 2023, a group of cities (Howell, New Jersey; Baltimore, Maryland; and Augusta, Maine) went to court to challenge Axon’s monopoly (with mixed results so far).

Axon continues to accumulate control. It’s able to charge hefty subscription rates by bundling report-writing tech with its physical tools. In 2024, Axon introduced Draft One, which turns footage from its bodycams into police reports, using a variant of ChatGPT.

Now, with Border Patrol and Immigration & Customs Enforcement urged to buy bodycams (Chuck Schumer’s concept of a new, improved ICE?), Axon stands to gain massively.

Are We Feeling Safe Yet?

Axon’s got lots of irons in the fire. With its 2024 launch of Body Workforce, Axon insinuated its surveillance gear into hospitals and medical offices, giving the company access to protected health information. Even retail managers are getting Axon’s bodycam pitches.

Got a doorbell camera? Know that police can and do get access to doorbell data unbeknownst to the customers who paid to create it. Axon’s involved with Amazon’s Ring doorbell cameras. (Heads up: During a six-month period last year, Ring shared video or other content in response to 977 police requests, and shared non-content data 1,448 times, reported The New York Times. Most doorbell owners weren’t told.)

Then there’s the profit potential in military and police drones. Axon’s on it. Last year TheStreet® published a how-to piece on investing in the new asset class, and specifically in Axon, so you can personally profit. Maybe not as much as the Axon CEO profits.

The Seattle Police Department has engaged Axon in a first step to deploy drone surveillance in the city. In drone surveillance projects, Axon’s partner of choice is Skydio, purveyor of reconnaissance drones to the IDF.

In the wake of the Uvalde school killings, Smith announced that Axon would roll out drone-based electroshock weapons. Smith’s concept? Drones in the hallways, drones entering classrooms through special vents. Smith’s announcement set off concerns in the Axon ethics board—concerns that the drones could potentially intrude on privacy, exacerbate racial injustice, and create additional hazards to life and safety. The majority of Axon’s AI ethics board decided to resign. Which raises questions about why an ethics board would be formed—yet not consulted in advance of such a startling announcement from the CEO.

What Could Possibly Go Wrong?

Next, watch as Axon takes over emergency call services and shapes responses with artificial intelligence. By promising quicker and stronger responses to calls, Axon is poised to amass a sprawling network of data that overlaps policing and social control.

The ACLU warns that AI can digest biases from data fed into it. Biases in social control are already well out of hand, with Trump’s ICE now openly profiling, arresting, and caging people based on appearance or accent.

Who’s checking up on what’s fed to corporate-owned machines? Who ensures that whatever mistakes or bias creep into AI-generated incident reports don’t impact charging, detention, and punishment?

“Axon is tracking police use of the technology at a level that isn’t available to the police department itself,” the Electronic Frontier Foundation has found. Axon’s system is designed to be opaque. And as the EFF observes, the consequences for lying “may be more lenient for a cop who blames it on the AI.” Another issue brought up by the ACLU is the increased likelihood that police will simply forget details when they haven’t done the writing work themselves.

In March 2025, the Utah government enacted a law forcing police to disclose any use of generative AI. Soon, Seattle urged police to create similar policy. In January 2026, California enacted a law barring police from using their Draft One tools without retaining the original AI-generated report and creating a record-keeping protocol. Maybe this stuff isn’t so convenient for cities as Axon likes to make out.

For those who want a more complete overview of fusing AI into surveillance and social control, I’ll point to this session, hosted by Joshua Frank of CounterPunch for Haymarket Books.

Lee Hall holds an LL.M. in environmental law with a focus on climate change, and has taught law as an adjunct at Rutgers–Newark and at Widener–Delaware Law. Lee is an author, public speaker, and creator of the Studio for the Art of Animal Liberation on Patreon.