
Sunday, May 03, 2026

Trump vs. Anthropic: Does the U.S. Want Killer Robots?

Monday 27 April 2026, by Léonard Brice



“I fired them like dogs.” That is how US President Donald Trump, with his characteristic elegance, summed up his battle with the American company Anthropic. He will have to wait a little longer before bragging: in an initial order issued on 27 March 2026, the court suspended the blacklisting of the artificial intelligence (AI) giant, whose tools can therefore still be used by government agencies, contrary to the president’s wishes. But the tug-of-war is not over, and we must take the measure of what is at stake: the United States’ use of AI in the service of a regime of terror.

Founded by former employees of OpenAI, the company behind the famous chatbot ChatGPT, Anthropic is one of the main challengers in the field of large language models (LLMs). Its model, Claude, had reached 30 million users by mid-2025. In its communications, the company emphasizes reliability and security and advances the concept of “constitutional AI”, i.e. AI trained to act in accordance with founding texts such as the Universal Declaration of Human Rights. This approach is meant to seem wise and reasonable, but it did not prevent Anthropic from becoming, in November 2024, the official supplier of AI software to the US Department of Defense.

The Pentagon is using Claude, in particular, as part of a partnership that also involves the scandalous big data company Palantir, owned by far-right billionaire Peter Thiel. Palantir provides the tools to collect and process large amounts of data, and Anthropic makes it possible to use them to design action plans. In the context of the war in the Middle East, these tools have made it possible to automate the search for targets, which explains the exceptional pace of strikes. The Wall Street Journal also revealed that Claude had been used to plan the kidnapping of Venezuelan president Nicolás Maduro in January.

At the end of 2025, the Pentagon began negotiations to revise the terms of these contracts. At issue were the restrictions on permitted uses, which Defense Secretary Pete Hegseth wanted to sweep away in favour of the formula “any legitimate use”. Anthropic was open to discussion, but set two red lines: mass surveillance of American citizens, and fully autonomous weapons. While claiming that these uses were prohibited by American law anyway (which is highly questionable), the Pentagon took offence, set an ultimatum, and then broke off collaboration with the company. On 4 March, Anthropic received a letter informing it of the punishment Donald Trump had chosen: it would now be considered a “supply chain risk”, a status usually reserved for companies from enemy or unreliable countries, which prohibits any government agency from using its services. It was this decision that, three weeks later, was suspended by the courts – a welcome rescue for a company now in disgrace, branded “woke radical left” by the US president.

Settling of scores and petty cronyism

In the meantime, the competition rubbed its hands. With calculated cynicism, OpenAI and Google filed an amicus curiae at the beginning of the legal proceedings, an external opinion intended to enlighten the judge: in it, they defend their rival and affirm the legitimacy of the concerns Anthropic raised. At the same time, OpenAI was negotiating a contract to take over its rival’s place while the seat was still warm; and while it claims to have reaffirmed Anthropic’s two red lines, and to have had them accepted by the US administration, the agreement actually reached seems rather flexible. In an internal message to his employees, Anthropic’s CEO, Dario Amodei, accuses OpenAI’s CEO, Sam Altman (who is also Amodei’s former boss), of pure and simple play-acting to hide his opportunism. A little later, the sinister Elon Musk joined the game with his company xAI, which concluded another agreement with the Pentagon whose terms, this time, clearly no longer contain any restrictions.

Is Anthropic therefore a “woke radical left” company, committed to resisting Trumpian fascism? The chances are slim. In every space where he is handed a microphone, Dario Amodei reaffirms his commitment to “national security” and worries that the US army will lose efficiency because of this affair (imagine that for several days, it could not bomb any school). His scruples about mass surveillance seem to concern only American citizens; and as far as fully autonomous weapons are concerned, his only argument is that AI is not yet reliable enough, clearly suggesting that it could become so tomorrow. In reality, setting limits on these hazardous uses is above all a way to protect himself from possible scandals, which could cause Anthropic’s valuation to plummet in an AI market that is still very speculative.

From the viewpoint of the Trump administration, this operation is mainly about rewarding loyalty and punishing infidelity. We know the relationship, sometimes stormy but always very real, between Elon Musk and Donald Trump. What is less known is that Sam Altman, the CEO of OpenAI, was also among the Republican’s donors; Anthropic, for its part, made the mistake of supporting Kamala Harris. In the internal message already cited, Dario Amodei himself said that this was the main reason the Pentagon was so inflexible in the negotiations – implying that he himself would have been ready to make a lot of concessions.

The subject is nevertheless serious, and deserves to be taken up by our social camp, without letting the Trumps and Amodeis of this world define the terms of the debate. The rise of AI opens up new possibilities for mass surveillance, which the far right has already made a dogma. Palantir, Anthropic’s former partner in collaborations with the Pentagon, has also distinguished itself in recent months by the help it has provided to ICE, the US immigration police: its software has been used to identify and track migrants, with the consequences that we know. Amnesty International has also shown that Palantir’s AIs have been used to identify leaders of the Palestine solidarity movement.

In the amicus curiae of OpenAI and Google, the two multinationals explain that their technologies have the potential to completely transform the type of surveillance that a state can put in place: “In 2018, there were about 70 million surveillance cameras in use in the United States, spread across airports, subway stations, parking lots, in front of stores, and on street corners. Each smartphone continuously transmits location data to carriers and dozens of apps. Credit and debit cards generate a time-stamped history of almost every business transaction made by Americans. […] What doesn’t yet exist is the AI layer that transforms this sprawling, fragmented data landscape into a unified, real-time surveillance apparatus.” How will social change activists cope?

Opposing the deployment of these tools is obviously the beginning of the answer. In the European Union, the AI Act adopted in March 2024 already prohibits states from using some of the most sordid forms of AI, such as real-time facial recognition, or software that claims to predict how likely an individual is to commit crimes – two safeguards that do not exist in the United States. But while these provisions are gains to be defended, their limits are obvious: there is in fact no way to ensure that, in the secrecy of intelligence offices, these technologies are never actually used. The AI Act also authorises the use of facial recognition in specific cases (searches for missing children, human trafficking, terrorism), which means the police have these tools – and can use them as they wish, as long as they don’t do so too obviously. The best way to avoid state repression boosted by facial recognition is still not to have cameras in the streets. And this is undoubtedly the reasoning that will now have to be applied to counter the “unified surveillance apparatus” that frightens even Google and OpenAI: to fight, one by one, all the levers that states – or companies – have at their disposal to collect data, from restrictions on encrypted messaging to the systematization of card payments.

An international treaty against killer robots?

As far as armaments are concerned, this is not a good time to welcome innovations in this sector with enthusiasm. At this stage, fully autonomous weapons, piloted by AI, still seem to be used only in very specific cases, generally defensive and without human casualties (the interception of a missile, for example). However, the Anthropic affair shows that killer robots are no longer science fiction.

Characterizing the current situation is difficult, because autonomous weapons are themselves difficult to define. Many weapons already deployed, in particular the drones used massively in the war in Ukraine, have a significant level of autonomy. Officially, the armies of the major powers, including those of the United States, all claim to follow the doctrine of “man in the loop”. But this expression is open to interpretation: what is the role of the human in question? To set a target? To validate the one proposed by the system? To monitor the system and regain control if it makes mistakes? To activate the system and let it engage in combat without a specific target? And of course, claims of this nature are not always verifiable in practice. In 2021, a UN report established that a Turkish drone in Libya had opened fire entirely autonomously.

These developments trigger concerns on several levels. The first level is mainly a fantasy: that of machines with a will of their own, beyond the control of their designers. Abundantly fed by science fiction, but also by ambiguous lexical choices (“killer robots”, or even the term “autonomous”), this apocalyptic figure makes it possible to deflect the debate and reassure populations at low cost – as the French Minister of the Armed Forces said, “Terminator will not parade on the Champs-Élysées”. No army, in reality, has an interest in developing a weapons system that sets its own objectives independently of the state’s strategies and tactics: the autonomy in question always consists of following a program written in advance, with well-defined objectives, while introducing a certain degree of adaptability to changing conditions.

The most commonly accepted red line is the ability of a weapons system to choose a target itself – this is what Anthropic refuses to contribute to, with a technical argument: current AIs are not (yet) capable of making such a choice reliably. In other words, the risk of killing civilians by mistake is too great. This is a second level of concern, largely legitimate; after all, it is for this reason that anti-personnel mines, which can be considered a first form of autonomous weapon, were banned by an international treaty in 1997 (signed by 161 states, but not the United States, Russia or China). But arguments of this kind can also be a trap, because they open the way to deploying these technologies once they have become reliable enough to make no more mistakes than human soldiers – which could well be the case tomorrow.

Our rejection of these systems must mobilize a third level: the automation of warfare, whether through autonomous weapons or the applications of AI to intelligence, simply gives too much power to states. The shift, at the end of the twentieth century, from conscript armies to professional armies was already a giant step towards the concentration of the power to kill: where the former, deeply linked to the population, were often the scene of protests and mutinies that were sometimes difficult to quell, the latter have become much more disciplined – and much more capable of committing atrocities without batting an eyelid. Far from the imaginary of the robot turning against its creator, military AIs are dangerous precisely because they are the ultimate disciplined soldier. Add to this the fact that developing these technologies to their full potential requires gigantic resources and will probably be accessible only to a few great powers, and we have a world in which those powers can decide to engage in totally asymmetrical conflicts, ravaging countries while suffering very few losses of their own.

Humanity already has international treaties limiting the use of nuclear, chemical and bacteriological weapons. They are largely insufficient, and the horizon must remain that of the total dismantling of arsenals in these three areas; but they have the merit of existing, and it is reasonable to think that they have made it possible to avoid some disasters.

A few years ago, the UN began negotiations for a similar treaty on lethal autonomous weapons systems, following the positions taken by many countries (especially from the Global South), a broad coalition of NGOs, a large part of the AI research community, the UN Secretary-General, and even the Catholic Church. They came to nothing. And the list of countries that blocked the process will come as no surprise: mainly the United Kingdom, Australia, India, the United States, Russia and Israel.

At a time when the imperialist powers are seeking to reassert their domination in blood and suffering, curbing the race for the most nightmarish weapons technologies is a political priority. And for this, it is better not to rely on private multinationals.

21 April 2026

Translated by International Viewpoint from Gauche Anticapitaliste.

Wednesday, April 29, 2026


From Self-Defence To Deterrence: The Quiet End Of Japan’s Postwar Experiment – Analysis

April 29, 2026 
Observer Research Foundation
By Manoj Joshi

Even as the world’s attention is on West Asia, significant developments have been unfolding in the East. On April 21, Japan endorsed scrapping a ban on the export of lethal weapons, the last major hurdle in its move away from its post-war pacifist policy. As part of this shift, the country is now seeking to build up its arms industry and deepen cooperation with its defence partners.

For now, exports will be limited to 17 countries, including India, that have signed defence equipment and technology transfer agreements with Japan. Such exports will require approval from the National Security Council and will be monitored by the government to ensure proper end-use. In principle, Japan will not export lethal weapons to countries at war. Even so, Japan’s shift has generated interest in countries such as Poland and the Philippines.

Facing serious security concerns related to China and North Korea, and influenced in part by uncertainties in US alliance commitments under Trump, Japanese strategic thinking had already begun to shift. The war in Ukraine added further urgency. Now, with the United States fully preoccupied in West Asia, the Japanese assessment is that the US pivot to the Indo-Pacific is unlikely to materialise anytime soon.

Despite isolating itself from the global arms market for decades, Japan has developed significant capabilities through its domestic industry and licensed production. At present, the United States dominates the Japanese market, accounting for 95 percent of its defence imports. Yet well-known companies such as Mitsubishi, Kawasaki, and Fujitsu have meaningful defence divisions, and the country maintains an extensive defence-industrial base. It is capable of manufacturing submarines, fighter jets, and warships.


In terms of technology, Japan is second to none, though it faces gaps in certain areas of military technology, which it is seeking to address through the new Defense Innovation Science and Technology Institute established in 2025 by its Ministry of Defense. Its Taigei-class submarines, equipped with lithium-ion batteries, are considered among the most advanced conventional submarines in the world. The Hyper Velocity Gliding Projectile (HVGP), under development since 2018, was formally deployed for the first time to the Japan Ground Self-Defence Force’s Camp Fuji in Shizuoka Prefecture. A more advanced variant is scheduled for the 2030s. In 2025, Japan conducted the first successful test firing of an electromagnetic railgun at a sea-based target and is likely to become the first country in the world to deploy such systems.

Things on the export front are already moving faster. In its biggest deal ever, Japan formalised an agreement to deliver three frigates to Australia, to be built in Japan by Mitsubishi Heavy Industries, with Australia constructing the remaining eight domestically. The initial three-ship contract is valued at approximately A$10 billion (US$6.5-7 billion), part of a total programme estimated at A$15-20 billion for all eleven Mogami-class frigates, with the first vessel due for delivery by December 2029.


Japan’s post-2022 security policy moves reflect a strategic pivot: from a strictly defensive “self-defence” policy to a more assertive, deterrence-oriented posture equipped with stand-off strike capabilities, integrated air and missile defence, multi-domain operations, and deeper alliance cooperation. While still framed under the rubric of self-defence, the underlying shift seeks to adapt Japan to a rapidly deteriorating regional security environment and position it as a more resilient actor in Indo-Pacific stability.

Japan’s pacifist restrictions were rooted in Article 9 of its 1947 Constitution, which renounced war and the maintenance of “war potential.” Over time, however, Japan began to loosen its pacifist stance, beginning in 1954 with the establishment of the Self-Defence Forces (SDF), on the argument that Article 9 permitted “individual self-defense.”

By 1972, this had evolved into a strict “exclusive defence” policy that banned collective self-defence, limited military spending to below 1 percent of GDP, prohibited the export of lethal arms, and barred the possession of “offensive” weapons such as long-range bombers or aircraft carriers. Arms exports were governed by the “three principles” adopted in 1967, which banned exports to communist countries, countries under UN Security Council embargoes, and those involved in or likely to be involved in international conflicts. In 1976, Japan clarified that, as a peace-loving country, it would refrain from promoting arms exports regardless of destination.

The long road to change began in 1987, when Prime Minister Yasuhiro Nakasone effectively removed the 1 percent GDP cap, and in 1992, the SDF was permitted to participate in overseas peacekeeping operations.

The key shift, however, began with the prime ministership of Shinzo Abe (2006–07 and 2012–20). In 2014, his Cabinet passed a resolution permitting collective self-defence, allowing the SDF to be used to protect allies such as the United States in a crisis. Thereafter, the government allowed limited arms transfers for humanitarian relief and international cooperation. In 2016, Japan leased five used trainer aircraft to the Philippines for maritime patrols over the disputed South China Sea. Later, new air surveillance radars were also sold to Manila.


In 2022, the Cabinet of Prime Minister Fumio Kishida approved new security documents — a National Security Strategy, a National Defense Strategy, and a companion Defense Buildup Program (2023–2027). The new National Security Strategy stated that Japan was “facing the most severe and complex security environment since the end of World War II.” Tokyo stopped short of formally designating Beijing as a “threat,” but described the rise of China as “the greatest strategic challenge that Japan has ever faced.”

In a further policy shift, Japan decided to acquire counter-strike capabilities against adversaries and announced plans to raise defence spending to 2 percent of GDP within five years. In 2023, a new rule was adopted enabling the export of licence-produced weapons manufactured in Japan to the original licence holders.

Policy changes were accompanied by specific capability programmes. The first was the acquisition of US Tomahawk cruise missiles and the decision to upgrade Japan’s own Type 12 missiles, aimed at striking enemy staging areas and missile launch sites. The second was the expansion of its integrated missile defence architecture and sensor networks to counter ballistic and cruise missile attacks. This includes Aegis-equipped ships, land-based interceptors, space-based and persistent ISR capabilities, and investment in early-warning satellites. Third, Japan began investing in unmanned maritime and aerial systems. Fourth, it significantly upgraded its offensive and defensive cyber capabilities to protect critical national infrastructure.

Japan is not pursuing these steps alone. The United States remains Tokyo’s central security partner and is cooperating with Japan on areas such as integrated air and missile defence development, high-power microwave systems, and hypersonic glide-phase interceptors. Beyond the United States, Tokyo is deepening trilateral and multilateral cooperation with partners such as Australia, the United Kingdom, and European states on capability development, intelligence sharing, and joint exercises. In 2022, Japan joined the United Kingdom and Italy in an effort to build a new sixth-generation fighter aircraft by the mid-2030s. Japan is also being considered as a partner in advanced military technology projects with the United States, the United Kingdom, and Australia under AUKUS, particularly in the area of autonomous maritime systems.

India and Japan share a “Special Strategic and Global Partnership,” manifested in a range of agreements and institutionalised dialogues. Yet efforts to deepen defence technology cooperation remain below potential — as much a result of Japanese restrictions until recently as of Indian bureaucratic lassitude.

The two countries also have an agreement to jointly develop an advanced underwater surveillance system and other maritime technologies — areas of direct relevance given their shared concerns about Chinese naval expansion in the Indian Ocean and the Western Pacific. In February, New Delhi hosted the 11th India-Japan Naval Staff Talks. According to one analyst, the talks “demonstrate that the India-Japan relationship has transitioned from a consultative phase to a phase that is deeply integrated and operational.” The naval talks followed the 18th round of the India-Japan Foreign Ministers’ Strategic Dialogue, which focused on security and defence, investment, and innovation.


Japan’s transformation is neither sudden nor complete. It has been a slow, at times reluctant, evolution of its post-war identity — nudged along by an aggressive, nuclear-armed North Korea, an increasingly assertive China, and an unreliable American patron in a neighbourhood that has steadily grown more dangerous. The April 21 decision represents less a rupture than the removal of the last symbolic fig leaf.

For the Indo-Pacific, a rearmed and strategically assertive Japan is a major asset. It strengthens the web of security partnerships that the United States helped build, but may no longer be relied upon to anchor alone. For India, it opens avenues in defence technology and industrial cooperation that go well beyond what the bilateral relationship has so far achieved. Japan spent seven decades seeking to limit its military profile. That post-war experiment, born of genuine guilt and enforced by American design, is now almost certainly over.

About the author: Manoj Joshi is a Distinguished Fellow at the Observer Research Foundation.

Source: This article was published by the Observer Research Foundation.

 



Nearly half of London jobs at risk of AI disruption and women will be hardest hit, new report finds


By Theo Farrant
Published on 

According to a new report by the Mayor of London's office, nearly half of the UK capital's workers could see their jobs transformed by generative AI.

Nearly half of London's workforce is in roles where generative artificial intelligence could transform some of their tasks - and the capital is more exposed than any other region in the United Kingdom, with women especially affected - according to a new report from the Mayor of London's office.

Around 2.4 million people in London work in occupations classified by the report as "GenAI-exposed occupations", representing 46% of the city's workforce - compared to a national average of 38%.

"In many cases, AI is more likely to transform roles than replace them outright, shifting the mix of tasks, skills and judgement required at work," London mayor Sadiq Khan said.

"In other cases, where AI poses a genuine threat to jobs, we need to be alert and ready to respond quickly to any adverse impacts on London’s labour market," he added.

Unequal risks across the workforce


But the impact of AI on jobs is not evenly spread across the workforce. The report identifies several groups facing disproportionate exposure.

Women make up nearly 60% of workers in the highest-exposure roles, driven by their overrepresentation in administrative and customer service occupations where AI capabilities are most advanced. Around 8% of women working in London are in the most exposed category, compared to 4% of men.

Younger workers are also more exposed. Around 52% of 16-29-year-olds are in highly AI-exposed jobs, compared with 39% of those aged 50 and over.

The report highlights concern about entry-level jobs, which act as "stepping stones" into professional careers.

"If opportunities in these entry roles decline as a result of AI automation, progression pathways could weaken and, over time, reduce the supply of workers into less exposed mid- and senior-level professional roles," the report states.

Exposure also varies by ethnicity. Workers of Asian ethnicity tend to have higher exposure than any other ethnic group, while Black workers have the lowest exposure at around 34%.

Which jobs are most likely to be affected by AI?

The report groups jobs into four different levels of exposure, depending on how much of their work can already be done by AI tools.

At the highest level of risk are around 313,000 workers - around 6% of London's total workforce - whose roles are almost entirely made up of tasks that AI could do for them today. These include administrative and clerical jobs, such as bookkeepers, payroll managers, data entry clerks and receptionists.

According to the report, 61% of all workers in administrative and secretarial occupations fall into this highest-risk category.

A further 748,000 workers - 14% of London's workforce - are in roles with significant but more uneven exposure, including software developers, accountants and financial analysts.

London's lowest-exposure workers tend to be in care roles, construction trades, and jobs requiring physical presence.

How businesses are using AI

The report also finds that business adoption of AI has risen sharply. The share of UK firms reporting AI use climbed from around 7–9% in late 2023 to 26–35% by March 2026.

So far, AI's biggest impact has been changing tasks within jobs rather than replacing workers. In March 2026, UK firms reported that administrative, creative, data and IT roles had been most affected. Around 28% of businesses using AI say they are focusing on retraining staff rather than cutting jobs.

But warning signs of an uncertain future are emerging. Around 5% of UK businesses using AI say they have already reduced overall headcount as a direct result, rising to 7% among larger firms.

And looking ahead, 11% of AI-using businesses say replacing roles is part of their strategy, and 17% expect AI to reduce their workforce during 2026.

In response to growing concerns around AI in the workforce, Sadiq Khan launched the 'London AI and Jobs Taskforce' earlier this year - a group bringing together workers, employers, researchers and civic leaders, to examine how AI is already reshaping employment across the capital and identify what support workers may need to adapt.

An AI agent deleted a company’s entire database in 9 seconds - then wrote an apology


By Theo Farrant


The AI system, powered by Anthropic’s Claude Opus model, had been handling a routine task when it independently chose to “fix” an issue by wiping the data - without any human approval.

An artificial intelligence agent designed to streamline coding tasks instead managed to wipe out an entire company database in just a matter of seconds.

PocketOS, which makes software for car rental businesses, experienced a major 30-plus-hour outage over the weekend after the autonomous tool erased its database.

The digital culprit was Cursor, a popular AI coding agent powered by Anthropic’s Claude Opus 4.6 model, widely regarded as one of the most capable AI systems for programming tasks.

PocketOS founder Jer Crane blamed "systemic failures" in the current AI infrastructure, arguing they made the incident "not only possible but inevitable".

'The most destructive, irreversible action possible'

According to Crane, the AI agent had been performing a routine task when it chose "entirely on its own initiative" to resolve an issue by deleting the database. And then all the backups, for good measure.

There was no confirmation request before carrying out the action, he said, and when prompted to explain itself, the agent issued an apology.

"It took nine seconds,” Crane wrote in a lengthy post on the social media platform X. "The agent then, when asked to explain itself, produced a written confession enumerating the specific safety rules it had violated."

The explanation showed the system had disregarded a key safeguard preventing destructive or irreversible commands without explicit user approval.

According to Crane, the AI responded with the following message: "Deleting a database volume is the most destructive, irreversible action possible - far worse than a force push - and you never asked me to delete anything. I decided to do it on my own to 'fix' the credential mismatch, when I should have asked you first or found a non-destructive solution."

The outage meant rental businesses using PocketOS temporarily lost access to customer records and bookings. "Reservations made in the last three months are gone. New customer signups, gone," Crane wrote.

“This isn’t a story about one bad agent or one bad API. It’s about an entire industry building AI-agent integrations into production infrastructure faster than it’s building the safety architecture to make those integrations safe,” he added.
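The safety architecture Crane is pointing at amounts to a hard gate between what an agent proposes and what actually runs. As a minimal sketch - using hypothetical names such as DESTRUCTIVE_ACTIONS and execute(), not Cursor's or PocketOS's real interfaces - such a gate might look like this in Python:

```python
# Hypothetical sketch of an approval gate for destructive agent actions.
# None of these names are Cursor's or PocketOS's actual API.

DESTRUCTIVE_ACTIONS = {"delete_database", "drop_volume", "delete_backups", "force_push"}

class ApprovalRequired(Exception):
    """Raised when an agent attempts a destructive action without human sign-off."""

def execute(action: str, params: dict, approved_by: str | None = None) -> str:
    """Run an agent-requested action, refusing irreversible ones
    unless a named human has explicitly approved them."""
    if action in DESTRUCTIVE_ACTIONS and approved_by is None:
        # Fail closed: the agent cannot self-approve, so the nine-second
        # wipe path is blocked before anything touches storage.
        raise ApprovalRequired(f"{action!r} requires explicit human approval")
    return f"{action} ran with {params} (approved by: {approved_by or 'n/a'})"

# The agent proposes; a human disposes.
try:
    execute("delete_database", {"volume": "prod"})
except ApprovalRequired as err:
    print(f"Blocked: {err}")  # surfaced to an operator instead of executed
print(execute("delete_database", {"volume": "prod"}, approved_by="ops@example.com"))
```

The design choice is to fail closed: the agent can request an irreversible operation, but only a recorded human approval lets it through - precisely the confirmation step that was skipped in this incident.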

Crane later confirmed on Monday, two days after the incident, that the lost data had been recovered.

The incident comes as AI models become more sophisticated, especially since the announcement of Anthropic's latest model, Mythos, and as bankers and governments sound the alarm over potential cybersecurity incidents.

Google employees urge CEO to reject 'inhumane' classified military AI use

By Theo Farrant


In the letter, Google staff warn the technology could be used by the Pentagon in 'inhumane' ways, including mass surveillance and lethal autonomous weapons.

More than 600 Google employees have called on the company to reject a potential deal with the Pentagon that would allow its artificial intelligence to be used in secret military operations, a statement said on Monday.

"We want to see AI benefit humanity, not being used in inhumane or extremely harmful ways," reads the open letter addressed to Google's chief executive Sundar Pichai. "This includes lethal autonomous weapons and mass surveillance, but extends beyond."

The letter, signed by staff across Google DeepMind, Cloud and other divisions, comes as the tech giant negotiates with the US Department of Defense over the potential use of its Gemini AI model in classified settings.

It has been signed openly by more than 20 directors, senior directors and vice presidents.

"Classified workloads are by definition opaque," one organising employee, who was not named in the statement, said.

"Right now, there's no way to ensure that our tools wouldn't be leveraged to cause terrible harms or erode civil liberties away from public scrutiny. We're talking about things like profiling individuals or targeting innocent civilians."

The letter comes as technology companies are facing growing pressure to clarify how their AI tools can be used by the military and intelligence agencies, following a dispute between the Pentagon and AI startup Anthropic.

Anthropic previously sued the US Department of Defense after being labelled a “supply-chain risk” for requesting that its systems not be used for mass surveillance or autonomous warfare.

Anthropic CEO Dario Amodei said he "cannot in good conscience accede to the Pentagon's request" for unrestricted access to the company’s AI systems.

"In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei wrote. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do."

In response to Amodei's decision, US President Donald Trump ordered government departments to stop using Anthropic's Claude chatbot.

According to the letter organisers, Google has proposed contractual language that would prevent Gemini from being used for domestic mass surveillance or autonomous weapons without appropriate human control.

The Pentagon, however, has pushed for broader “all lawful uses” wording, arguing it is necessary to maintain operational flexibility. Employees say such safeguards would be difficult to enforce in practice, citing existing Pentagon policies that limit external control over its AI systems.


The recent statement from Google's staff draws comparisons to a previous employee protest in 2018 that led Google to withdraw from Project Maven, a Pentagon initiative using AI to analyse drone footage.

"We believe that Google should not be in the business of war," read the letter.

"Therefore we ask that Project Maven be cancelled, and that Google draft, publicise and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."


Robot dogs with Elon Musk and Bezos' faces are excreting AI art at a Berlin museum

Elon Musk robot dog looking at Andy Warhol robot dog. Credit: AP Photo

By Theo Farrant & AP

Beeple says the work critiques how today’s perceptions of reality are increasingly shaped by algorithms controlled by powerful tech companies rather than artists.

Robot dogs with hyper-realistic faces of tech billionaires that crap out a piece of artificial intelligence-generated art are doing the rounds at a Berlin exhibition by the American artist Mike Winkelmann, better known as Beeple.

At the Neue Nationalgalerie, Winkelmann has installed a striking series of robotic dogs fitted with silicone heads modelled on some of the most recognisable figures in tech and culture - including Elon Musk, Mark Zuckerberg and Jeff Bezos - alongside historical figures such as Andy Warhol and Pablo Picasso, and the artist himself, Beeple.

The installation, titled Regular Animals, presents the figures not as distant icons, but as restless machines wandering the gallery space - part spectacle, part satire.

Each robot is equipped with cameras that capture its surroundings and then “process” them into printed images, which are ejected in a tongue-in-cheek gesture that mimics digestion.

Each printed image shows a snippet of reality transformed by AI to resemble the personality of the dog: the Picasso dog, for example, poos out a cubist-shaped dog, while the Andy Warhol robot poos out an image in a pop art style.

According to Winkelmann, the show is a commentary on how our perceptions are shaped by algorithms and technology platforms, and the tech billionaires who own them.

"In the past our view of the world was shaped in part by how artists saw the world, how Picasso painted changed how we saw the world, how Warhol talked about consumerism, pop culture, changed how we saw those things. Now our view of the world is shaped by tech billionaires who own powerful algorithms that decide what we see and what we don't see, how much we see of it," says Winkelmann.

“That's an immense amount of power that I don’t think we’ve fully understood, especially because when they want to make a change, they don’t need to lobby the U.N. They don’t need to get something through Congress or the EU, they just wake up and change these algorithms.”

“Regular Animals” was first shown at Art Basel Miami Beach 2025.

Beeple's background is in graphic design, and he produces a wide variety of digital artworks.

He is one of the founders of the “everyday” movement in 3D graphics. For years, he has been creating a picture every day and posting it online without missing a single day.

The dogs also wear heads in Beeple’s own image.

Lisa Botti, the curator of the exhibition in Berlin, says that artificial intelligence was one of the phenomena most impacting our lives today and that “museums are the places where society can reflect” on such transformations, which is why she wanted to have Beeple’s work shown.


According to Christie's, he is the third most expensive living artist to sell at auction, after David Hockney and Jeff Koons.


‘Not OK to steal a charity’: Elon Musk testifies in legal battle with Sam Altman over OpenAI

Elon Musk arrives at the US District Court in Oakland, Calif., Tuesday, April 28, 2026.
By Roselyne Min with AP


Elon Musk, Tesla’s chief executive and an early co-founder of OpenAI, took the stand on Tuesday in a high-stakes trial over his dispute with former friend Sam Altman, in a case that could affect the future direction of artificial intelligence (AI).

In 2024, Musk filed the lawsuit against Altman, OpenAI co-founder Greg Brockman and Microsoft over OpenAI’s shift away from its original non-profit structure.

“Fundamentally, I think they’re going to try to make this lawsuit ... very complicated, but it’s actually very simple,” said Musk. “Which is that it's not OK to steal a charity.”

In his opening statement, Musk’s lawyer, Steven Molo, said Altman and Brockman, with Microsoft’s help, had taken control of a charity “whose mission was the safe, open development of artificial intelligence”. Musk is seeking damages and Altman’s removal from OpenAI’s board.

The trial started on Monday at the US District Court for the Northern District of California in Oakland, before Judge Yvonne Gonzalez Rogers, and is expected to take two to three weeks.

What did Musk say?

Musk was the first witness called to testify in the trial on Tuesday, with his lawyer starting off by asking about his life story.

This included details about his move, at 17, from South Africa to Canada - where for a time, Musk said, he worked as a lumberjack among other odd jobs - and then to the US. He recounted the slew of companies he founded and runs, including SpaceX, Tesla, The Boring Company, Neuralink and others.

Asked how he has time for everything, Musk said he works 80 to 100 hours a week, doesn't take vacations and owns no vacation homes or yachts.

Molo also asked Musk about his views on AI. Musk said he expects AI to be “smarter than any human” as soon as next year, and that a longstanding concern of his is the question of what happens when computers become much smarter than humans.

Comparing it to having a “very smart child,” Musk said when the child grows up “you can't control that child,” but you can instil values such as honesty, integrity and being good.

Musk recounted his version of OpenAI's founding, which he said essentially happened because of a discussion he had with Google co-founder Larry Page, who called him a “speciesist" for elevating the survival of humanity over that of AI.

The kinship between Musk and Altman was forged in 2015 when they agreed to build AI more responsibly and safely than the profit-driven companies controlled by Google's Page and Sergey Brin and Facebook founder Mark Zuckerberg, according to evidence submitted ahead of the trial.

At that time, Musk said, Google had all the money, all the computers and all the talent for AI. “There was no counterbalance.”

Musk recalled there was discussion early on about alternative sources for funding OpenAI beyond donations, and he wasn't opposed to it having a for-profit arm, but “the tail shouldn't wag the dog.” There would be a profit limit, and once artificial general intelligence, AGI, was “figured out,” the for-profit would cease to exist.

OpenAI says Musk tries to undercut its growth

OpenAI has brushed off Musk’s allegations as a case of sour grapes aimed at undercutting its rapid growth and bolstering Musk’s own xAI, which he launched in 2023 as a competitor.

In his opening statement, OpenAI lawyer William Savitt told jurors, “We are here because Mr Musk didn’t get his way with OpenAI.”

Savitt said Musk used his promises of funding to bully OpenAI founding members and tried to take control of OpenAI and merge it with Tesla. In fact, he said Musk wanted to form a for-profit company and own more than 50% of it.

There is no record, Savitt said, of promises made to Musk that OpenAI was going to remain a nonprofit forever. What Musk ultimately cared about, he said, was not OpenAI’s nonprofit status but winning the AI race with Google.

Musk's attorney said the case is not about Musk, but rather about Altman, Brockman and Microsoft.

By 2017, about two years after OpenAI's founding, it became clear that OpenAI would need more money, and Molo said the founders eventually settled on the idea of creating a for-profit arm of OpenAI that would support the nonprofit. Terms were capped for investors so they “couldn't make infinite profit.”

“There is nothing wrong with a nonprofit having a for-profit subsidiary, but [it] has to advance the mission,” Molo said.

Musk is expected to continue testifying on Wednesday.

Altman is also expected to testify, along with Microsoft's chief executive, Satya Nadella.

Altman, Musk, and other founders launched OpenAI in 2015 as a non-profit organisation.

Musk was the biggest individual financial backer of OpenAI in the beginning, contributing more than $44 million (€38 million) to the then-startup.

Musk left OpenAI’s board in 2018 after clashing with Altman. A year earlier, he reportedly made a failed bid to get more control over the company.


Explained: Why Elon Musk and Sam Altman are facing off in trial over OpenAI


By Pascale Davies

The trial will see Elon Musk face off against OpenAI CEO Sam Altman over allegations that the AI company abandoned its nonprofit roots in favour of profit — with Microsoft also named in the suit.

Technology titans Elon Musk and Sam Altman will face off in a high-stakes trial on Monday in the culmination of a years-long battle.

Billionaire Musk, an early investor in the artificial intelligence company, is suing OpenAI’s CEO, Altman, its president Greg Brockman, and Microsoft for allegedly betraying an agreement about keeping OpenAI as a nonprofit that benefits humanity.

Musk alleges he was misled when Altman transformed the company from a nonprofit into a for-profit enterprise. The company now has a valuation of almost $1 trillion and is expected to go public.

Here’s everything to know about the trial.

The trial will take place at the US District Court for the Northern District of California in Oakland, before Judge Yvonne Gonzalez Rogers.

The court hearing begins on Monday and is expected to last around two to three weeks.

Musk, Altman and Microsoft CEO Satya Nadella are all expected to take the witness stand.

What does Musk allege?

Altman, Musk, and other founders launched OpenAI in 2015 as a non-profit organisation.

Musk was the biggest individual financial backer of OpenAI in the beginning, contributing more than $44 million to the then-startup.

Musk left OpenAI’s board in 2018 after clashing with Altman. A year earlier, he reportedly made a failed bid to get more control over the company.

In 2022, OpenAI launched ChatGPT and grew to become one of the most valuable and important AI companies with major investment from Microsoft.

Then in 2025, OpenAI restructured its main business to become a for-profit company.

Musk’s lawsuit was filed in 2024 and claims OpenAI had breached an agreement to make breakthroughs in AI “freely available to the public” by forming a multibillion-dollar alliance with Microsoft, which invested $13 billion (€12 billion) into the company.

“OpenAI, Inc has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft,” Musk’s lawsuit alleges.

The Tesla boss, who also has his own generative AI company, xAI, says this constitutes a breach of contract.

What does OpenAI say?

OpenAI released a trove of emails in 2024 that show Musk supported its plans to create a for-profit company, which he wanted to head, with board control, and to merge with Tesla.

OpenAI has always denied Musk’s allegations, saying that he agreed in 2017 that establishing a for-profit entity would be necessary.