Google’s AI workers want to cure cancer, not kill for the US and Israeli military. That’s why they are now trying to unionise for the first time ever, writes an employee at DeepMind.

Today, members of the Communication Workers Union (CWU) at the London office of Google DeepMind have informed management in a letter that, having gained a significant number of union members, they now intend to seek formal union recognition.

This is the first time a major artificial intelligence (AI) lab has applied for union recognition. The CWU’s United Tech and Allied Workers (UTAW) branch will seek to negotiate with employers over how our work is used, to discuss the recent military contracts that Google has signed, and to secure assurances over automation-led redundancies.

Our situation has been brewing for some time. Much of the interest in organising began last year, when Google ‘amended’ its AI principles document by dropping its pledge against developing AI for military or government surveillance purposes, a move made without any input from DeepMind’s safety and responsibility team, whose role is to guide appropriate and ethical applications of AI.

When several hundred employees signed a petition against this change, Google chose to ignore it. For many of us working in DeepMind, this lack of transparency and employee engagement was part of a much broader trend: policy being altered to meet commercial demands, with Google taking an increasingly hands-on approach in the process.

Before concerns over the AI charter, employees also showed great unease over ‘Project Nimbus’, a $1.2 billion cloud computing contract the Israeli government arranged with Google. DeepMind workers suspected that AI developed by both Google and DeepMind employees was being used directly in the Gaza genocide. This was confirmed last year, when a customer support whistleblower at Google Cloud revealed that the Israeli military was using Google Gemini to analyse drone footage.

Google is the most profitable publicly traded company on earth, and these contracts are far from an economic necessity. To prove the point, the rival lab Anthropic had its products removed from the US Department of War after the lab’s CEO, Dario Amodei, refused to allow its work to be used for domestic surveillance or for fully autonomous weapons. Amodei stated that his employees ‘would rather not work with the Pentagon than agree to uses of its tech that may undermine, rather than defend, democratic values.’ There is no reason why Google cannot extend the same assurances to us.

Our members want the ability to step away from work which violates their personal morals. Many of us at DeepMind work here because we bought into the mission to ‘build AI responsibly to benefit humanity’. Even though these military contracts make a mockery of that slogan, some work which reflects it continues to happen in our offices. Union members work on the application of AI to predict extreme weather events, for example. They also optimise energy grids and contribute to research on Alzheimer’s and cancer. Union members worked on the advances in protein structure prediction which won the 2024 Nobel Prize in Chemistry. These are applications of cutting-edge technology that contribute to the social good.

The push to use these advanced models to murder or spy on people is a betrayal of what many of us thought we were working towards. As balloting and negotiations between the union and Google take place over the coming months, the changes won here could influence the direction of other AI labs, as both OpenAI and Anthropic seek to scale up their London offices, and could affect the British tech sector as a whole.

Certainly, technological advances can compromise and complicate morals. Modern AI models can quite easily generalise beyond the context for which they were trained, so AI can end up being used for war without the engineers who built it having intended any military application. The crucial point is that protection against such misuse can only be secured through political means. This is why so many of us felt compelled to act and unionise: so that we can be heard, and so that agreements can be forged which help to shape a new legal framework under British employment law.

The idea of tech workers organising may seem strange to many. After all, we are paid well, and our sector is nearly entirely non-unionised. But we are not alone; we are merely some of the first. And in our new industry, we are expressing some of the oldest principles of trade unionism: firstly, that we demand a greater say over what we spend our lives creating, and secondly, that our effort need not facilitate brutality and cruelty — it can help create a better world.

This article was originally published by The Tribune; please consider supporting the original publication.