Saturday, March 07, 2026

Four Union Strategies to Fight A.I.

Source: Labor Notes

A corporate artificial intelligence frenzy is sowing fear for workers on a massive scale. Seventy-one percent of people in the U.S., according to a Reuters poll on A.I., are concerned “too many people will lose jobs.”

Wall Street and Big Tech are running a huge hype machine to back up their massive, risky investment in A.I., pledging it will drive a “productivity surge,” meaning fewer workers and more profits.

But workers can take heart that, so far, it’s mostly hot air. To date, A.I. is generating little profit. It can be helpful at a few tasks—rough drafts of computer code, summaries of reams of data—but is rarely the equal of human talent otherwise.

Nonetheless, investors are on track to pour more than $5 trillion worldwide into A.I. over the next five years. To make good on that cash outlay, expect CEOs to sell A.I. as the salve for everything from logistics to loneliness.

A.I. is a management power grab, disguised as an inevitable technical upgrade. To fight it, workers can use four strategies proven in the past: name the real problem; unionize it; ransom it; and block it.

NAME THE REAL PROBLEM

The first step for workers is to cut through the hype. At your job, what are the specific uses of automation or A.I. that management aims to roll out?

Which uses are likely to be a dud, and which are a real threat to union power, job security, and the quality of what you do? Are there uses that your co-workers want, on their own terms?

These tough questions are best answered collectively, with knowledge from different departments and job types, whether that discussion takes place in union meetings or on lunch breaks.

At the United Caucuses of Rank-and-File Educators conference last summer, teacher activists from across the country held a discussion like this. Many hated A.I. being pushed into the classroom. Others felt it could make onerous parts of their job easier.

The teachers opposed to A.I. shared examples of how it had been used against workers and how it was promoting plagiarism and misinformation. Participants keyed in on a few uses they might want as options, like class planning or reviewing students’ past work, but agreed it should never be mandated by management.

National Nurses United released “A.I. justice” principles last year that highlight specific threats, like an automated algorithm deciding how many nurses to schedule on shift or which tests should be ordered for a patient. The union argues that computer systems can’t replace human expertise.

Executives often tell on themselves. To stay ahead of management’s game, unions can recruit member volunteers to read what CEOs in your sector are bragging about in the business press and scour the web for what they’re promising their higher-ups.

In fact, the heaviest A.I. users are in the C-suite. A recent survey of the U.S. and five other countries found 87 percent of executives and 57 percent of managers were using A.I. tools, versus 27 percent of employees. These tools can’t nurse a patient, but they can hack a passable version of management’s tasks: surveilling workers, summarizing information, and telling investors what they want to hear.

Job cuts from A.I. may be a real threat in your sector, but not because automation can actually do your work well. Executives may not care whether students are nurtured, real facts are reported, or patients are healed. They just want to make a buck. A.I. gives them cover to allow the quality of work to degrade.

Software executive and critic Anil Dash recently observed that half a million tech workers have been laid off since the release of ChatGPT mainly because execs “now have A.I. to use as an excuse for going after workers they’ve wanted to cut all along.”

Junior programming jobs have been heavily cut, while senior engineers are kept on to fix the buggy code dreamed up by A.I. But where will the next generation of senior engineers come from, if they’re not learning on the job as junior coders? These short-sighted cuts are creating new leverage for experienced programmers, who could push worker-run solutions for training the next generation.

UNIONIZE IT

New tech could become an excuse to outsource your work to non-union hands. To keep it union, you can bargain contract language, make direct demands on management, and take a proactive union approach to learning technology.

In the 1970s and ’80s, Mike Parker, an Auto Workers electrician and Labor Notes co-founder, kept track of auto company plans for robotics and computers, and developed union training programs on the new gear.

When managers proposed to bring in the robots, they said non-union specialists would have to take on installation and maintenance. Parker and his co-workers asserted they were ready to handle the work on union terms, and often won.

It’s too bad the union as a whole didn’t follow his lead. Every decade since the late 1940s, auto company CEOs have made grand promises of automation by robotics, and Auto Workers top officers generally gave up the fight. Still, most job cuts were caused by work speed-up, mandatory overtime, and outsourcing to third-party parts suppliers and non-union Southern factories.

As the San Francisco-to-Oakland Bay Bridge got rebuilt two decades ago, private contractors planned to outsource the work on massive new welding machines to non-union workers. “The company came to the union and said, ‘We’ve got a contract with you, but you don’t have welders certified on those machines locally,’” said Mike Munoz, then a leader with the Pile Drivers in Oakland.

“Our union bought one of the machines and started teaching the members to weld on it,” said Munoz. “We can train our members to do anything. We certified all the welders who went out on the Bay Bridge. It became our work because we threw ourselves into it.”

When it comes to new A.I. and automation schemes from management, workers can refuse to let non-union contractors take charge. An army of consultants has sprung up to advise bosses on A.I. implementation for hospitals and schools, grifting millions from actual education and care.

Your union contract may already have language requiring management to bargain over major changes in unit work. Where it doesn’t, you can push for specific new language. If you accept some A.I. tools, like to summarize a thousand pages of patient records, which union job classifications will run the robots and double-check their work? Keeping the work in union hands is a first step to steer what A.I. is and isn’t used to do.

RANSOM IT

Another union strategy worth considering: force management to pay workers extra, as a condition of rolling out new technology.

The most famous deal like this, for longshore workers, shows short-term gains and big long-term limits for the approach.

In a landmark 1960 agreement, the militant West Coast Longshore union (ILWU) agreed to allow mechanization and shipping containers at the ports, in exchange for expanded pay, pensions, and a guarantee of a certain number of union jobs at each port. If the port owners dropped hiring below that number, they still had to pay that number of union members indefinitely.

The agreement came with big tradeoffs, as members were split into three tiers with radically different job security. Only the A-tier got the guaranteed jobs or payouts. When port owners slashed hiring, A-tier longshore workers and union officers didn’t feel the urgency to organize the jobs in new hubs of the supply chain.

“The containers go inland,” said Peter Olney, who came into the union as a lead organizer decades later. “Do you follow the work inland, unloading and warehousing them? That fell by the wayside.”

Another kind of ransom can be won by those building out the new technology and its infrastructure. Construction workers have a particularly direct kind of leverage over the A.I. boom: it can’t be built without them.

Much of the massive data center construction behind A.I. is getting unionized, even in far-flung boomtowns. That’s because building trade unions have national networks of trained, traveling members to call up through their hiring halls, and can meet the labor demand fast.

In the next wave, many “hyperscale” data centers are planned to be 10 times the size of those already built. The largest will guzzle as much electricity as the entire city of Philadelphia.

The vast labor demand of those projects gives building trade unions leverage, if they seize it: to bring new members in, to turn down work on the projects facing the most local opposition, and to demand concessions for public services and the environment.

An upsurge of local grassroots campaigns blocked 25 data centers last year. When unions partner with community groups, they both can squeeze more from developers and governments, like dropping the billion-dollar data center tax giveaways that can bankrupt local schools and roads. In California, such alliances unionized gas and solar power plants and won a few community demands.

At best, these kinds of “ransom” deals can raise the costs for management to force in a new technology, and buy time for workers to go on offense with organizing.

BLOCK IT

With enough strength, workers may manage to draw the line against certain uses of A.I. altogether.

In their 2023 Hollywood strikes, the Writers Guild and the Screen Actors Guild won restrictions on the use of A.I. writing and on replicas of actors’ faces and voices. But in a media industry that’s getting more consolidated and corporate every year, bosses are finding workarounds, and unions are fighting to keep up.

The NewsGuild launched a national campaign in December for “News, Not Slop,” using contract negotiations and public pressure to demand limits on A.I.-generated news content.

In their recent strike, 15,000 New York City nurses won language against some kinds of A.I. misuse.

Oil refinery Steelworkers, in national pattern bargaining this year, aim to block management from using A.I. tools to monitor workers’ movements, assess their productivity, and dish out automatic discipline.

Existing contract language on working conditions could be used against degrading uses of A.I. Use your discipline process to limit the use of automated demerits. Use worker oversight of safety to push back against allowing A.I. tools to make risky decisions. Use staffing limits to draw a line against bigger workloads disguised as high-tech efficiency.

The most degrading effect of A.I., after all, isn’t just to our work, but to our skills and imaginations. When music and movies are made by a robot cobbling together past works, it cheats audiences and artists alike of newer, wilder dreams.

Even in more rote work, we learn by doing. A.I. is no unstoppable force of progress. In fact, if it’s done how CEOs want, it would dry up the well of progress: worker know-how.

Standing up to management’s technological power grab is one big step to take responsibility for the world we make on the job—and to keep open the path to a better one.

Source: Global Policy Journal

Artificial Intelligence (AI) is one of the hottest topics out there. And for a good reason. AI is transforming industries and everyday life. But how much do we understand about AI? How powerful is it? Is it compatible with capitalism? How will it affect the workforce? Is it becoming “too important to fail?” Is there a progressive alternative to AI?

C. P. Chandrasekhar, a world-renowned scholar of finance, financial policy and development and Senior Research Scholar at the Political Economy Research Institute (PERI) at the University of Massachusetts Amherst, addresses these questions in the interview that follows. He is emeritus professor at the Centre for Economic Studies and Planning, Jawaharlal Nehru University, New Delhi, where he taught for more than 30 years. In addition to many articles in academic journals and serving as a regular economic columnist for Frontline (Economic Perspectives), Business Line (Macroscan) and Economic and Political Weekly, he is the author of scores of books, including Karl Marx’s Capital and the Present. In 2009, Chandrasekhar received the Malcolm Adiseshaiah Award for contributions to economics and development studies.

C. J. Polychroniou: Artificial Intelligence (AI) has been integrated into business and our daily lives. Among other sectors of the economy, AI is said to be transforming the finance and banking industries.  I’d like to start by asking you about capitalism and AI. Is capitalism compatible with AI?

C. P. Chandrasekhar: Innovation and technological change under capitalism are shaped by the needs of Capital in pursuit of profits. But that does not make technological change under capitalism all bad. Both under capitalism and beyond, these technologies can be shaped and deployed to serve the needs of a more people-centric, sustainable development agenda.

A matter of concern is how the observed evolution of Artificial Intelligence is being influenced by the needs of Capital. One obvious way is through the displacement of labor, with attendant implications for employment and the conditions of labor. In the hype over AI, however, the transformation that AI induces in the rest of the economy is expected to give rise to new employment opportunities that would neutralize AI’s substitution of humans. Whether labor substitution would merely reduce the probability of human error, or go awry at the hands of sycophantic or hallucinating bots and software robots, remains an open question. According to the hype, Artificial Intelligence, as a generic, general-purpose technology, would ensure the revolutionary transformation of almost every area of human activity, combining productivity increases, employment expansion and improved human well-being through effects on the delivery of health and educational services, for example. None of that is as yet validated by experience.

The other impact of concern is an intensification of the atomization of society, in which relations with bots increasingly substitute for a wide variety of human relations, resulting in new forms of alienation. The effects of this are already being widely observed and reported.

An overarching problem is that, since the evolution of AI and its deployment in multiple activities are largely controlled and driven by private capital, there is little effort to assess and counter the socially and economically disruptive consequences that development could have. Oligopolistic competition in AI development only makes its evolution more “autonomous” and “spontaneous.” Moreover, the speed of that evolution and (as some have correctly argued) the ambiguity as to why large language models, given how they are developed and trained, tend to deliver the “capabilities” they do make regulation difficult and slow to respond.

So, the question is not whether capitalism is compatible with AI, but whether social cohesion and human well-being are compatible with the AI evolution delivered by Capital in unbridled pursuit of profit.

C. J. Polychroniou: How exactly is AI transforming finance and what should we expect its impact to be on financial markets?

C. P. Chandrasekhar: There are two issues to be discussed here. The first is how AI is transforming finance. The other is how yield-hungry finance is subordinating and driving AI development.

The transformation of finance and financial markets by AI follows the ongoing automation of code-writing tasks in the finance space and the introduction of algorithms. Algorithmic trading speeds up investment responses to market movements and even determines the size and sequencing of the components of a large transaction based on stored instructions, to ensure better returns without continuous, error-prone human intervention. There is considerable evidence suggesting that this increases market volatility and can even trigger “flash crashes.” With AI agents trained to do such tasks faster and more “independently,” these tendencies have only intensified, along with fears that a range of human-run interventions will now be performed by digital proxies.
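The “size and sequencing” logic described above can be pictured with a minimal sketch of order slicing, in which a large order is split into smaller child orders executed over time to limit market impact. This is an illustrative toy, not any real trading system’s API; the function name and parameters are hypothetical.

```python
# Toy sketch of algorithmic order slicing: a large parent order is
# broken into roughly equal child orders (a crude time-weighted
# schedule). All names and figures here are illustrative assumptions.

def slice_order(total_shares: int, num_slices: int) -> list[int]:
    """Split a parent order into num_slices child orders that sum exactly
    to total_shares."""
    base, remainder = divmod(total_shares, num_slices)
    # Spread the remainder one share at a time across the first slices,
    # so the schedule always adds back up to the full order.
    return [base + (1 if i < remainder else 0) for i in range(num_slices)]

schedule = slice_order(100_000, 8)
print(schedule)       # eight child orders summing to 100,000 shares
print(sum(schedule))
```

Real execution algorithms condition each slice on stored instructions and live market data, which is precisely what lets them react faster than any human desk.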

But the more concerning trend is the subordination of AI by finance in search of high returns based on exploding valuations driven by speculation. The surge in the Nasdaq and S&P 500 indices is reflective of this speculation-driven boom. A few firms have driven the rising trend in these indices, exaggerating their weights in determining market performance. Leading them have been the “Magnificent Seven” (Alphabet, Amazon, Apple, Meta, Microsoft, NVIDIA and Tesla) that have been the best performing stocks. Six of those seven firms are not merely so-called “tech firms,” but have a presence of one kind or another in the Artificial Intelligence (AI) space. They account for close to 30 per cent of the weight by market capitalisation in the S&P 500, driving the movement in that index. Ten leading firms accounted for almost 80 per cent of the S&P 500’s net income growth in the year to November 2025.

The spike in the share prices of these firms has meant that the price-earnings ratios of many of them are well above the average of around 19-20 for S&P 500 firms. NVIDIA, which crossed the $5 trillion market capitalization mark, recorded a price-to-earnings (P/E) ratio (calculated by dividing the company’s stock price by its earnings per share over the previous 12 months) of 58 in August 2025. Oracle, which is diversifying into the AI space from its base as a provider of database software, also recorded high figures. And smaller companies are breaking records too. Palantir, an AI-powered data-mining company notorious in some circles for allegedly facilitating state surveillance, is being contracted by a range of commercial firms seeking to deploy artificial intelligence. It has seen its stock price more than double three years running.
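The P/E calculation defined above is simple arithmetic, which a short sketch makes concrete. The figures below are invented for illustration, not real company data.

```python
# Trailing price-to-earnings ratio, as defined in the text:
# share price divided by earnings per share over the previous 12 months.
# The numbers used below are made-up examples, not actual market data.

def pe_ratio(price: float, trailing_eps: float) -> float:
    """Return trailing P/E: price / earnings per share (last 12 months)."""
    if trailing_eps <= 0:
        raise ValueError("P/E is undefined for zero or negative earnings")
    return price / trailing_eps

# A stock trading at $180 with $3.00 of trailing earnings per share has
# a P/E of 60: investors pay $60 today per dollar of last year's earnings.
print(pe_ratio(180.0, 3.0))  # 60.0
```

The higher the ratio, the more of the stock price rests on expected future earnings rather than demonstrated ones, which is the point the interview goes on to make.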

A high P/E ratio indicates that investors are betting on the revenues of these firms rising significantly in the future, relative to their performance in the recent past, warranting higher stock price and capitalization values today. More diversified firms may be able to protect and raise their revenues irrespective of the future of AI. But AI-specific firms would soar, survive or crash depending on how successful AI is.

That explains why AI firms have been hyping up the potential of a future AI-powered world. Wherever those activities are commercial, AI is presented as promising rapid and large increases in profits once the technology is deployed.

There is, however, much cause for scepticism. According to an MIT study, just 5 per cent of AI projects are extracting value, while the majority record no measurable profit impact. In that assessment: “Most GenAI systems do not retain feedback, adapt to context, or improve over time.”

That would mean that, unless there is a dramatic transformation of AI’s performance in use, deployment and willingness to pay will taper off (or even decline) until an uncertain turnaround actually materializes. There is evidence that business uptake of AI tools is already stalling. The problem is that heavily AI-dependent firms may not be able to wait long for revenues and returns, given the huge investments being made in AI development.

C. J. Polychroniou: Financial instability is inherent to capitalism. Doesn’t AI bring extra risks to the financial system? Indeed, is there a conceptual framework for assessing the systemic implications of AI for the financial system?

C. P. Chandrasekhar: The biggest threat arises because the speculative spike in share values has fueled huge expenditure outlays on the acquisition of chips, investments in data centers, employee remuneration amplified by competitive poaching of talent, and investments in the power needed to support the development boom. Firms like OpenAI have been outlaying huge sums on their own operations as well as on contracts with chip makers like NVIDIA, with cloud providers (like Amazon Web Services, Microsoft Azure, and Google Cloud Platform), and with independent data center and cloud computing firms like Oracle. Those hardware and service providers in turn are investing large sums to expand operations and production to meet this rapid growth in demand. S&P Global estimates that, as a result of the persistence of the “global construction frenzy that shows no signs of slowing,” investments in the land, buildings, hardware and energy to establish data centers totaled $60.8 billion in 2024 and $61 billion by November 2025. But that is just one corner of the AI space. According to Goldman Sachs, “AI hyperscalers” spent $106 billion in capital expenditure in the third quarter of 2025 alone, a 75 per cent year-on-year growth rate. It estimates that spending in 2026 as a whole will exceed $525 billion.

These firms, and those investing in and lending to them, are predicting rapid growth based on projected demand that hinges on an AI boom. But it appears that, if and when AI models find their feet, they may not be able to extract as much revenue as expected, because of competition. Competition between OpenAI’s ChatGPT, Google’s Gemini and surprise entrants like DeepSeek from China (promising comparable features at much lower investment) could belie the expectations of firms outlaying huge sums of capital in pursuit of promised high yields.

That prospect is particularly daunting for two reasons. The first is the role of debt in financing investments in the AI space. AI-related companies in the US alone issued bonds valued in excess of $200 billion in 2025, with bond sales of around $180 billion by a few firms like Meta, Alphabet and Oracle accounting for a quarter of corporate borrowing in 2025. The unbridled spending by these firms has been encouraged not just by hugely optimistic estimates of the future of AI, but by the large volumes of still-cheap liquidity available in the system, a result of the easy-money policies of central banks and a liberalized financial system in which non-bank financial players, such as private equity and private credit firms, mobilize that liquidity and deploy it for profit. It is now becoming clear that a disproportionate share of such funds is being directed to AI and AI-related firms.

Those are liabilities that need to be serviced independent of the revenues earned. But the risk involved is being discounted because of the large volume of cheap and yield-hungry liquidity in circulation.

The second is that the fragility deriving from such trends runs substantially deeper than what is visible, because of the practice of “circular financing.” NVIDIA, for example, intended to invest $100 billion in OpenAI, which in turn has promised to buy $100 billion of NVIDIA chips for its ChatGPT development. That kind of entangled and concentrated exposure among firms riding on the promise of a huge AI profit boom increases fragility and worsens the impact that a collapse of the boom would entail.

Thus, the euphoric rise in substantially leveraged investments rides on the expected performance of a few entangled firms in a single tech space. This concentrated exposure of financial firms and investors based on mere expectations of dazzling future earnings has raised concerns that once again the US is the centre of a bubble that could unravel, as occurred in 2008. Yet the government and regulators are not stepping in to temper if not end the euphoria, because the investments that the boom is giving rise to and the luxury consumption that the beneficiaries of the financial boom are indulging in are partly responsible for much of the growth the US economy records.

C. J. Polychroniou: There are concerns about an AI bubble and that it may burst because of vast AI investments, but David Sacks, who is President Trump’s artificial intelligence and crypto czar, has gone on record stating that there will be no government bailout of the AI industry. Yet Sarah Myers West and Amba Kak argued in a recent Op-Ed in the Wall Street Journal that the government is already bailing out AI. What are your own thoughts on this matter? 

C. P. Chandrasekhar: The massive AI spend financed with debt and fueled by and fueling share price valuation spikes has exposed many sectors of the US economy to the AI bubble. This implies that if and when the AI bubble bursts the fall-out would be wide and severe. This makes the AI sector “too important to fail.” So, the state would have no option but attempt a bailout. The uncertainty relates to the ability of the government to implement a bailout and therefore to the success of that effort. The Federal Reserve’s balance sheet is so bloated that it would be hard pressed to inject as much cheap liquidity into the system to save financial and non-financial firms as it did last time. And the elasticity of the spending power of the US Treasury is also likely to be limited by the political standoff over deficits and spending that have led to repeated prolonged shutdowns of the US government. With the capacity to bail out firms and the economy thus restricted, stalling the downturn would be difficult. But then, those governing capitalism never learn enough from history to prevent periodic collapses—even when that error could precipitate a crisis that is as bad as it was in the 1930s.

C. J. Polychroniou: One last question regarding AI and capitalism. There are many experts, including Geoffrey Hinton, the so-called “Godfather of AI,” who predict that AI will produce mass unemployment because that’s how capitalism works. Is there a leftist alternative to AI?

C. P. Chandrasekhar: Since so little is really known about where this technology is going, framing an “alternative” is not easy as of now. The progressive perspective on intervention today must focus on the evolution of the technology. It is imperative that the development of the technology is released from domination and subordination by finance and the big “tech” firms that have grown in size and control by riding on the boom. It also requires regulating both the evolution of the technology and its deployment, given the possibility that it can significantly change the way humans interact with each other and the world they inhabit. Leaving the development and deployment of a technology that can have far-reaching consequences to a spontaneous and uncontrolled “learning” process, necessitated by the desire to extract huge profits in the short run, is a recipe for disaster.