Thursday, November 20, 2025

Executive Order Attacking State AI Laws ‘Looks a Lot Like’ Industry Dictating Trump’s Policies

“Big Tech companies have spent the past year cozying up to Trump,” said one critic, “and this is their reward. It’s a fabulous return on a very modest investment—at the expense of all Americans.”



Tech executives including Nvidia cofounder and CEO Jensen Huang and Varda Space Industries cofounder Delian Asparouhov give a standing ovation to US President Donald Trump as he takes the stage at an AI summit on July 23, 2025 in Washington, DC.
(Photo by Chip Somodevilla/Getty Images)


Julia Conley
Nov 20, 2025
COMMON DREAMS

The White House is rapidly expanding on its efforts to stop state legislatures from protecting their constituents by passing regulations on artificial intelligence technology, with the Trump administration reportedly preparing a draft executive order that would direct the US Department of Justice to target state-level laws in what one consumer advocate called a “blatant and disgusting circumvention of our democracy”—one entirely meant to do the bidding of tech giants.

The executive order would direct Attorney General Pam Bondi to create an AI Litigation Task Force to target laws that have already been passed in both red and blue states and to stop state legislators from passing dozens of bills that have been introduced, including ones to protect people from companion chatbots, require studies on the impact of AI on employment, and bar landlords from using AI algorithms to set rent prices.

The draft order takes aim at California’s new AI safety laws, calling them “complex and burdensome” and claiming they are based on “purely speculative suspicion” that AI could harm users.

“States like Alabama, California, New York and many more have passed laws to protect kids from harms of Big Tech AI like chatbots and AI generated [child sexual abuse material]. Trump’s proposal to strip away these critical protections, which have no federal equivalent, threatens to create a taxpayer-funded death panel that will determine whether kids live or die when they decide what state laws will actually apply. This level of moral bankruptcy proves that Trump is just taking orders from Big Tech CEOs,” said Sacha Haworth, executive director of the Tech Oversight Project.

The task force would operate on the administration’s argument that the federal government alone is authorized to regulate commerce between states.

Shakeel Hashim, editor of the newsletter Transformer, pointed out that that claim has been pushed aggressively in recent months by venture capital firm Andreessen Horowitz.

President Donald Trump “and his team seem to have taken that idea and run with it,” said Hashim. “It looks a lot like the tech industry dictating government policy—ironic, given that Trump rails against ‘regulatory capture’ in the draft order.”




The DOJ panel would consult with Trump and White House AI Special Adviser David Sacks—an investor and cofounder of an AI company—on which state laws should be challenged.

The executive order would also authorize Commerce Secretary Howard Lutnick to publish a review of “onerous” state AI laws and restrict federal broadband funds to states found to have laws the White House disagrees with. It would further direct the Federal Communications Commission to adopt a new federal AI law that would preempt state laws.

The draft executive order was reported days after Trump called on House Republicans to include a ban on state-level AI regulations in the must-pass National Defense Authorization Act, which House Majority Leader Steve Scalise (R-La.) indicated the party would try to do.

The multipronged effort to stop states from regulating the technology, including AI chatbots that have already been linked to the suicides of children, comes months after an amendment to the One Big Beautiful Bill Act was resoundingly rejected in the Senate, 99-1.

Travis Hall, director for state engagement at the Center for Democracy and Technology, suggested that legal challenges would be filed swiftly if Trump moves forward with the executive order.

“The president cannot preempt state laws through an executive order, full stop,” Hall told NBC News. “Preemption is a question for Congress, which they have considered and rejected, and should continue to reject.”

David Dayen, executive editor of The American Prospect, said the harm the draft order could pose becomes clear “once you ask one simple question: What is an AI law?”

The draft doesn’t specify, but Dayen posited that a range of statutes could apply: “Is that just something that has to do with [large language models]? Is it anything involving a business that uses an algorithm? Machine learning?”

“You can bet that every company will try to get it to apply to their industry, and do whatever corrupt transactions with Trump to ensure it,” he continued. “So this is a roadmap to preempt the vast majority of state laws on business and commerce more generally, everything from consumer protection to worker rights, in the name of preventing ‘obstruction’ of AI. This should be challenged immediately upon signing.”

The draft order was reported amid speculation among tech industry analysts that the AI “bubble” is likely about to burst, with investors dumping their shares in AI chip manufacturer Nvidia and an MIT report finding that 95% of generative AI pilot programs are failing to produce a return on investment for companies. Executives at tech giant OpenAI recently suggested the government should provide companies with a “guarantee” for developing AI infrastructure—which was widely interpreted as a plea for a bailout.

At Public Citizen, copresident Robert Weissman took aim at the White House for its claim that AI does not pose risks to consumers, noting AI technologies are already “undermining the emotional well-being of young people and adults and, in some cases, contributing to suicide; exacerbating racial disparities at workplaces; wrongfully denying patients healthcare; driving up electric bills and increasing greenhouse gas emissions; displacing jobs; and undermining society’s basic concept of truth.”

Furthermore, he said, the president’s draft order proves that “for all his posturing against Big Tech, Donald Trump is nothing but the industry’s well-paid waterboy.”

“Big Tech companies have spent the past year cozying up to Trump—doing everything from paying for his garish White House ballroom to adopting content moderation policies of his liking—and this is their reward,” said Weissman. “It’s a fabulous return on a very modest investment—at the expense of all Americans.”

JB Branch, the group’s Big Tech accountability advocate, added that instead of respecting the Senate’s bipartisan rejection of the earlier attempt to stop states from regulating AI, “industry lobbyists are now running to the White House.”

“AI scams are exploding, children have died by suicide linked to harmful online systems, and psychologists are warning about AI-induced breakdowns, but President Trump is choosing to protect his tech oligarch friends over the safety of middle-class Americans,” said Branch. “The administration should stop trying to shield Silicon Valley from responsibility and start listening to the overwhelming bipartisan consensus that stronger, not weaker, safeguards are needed.”

Trump Calls for GOP to Ram Through AI Regulation Ban in Must-Pass Military Spending Bill

“If lawmakers are serious about AI governance, they must create strong, enforceable national protections as a regulatory floor—not wipe out state laws so Big Tech can operate without consequence,” said one consumer advocate.


House Majority Leader Steve Scalise (R-La.) said on November 18, 2025 that Republicans will attempt to insert an amendment into the National Defense Authorization Act that would bar states from regulating artificial intelligence.
(Photo by Daniel Heuer/AFP via Getty Images)

Julia Conley
Nov 19, 2025
COMMON DREAMS

A Republican push to stop state legislatures from regulating artificial intelligence, including chatbots that have been found to pose harm to children, resoundingly failed over the summer, with 99 out of 100 senators voting against the provision in the One Big Beautiful Bill Act—but the previous rejection of the idea isn’t stopping President Donald Trump and GOP lawmakers from trying again to impose a moratorium.

On Tuesday, Trump posted on his Truth Social platform that House Republicans should take action against “overregulation by the States” in the AI field.

Claiming that “DEI ideology” in AI models in some states will “undermine this Major Growth ‘Engine’” and that “Investment in AI is helping to make the U.S. Economy the ‘HOTTEST’ in the World”—despite tech industry leaders’ warnings that the value of AI investments may have been wildly overestimated and the bubble may be on the cusp of bursting—Trump called on Republicans to include the state regulations ban in the National Defense Authorization Act (NDAA), “or pass a separate Bill.”

Also on Tuesday, House Majority Leader Steve Scalise (R-La.) told Punchbowl News that the GOP is considering adding language to the NDAA that would effectively ban state AI regulations, which have been passed in both Democratic- and Republican-led states. Those laws would be nullified if Republicans follow through with the plan.

Since the annual defense spending bill is considered a must-pass package by many lawmakers, inserting amendments related to other legislative goals is a common strategy used in Congress.

Trump previously tried to circumvent Congress’ rejection of the moratorium in July, when he announced his AI Action Plan.

Emphasizing that the anti-regulatory effort has been rejected by “an alliance of Democrats, Republicans, social conservatives, parents rights groups, medical professionals, and child online protection groups,” the consumer advocacy group Public Citizen on Tuesday called Trump’s renewal of the push “highly inappropriate” and said it “would risk stripping away vital civil rights, consumer protection, and safety authority from states without putting any federal guardrails in place.”

JB Branch, Big Tech accountability advocate at Public Citizen, said that “AI preemption strips away the safeguards states have enacted to address the very real harms of AI.”

“Big Tech and its allies have spent months trying to ban states from protecting their own residents, all while refusing to support any meaningful federal AI safeguards,” said Branch. “Congress should reject this maneuver outright. If lawmakers are serious about AI governance, they must create strong, enforceable national protections as a regulatory floor—not wipe out state laws so Big Tech can operate without consequence.”

On Tuesday, the Republican-controlled House Committee on Energy and Commerce held a hearing on “AI Chatbot Advantages and Disadvantages,” where one witness, psychologist Marlynn Wei, warned that “AI chatbots endorse users 50% more than humans would on ill-advised behaviors.”

In September, several grieving parents testified before the Senate Judiciary Committee that their children had died by suicide after being encouraged to take their own lives by AI chatbots.

At Tuesday’s hearing, Ranking Member Frank Pallone (D-NJ) said that “Congress must be sure to allow states to put in place safeguards that protect their residents.”

“There is no reason for Congress to stop states from regulating the harms of AI when Congress has not yet passed a similar law,” he said.

Rep. Lori Trahan (D-Mass.) also addressed the issue, suggesting it was surprising that the Republican members would bother holding a hearing on the harms of AI when they are planning to strip state lawmakers of their ability to protect their constituents from those harms.

“I’m having real difficulty in reconciling this hearing and all that we’ve heard about the risks of AI chatbots, especially to our children, with the attempt by the House Republican leadership to ban state-level AI regulations,” said Trahan. “Republicans’ push for this regressive, unconstitutional, and widely condemned AI policy is real and it’s unrelenting.”



“Let’s just say in public what you are pushing in private,” she added. “Don’t be holding these hearings about the risks of AI chatbots while behind closed doors you kneecap state legislatures from protecting their constituents. I mean, if the AI moratorium is the topic in the speaker’s office let’s make it so in this hearing room, because the American people deserve to know where you truly stand on AI regulation.”

Warren, Jayapal Introduce Bill Aimed at Curbing Corporate Meddling in Regulations

“While Donald Trump keeps selling away influence over our government, we’re fighting to ensure the rules are being written to help working Americans, not corporate interests,” said Sen. Elizabeth Warren.


Rep. Pramila Jayapal (D-Wash.) speaks at a rally to free Kilmar Abrego Garcia at Lafayette Park near the White House in Washington, DC on May 1, 2025.
(Photo by Bryan Dozier/Middle East Images/Middle East Images/AFP via Getty Images)


Brad Reed
Nov 19, 2025
COMMON DREAMS

Two progressive Democrats are teaming up to push legislation to curb corporate America’s capture of the federal government’s regulatory process.

Rep. Pramila Jayapal (D-Wash.) and Sen. Elizabeth Warren (D-Mass.) on Wednesday announced a new bill called the Experts Protect Effective Rules, Transparency, and Stability (EXPERTS) Act that aims to restore the role of subject matter experts in federal rulemaking.

Specifically, the bill would codify the Chevron doctrine, a 40-year legal precedent overturned last year by the US Supreme Court, which held that courts should be broadly deferential to decisions made by independent regulatory agencies about interpretations of congressional statutes.

The legislation would also push for more transparency by requiring the disclosure of funding sources for all “scientific, economic, and technical studies” that are submitted to agencies to influence the rulemaking process.

Additionally, the bill proposes speeding up the regulatory process by both “excluding private parties from using the negotiated rulemaking process” and reinstating a six-year limit for outside parties to file legal challenges to agencies’ decisions.

In touting the legislation, the Democrats pitched it as a necessary tool to rein in corporate power.

“Many Americans are taught in civics classes that Congress passes a law and that’s it, but the reality is that any major legislation enacted must also be implemented and enforced by the executive branch to become a reality,” said Jayapal. “We are seeing the Trump administration dismantle systems created to ensure that federal regulation prioritizes public safety. At a time when corporations and CEOs have outsized power, it is critical that we ensure that public interest is protected. This bill will level the playing field to ensure that laws passed actually work for the American people.”

Warren, meanwhile, argued that “giant corporations and their armies of lobbyists shouldn’t get to manipulate how our laws are implemented,” and said that “while Donald Trump keeps selling away influence over our government, we’re fighting to ensure the rules are being written to help working Americans, not corporate interests.”

The proposal earned an enthusiastic endorsement from Public Citizen co-president Lisa Gilbert, who described it as “the marquee legislation to improve our regulatory system.”

“The bill aims directly at the corporate capture of our rulemaking process, brings transparency to the regulatory review process and imposes a $250,000 fine on corporations that submit false information, among other things,” she said. “The bill is essential law for the future of our health, safety, environment, and workers. Public Citizen urges swift passage in both chambers.”

 

Half of novelists believe AI is likely to replace their work entirely, research finds




University of Cambridge






Just over half (51%) of published novelists in the UK believe that artificial intelligence is likely to end up entirely replacing their work as fiction writers, a new University of Cambridge report shows.

Close to two-thirds (59%) of novelists say they know their work has been used to train AI large language models (LLMs) without permission or payment.

Over a third (39%) of novelists say their income has already taken a hit from generative AI, for example due to loss of other work that facilitates novel writing. Most (85%) novelists expect their future income to be driven down by AI.

In new research for Cambridge University’s Minderoo Centre for Technology and Democracy (MCTD), Dr Clementine Collett surveyed 258 published novelists earlier this year, as well as 74 industry insiders – from commissioning editors to literary agents – to gauge how AI is viewed and used in the world of British fiction. *

Genre authors are considered the most vulnerable to displacement by AI, according to the report, with two-thirds (66%) of all those surveyed listing romance authors as “extremely threatened”, followed closely by writers of thrillers (61%) and crime (60%). 

Despite this, overall sentiment in UK fiction is not anti-AI, with 80% of respondents agreeing that AI offers benefits to parts of society. In fact, a third of novelists (33%) use AI in their writing process, mainly for “non-creative” tasks such as information searches.   

However, the report outlines profound concerns from the cornerstone of a publishing industry that contributes an annual £11bn to the UK economy, and exports more books than any other country in the world.

Literary creatives feel that copyright laws have not been respected or enforced since the emergence of generative AI. They call for informed consent and fair remuneration for the use of their work, along with transparency from big tech companies, and support in getting it from the UK government.

Many warn of a potential loss of originality in fiction, as well as a fraying of trust between writers and readers if AI use is not disclosed. Some novelists worry that suspicions of AI use could damage their reputation.

“There is widespread concern from novelists that generative AI trained on vast amounts of fiction will undermine the value of writing and compete with human novelists,” said Dr Clementine Collett, BRAID UK Research Fellow at Cambridge’s MCTD and author of the report, published in partnership with the Institute for the Future of Work.

“Many novelists felt uncertain there will be an appetite for complex, long-form writing in years to come.”

“Novels contribute more than we can imagine to our society, culture, and to the lives of individuals. Novels are a core part of the creative industries, and the basis for countless films, television shows, and videogames,” said Collett.  

“The novel is a precious and vital form of creativity that is worth fighting for.”

Tech companies have the fiction market firmly in their sights. Generative AI tools such as Sudowrite and Novelcrafter can be used to brainstorm and edit novels, while Qyx AI Book Creator or Squibler can be used to draft full-length books. Platforms such as Spines use AI to assist with publishing processes from cover designs to distribution.

“The brutal irony is that the generative AI tools affecting novelists are likely trained on millions of pirated novels scraped from shadow libraries without the consent or remuneration of authors,” said Collett.

Along with surveying a total of 332 literary creatives, who participated on condition of anonymity, Collett conducted focus groups and interviews around the country, and convened a forum in Cambridge with novelists and publishers. **   

Many novelists reported lost income due to AI. Some feel the market is increasingly flooded with AI-generated books, with which they are forced to compete. Others say they have found books under their name on Amazon which they haven’t written.

Some novelists also spoke of online reviews with telltale signs of AI, such as jumbled names and characters, that give their books bad ratings and jeopardise future sales.

“Most authors do not earn enough from novels alone and rely on income streams such as freelance copywriting or translation which are rapidly drying up due to generative AI,” said Collett. *** 

Some literary creatives envisioned a dystopic two-tier market emerging, where the human-written novel becomes a “luxury item” while mass-produced AI-generated fiction is cheap or free.

When it came to working practices, some in the study consider AI valuable in speeding up repetitive or routine tasks, but it was seen to have little to no role to play in creativity.  

Almost all (97%) novelists were “extremely negative” about AI writing whole novels, or even short sections (87% extremely negative). The use novelists felt least negative about was sourcing general facts or information (30% extremely negative), with around 20% of novelists saying they use AI for this purpose.

Around 8% of novelists said they use AI for editing text written without AI. However, many find editing to be a deeply creative process, and would never want AI involved. More than two-fifths (43%) of novelists felt “extremely negative” about using AI for editing text.

Forum participant Kevin Duffy, founder of Bluemoose Books, the publisher behind novels such as The Gallows Pole and Leonard and Hungry Paul – both now major BBC TV dramas – is on record in the report saying:

“[W]e are an AI free publisher, and we will have a stamp on the cover. And then up to the public to decide whether they want to buy that book or not. But let’s tell the public what AI is doing.” Many respondents echoed this sentiment. ****

The research found widespread backlash against a “rights reservation” copyright model as proposed by the UK government last year, which would let AI firms mine text unless authors explicitly opted out.

Some 83% of all respondents say this would be negative for the publishing industry, and 93% of novelists said they would ‘probably’ or ‘definitely’ opt out of their work being used to train AI models if an opt-out model were implemented.

The vast majority (86%) of all literary creatives preferred an ‘opt-in’ principle: rights-holders grant permission before AI scrapes any work and are paid accordingly. The most popular option was for AI licensing to be handled collectively by an industry body – a writers’ union or society – with half of novelists (48%) selecting this approach.

“Our creative industries are not expendable collateral damage in the race to develop AI. They are national treasures worth defending. This report shows us how,” said Prof Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy.

Some novelists worry AI will disrupt the “magic” of the creative process. Stephen May, writer of acclaimed historical novels such as Sell Us the Rope expressed anxiety over AI taking the required “friction” and “pain” out of a first draft, diminishing the final product.

“Novelists, publishers, and agents alike said the core purpose of the novel is to explore and convey human complexity,” said Collett. “Many spoke about increased use of AI putting this at risk, as AI cannot understand what it means to be human.”

Authors fear AI may weaken the deep human connection between writers and readers at a time when reading is already at historically low levels, particularly among the next generation: only a third of UK children say they enjoy reading in their free time.

Many novelists want to see more AI-free creative writing on the school curriculum, and government-backed initiatives aimed at finding new voices from underrepresented groups to counter risks of “homogeneity” in fiction brought about by generative AI.

The research reveals a sector-wide belief that AI could lead to ever blander, more formulaic fiction that exacerbates stereotypes, as the models regurgitate from centuries of previous text. Some suggest the AI era may see a boom in “experimental” fiction as writers work to prove they are human, and push the artistry further than AI.

“Novelists are clearly calling for policy and regulation that forces AI companies to be transparent about training data, as this would help with the enforcement of copyright law,” added Collett.

“Copyright law must continue to be reviewed and might need reform to further protect creatives. It is only fair that writers are asked permission and paid for use of their work.”

 


Notes:

* Of the published novelists involved in the survey, 90% are published “traditionally” through publishing houses, while 10% are self-published.  

** This consisted of 32 literary agents for fiction (9%), 258 published novelists (78%), and 42 professionals who work in fiction publishing (13%). The survey took place between February and May 2025.

*** The median income for an author in the UK in 2022 was just £7,000, far lower than the minimum wage. [Thomas, A., Battisti, M., & Kretschmer, M. (2022). UK Authors’ Earnings and Contracts 2022.]

**** Full quote from Kevin Duffy as featured in the report: “'For a small independent publisher of literary fiction like Bluemoose Books, our only stand is to say we don’t want any part of this, we are AI free, and we are an AI free publisher, and we will have a stamp on the cover. And then up to the public to decide whether they want to buy that book or not. But let’s tell the public what AI is doing. It’s got brilliant capacity to do fantastic things in other avenues, but for the creative industries and for literary fiction in particular, it is very limited.”

The full report, The Impact of Generative AI on the Novel, will be published on the Minderoo Centre for Technology and Democracy website: https://www.mctd.ac.uk/

The research in the report was supported by the BRAID (Bridging Responsible AI Divides) programme with funds received from the Arts and Humanities Research Council [grant number AH/X007146/1]. This report is published in association with the Minderoo Centre for Technology and Democracy at the University of Cambridge, and with the Institute for the Future of Work.

McCartney to release silent AI protest song


By AFP
November 18, 2025


Ex-Beatle Paul McCartney is releasing a silent song as a protest against UK copyright laws on the use of AI - Copyright GETTY IMAGES NORTH AMERICA/AFP KEVIN WINTER

Pop legend Paul McCartney will release a silent music track next month as part of a silent album to protest UK copyright law changes that would give exemptions to tech firms.

Other artists such as Hans Zimmer and singer Kate Bush have joined the project, highlighting what they say are the dangers artificial intelligence (AI) poses to the creative industries.

McCartney’s contribution appears on the album “Is This What We Want?” It will draw “attention to the damning impact on artists’ livelihoods controversial government proposals could cause,” the artists behind the project said in a statement.

Called “Bonus Track,” it is a two-minute, 45-second recording of an empty studio featuring a series of clicks.

More than 1,000 artists, including Annie Lennox, Damon Albarn and Jamiroquai, have collaborated on the silent album which was first released in February.

They maintain that the government’s law changes “would make it easier to train AI models on copyrighted work without a licence.”

“Under the heavily criticised proposals, UK copyright law would be upended to benefit global tech giants. AI companies would be free to use an artist’s work to train their AI models without permission or remuneration,” they added.

The changes “would require artists to proactively ‘opt-out’ from the theft of their work – reversing the very principle of copyright law,” they added.

Only 1,000 copies of the vinyl album have been pressed.

In May, some 400 writers and musicians including Elton John and Bush condemned the proposals as a “wholesale giveaway” to Silicon Valley in a letter to The Times newspaper.

Other signatories included the 83-year-old McCartney, singer-songwriters Ed Sheeran, Dua Lipa and Sting, and writers Kazuo Ishiguro, Michael Morpurgo and Helen Fielding.

Prime Minister Keir Starmer has previously said the government needs to “get the balance right” with copyright and AI while noting that the technology represented “a huge opportunity”.

“They have no right to sell us down the river,” Elton John told the BBC in May, urging Starmer to “wise up” and “see sense.”

According to a study by UK Music last week, two out of three artists and producers fear that AI poses a threat to their careers.

More than nine out of 10 of those surveyed demanded that their image and voice be protected, and called for AI firms to pay for the use of their creations.

 


 
EU moves to delay ‘high-risk’ AI rules, cut cookie banners


By AFP
November 19, 2025


Campaigners drove across Brussels on Wednesday with large billboards calling on EU chief Ursula von der Leyen to stand up to US President Donald Trump and Big Tech - Copyright AFP Joe Klamar

The EU executive proposed rolling back key AI and data privacy rules on Wednesday as part of a push to slash red tape and help Europe’s high-tech sector catch up with global rivals.

The landmark EU tech rules have faced powerful pushback from the US administration under President Donald Trump — but also from businesses and governments at home complaining they risk hampering growth.

Brussels denies bowing to outside pressure, but it has vowed to make businesses’ lives easier in the 27-nation bloc — and on Wednesday it unveiled proposals to loosen both its rules on artificial intelligence and data privacy.

Those include:

– giving companies more leeway to use datasets, including personal data, to train AI models when doing so is “for legitimate interests”

– giving companies extra time — up to 16 months — to apply ‘high-risk’ rules on AI

– reducing the number of cookie banner pop-ups users see – a plan many Europeans will welcome – which Brussels says can be done without putting privacy at risk.

“We have talent, infrastructure, a large internal single market. But our companies, especially our start-ups and small businesses, are often held back by layers of rigid rules,” EU tech chief Henna Virkkunen said in a statement.

After cheering the so-called “Brussels effect” whereby EU laws were seen as influencing jurisdictions around the world, European lawmakers and rights defenders increasingly fear the EU is withdrawing from its role as Big Tech’s watchdog.

Campaigners from different groups including People vs Big Tech drove across Brussels on Wednesday with large billboards calling on EU chief Ursula von der Leyen to stand up to Trump and the tech sector, and defend the bloc’s digital rules.



– Striking a ‘balance’ –



The commission says the plans will help European businesses catch up with American and Chinese rivals — and reduce dependence on foreign tech giants.

For many EU states, the concern is that the focus on regulation has come at the expense of innovation — although Brussels insists it remains committed to protecting European citizens’ rights.

But experts say the EU lags behind the bigger economies for several reasons including its fragmented market and limited access to the financing needed to scale up.

The EU raced to pass its sweeping AI law that entered into force last year, but dozens of Europe’s biggest companies — including Airbus, Lufthansa and Mercedes-Benz — called for a pause on the parts they said risked stifling innovation.

Brussels met them part of the way by agreeing to delay applying provisions on “high-risk” AI — such as models that could endanger safety, health or citizens’ fundamental rights.

With the proposed change on cookie banners, an EU official said the bloc wanted to address “fatigue” at the pop-ups seeking users’ consent for tracking on websites, and “reduce the number of times” the windows appear.

The commission wants users to be able to indicate their consent with one click, and save cookie preferences through settings on browsers and operating systems.

Brussels has insisted European users’ data privacy will be protected.

“It is essential that the European Union acts to deliver on simplification and competitiveness while also maintaining a high level of protection for the fundamental rights of individuals — and this is precisely the balance this package strikes,” EU justice commissioner Michael McGrath said.

Amazon, Microsoft cloud services could face tougher EU rules


By AFP
November 18, 2025




Raziye Akkoc

Amazon and Microsoft cloud services could face stricter EU competition rules after Brussels on Tuesday launched probes to assess their market power.

Brussels had been under pressure to bring the services under the scope of a major law because of the dominance of US cloud providers, which hold around two-thirds of the market in the 27-nation bloc.

The European Commission, the EU’s digital regulator, said it will investigate whether Amazon Web Services (AWS) and Microsoft’s Azure should come under the scope of the Digital Markets Act (DMA).

Despite being the third-largest provider, Google Cloud was not included.

The DMA is part of the European Union’s bolstered legal armoury to make the digital market fairer, setting out a list of dos and don’ts for Big Tech companies it designates as “gatekeepers,” such as Apple.

The twin probes aim to assess whether AWS and Microsoft “should be designated as the gatekeepers on cloud computing,” EU tech chief Henna Virkkunen said at a Berlin summit focused on pushing greater European digital sovereignty.




In a statement the commission said it would analyse whether the two “act as important gateways between businesses and consumers, despite not meeting the DMA gatekeeper thresholds for size, user number and market position”.

EU regulators will seek to conclude the investigations within a year.

Microsoft and AWS insisted the cloud sector was competitive.

“We’re confident that when the European Commission considers the facts, it will recognise what we all see — the cloud computing sector is extremely dynamic, with companies enjoying lots of choice, unprecedented innovation opportunity, and low costs,” an AWS spokesperson said.

“Designating cloud providers as gatekeepers isn’t worth the risks of stifling invention or raising costs for European companies,” the spokesperson added.

“The cloud sector in Europe is innovative, highly competitive and an accelerator for growth across the economy. We stand ready to contribute” to the probe, a Microsoft spokesperson said.

Brussels announced it would also open a third probe to find out whether it needs to update the DMA to make sure it can combat practices that “may limit competitiveness and fairness in the cloud computing sector in the EU”.

– Dominant US cloud –

AWS leads the cloud computing market, followed closely by Microsoft Azure, with Google Cloud in third place.

Brussels defended the decision not to probe Google.

“Our preliminary evidence shows that Google is playing a less important role for now on our market than the two ones that we’re investigating,” EU digital affairs spokesman Thomas Regnier told reporters.

There has also been growing concern after a raft of outages in recent months.

In October Microsoft cloud clients experienced widespread service disruptions. Among them was Alaska Airlines, whose customers were unable to check in.

That came after Amazon cloud troubles last month forced popular services ranging from streaming platforms to messaging apps offline for hours.

Amazon and Microsoft already face stricter rules for their other services including Amazon Marketplace and Microsoft’s LinkedIn platform.

The DMA gives the EU the power to impose fines of up to 10 percent of a company’s total global turnover in the event of any violations.

Trump, Who Incited Jan 6 Insurrection, Wants Sedition Charges for Lawmakers Reminding Troops of Duty to Disobey Illegal Orders

“If you’re threatening Dems for reminding the military that they are obligated to not follow illegal orders, you’re admitting your orders are illegal.”


President Donald Trump delivers a speech in front of US Navy personnel on board the USS George Washington aircraft carrier at the base in Yokosuka, Japan on October 28, 2025.
(Photo by Andrew Caballero-Reynolds/AFP via Getty Images)

Jessica Corbett
Nov 20, 2025
COMMON DREAMS

Nearly five years after inciting an attempted insurrection, President Donald Trump on Thursday called for sedition charges against Democrats in Congress who reminded members of the US military and intelligence services that “you must refuse illegal orders.”

“We know you are under enormous stress and pressure right now,” says Sen. Elissa Slotkin (Mich.), a former Central Intelligence Agency analyst, in the 90-second video circulated on social media Tuesday.

Sen. Mark Kelly (Ariz.), a former Navy captain, notes in the video that “like us, you all swore an oath” to the US Constitution.

Reps. Jason Crow (Colo.), Chris Deluzio (Pa.), Maggie Goodlander (NH), and Chrissy Houlahan (Pa.)—all veterans of the US military and intelligence community—join the senators in calling on service members to stand up to any illegal orders from the Trump administration and “don’t give up the ship.”



Miles Taylor, a former chief of staff for the Department of Homeland Security who anonymously spoke out against Trump in a high-profile op-ed and book during his first term, said that it is “pretty insane that we are living in a moment where a video message like this [is] necessary.”

Also responding to the video on the platform X, Stephen Miller, White House deputy chief of staff for policy and homeland security adviser, claimed that “Democrat lawmakers are now openly calling for insurrection.”

Kelly hit back, citing the January 6, 2021 attack: “I got shot at serving our country in combat, and I was there when your boss sent a violent mob to attack the Capitol. I know the difference between defending our Constitution and an insurrection, even if you don’t.”

Slotkin also responded, saying: “This is the law. Passed down from our Founding Fathers, to ensure our military upholds its oath to the Constitution—not a king. Given you’re directing much of a military policy, you should buff up on the Uniformed Code of Military Justice.”



Trump weighed in on his Truth Social platform just after 9:00 am Thursday, writing: “It’s called SEDITIOUS BEHAVIOR AT THE HIGHEST LEVEL. Each one of these traitors to our Country should be ARRESTED AND PUT ON TRIAL. Their words cannot be allowed to stand—We won’t have a Country anymore!!! An example MUST BE SET.”

“This is really bad, and Dangerous to our Country. Their words cannot be allowed to stand. SEDITIOUS BEHAVIOR FROM TRAITORS!!! LOCK THEM UP???,” Trump continued, linking to the right-wing Washington Examiner‘s coverage and signing both posts “President DJT.”

Just over an hour later, the president added, “SEDITIOUS BEHAVIOR, punishable by DEATH!”

Responding with a lengthy joint statement, the lawmakers behind the video reiterated their commitment to the oaths they took, and said that “what’s most telling is that the president considers it punishable by death for us to restate the law.”

“Our servicemembers should know that we have their backs as they fulfill their oath to the Constitution and obligation to follow only lawful orders,” they added. “Every American must unite and condemn the president’s calls for our murder and political violence. This is a time for moral clarity.”

Congresswoman Pramila Jayapal (D-Wash.)—who has for years faced threats from Trump supporters, including Arizona state Rep. John Gillette (R-30) in September—stressed that the president’s “calls for political violence are completely unacceptable.”



Rep. Ilhan Omar (D-Minn.), another frequent target of right-wing threats, similarly took aim at Trump’s sedition remarks, saying, “None of this is normal.”

Senate Minority Leader Chuck Schumer (D-NY) said on the chamber’s floor Thursday: “Let’s be crystal clear: The president of the United States is calling for the execution of elected officials. This is an outright threat, and it’s deadly serious. We have already seen what happens when Donald Trump tells his followers that his political opponents are enemies of the state.”

“We all remember what January 6th was like. We lived through January 6th. We have lived through the assassinations and attempted assassinations this year. We have members whose families have had to flee their homes,” he continued. “When Donald Trump uses the language of execution and treason, some of his supporters may very well listen. He is lighting a match in a country soaked with political gasoline. Every senator, every representative, every American—regardless of party—should condemn this immediately and without qualification.”

Melanie D’Arrigo, executive director of the Campaign for New York Health, said Thursday: “Trump tried to overthrow our government almost five years ago, and is calling for Dems to be put to death for sedition. If you’re threatening Dems for reminding the military that they are obligated to not follow illegal orders, you’re admitting your orders are illegal.”

The Democrats’ video and Trump’s outburst come as members of Congress and legal experts lambast the Trump administration’s deadly bombings of boats allegedly running drugs in the Caribbean and Pacific Ocean. Critics have emphasized that even if the targeted vessels are transporting illicit substances, the strikes are illegal.



Trump is also under fire for his attacks on immigrants in Democrat-led communities. Kelly and Slotkin, along with Democratic Sens. Tammy Duckworth (Ill.), Richard Blumenthal (Conn.), and Ron Wyden (Ore.), recently introduced the No Troops in Our Streets Act, which would limit the administration’s ability to deploy the National Guard and inject $1 billion in new resources to fight crime across the country.

“Our brave military men and women signed up to defend the Constitution and our rights, not to be used as political props or silence dissent,” said Duckworth, a retired Army lieutenant colonel who has been especially critical of the administration’s operation in the Chicagoland area, including efforts to deploy the National Guard there.

“These un-American, unjustified deployments of troops into our cities do nothing to fight crime—they only serve to intimidate Americans in their own neighborhoods,” she added. “I’m introducing this legislation with my colleagues to stop Trump’s gross misuse of our military and devote more resources toward efforts that would actually help our local law enforcement—which Trump has actually defunded to the tune of $800 million.”

‘How Many More Have to Die?’ Asks Democrat as Another Texas Woman’s Death Blamed on Abortion Ban

“Let’s be very clear: Republicans are killing women,” said one abortion rights advocate. “Democrats need to start calling them murderers loudly and often.”



Pro-abortion rights protesters stand outside of the US Supreme Court in Washington, DC on June 24, 2024.
(Photo by Celal Gunes/Anadolu via Getty Images)

Julia Conley
Nov 20, 2025
COMMON DREAMS


After new reporting detailed the latest known woman who died because doctors would not provide her with abortion care under Texas’ ban, the Democratic lawmaker who authored the Women’s Health Protection Act condemned Republicans in Congress for refusing to “protect women’s basic freedom to survive their own pregnancies.”

“It would take only six Republicans in the House to join with us and pass this vital legislation to restore bodily autonomy to every person in this country, regardless of their state or zip code,” said Rep. Judy Chu (D-Calif.), whose bill would create a new legal protection for the right to provide and obtain abortion care.


Chu’s call came as ProPublica reported on the death of Tierra Walker, a 37-year-old pregnant mother of a teenage son who asked doctors to terminate her pregnancy in October 2024 after she experienced seizures and feared she would develop preeclampsia, a life-threatening complication that had led to the stillbirth of her twins a few years earlier.

“Wouldn’t you think it would be better for me to not have the baby?” Walker asked doctors at Methodist Hospital Northeast in San Antonio.

The medical staff assured her there was nothing wrong with her pregnancy and blamed her symptoms on pre-existing conditions including diabetes and high blood pressure—but more than a dozen OB/GYNs reviewed her case and told ProPublica doctors had not followed standard medical practice, which would have been to advise Walker early on in the pregnancy that her health conditions could lead to complications and “to offer termination at any point if she wanted.”

Had doctors done so, all of the medical experts said, Walker would not have died at 20 weeks pregnant on her 14-year-old son’s birthday last December.

“Her death was preventable, and it was caused by a law written by Republicans to control women’s bodies, no matter the consequences. This is the disgraceful reality of Republican abortion bans that criminalize care and sacrifice women’s lives,” said Chu.

Walker found out she was five weeks pregnant in September 2024 after experiencing a seizure. Doctors also noted she had “hypertension at levels so high that it reduces circulation to major organs and can cause a heart attack or stroke,” which put her at increased risk for preeclampsia.

But instead of warning Walker of the risks, the medical staff sent her home, where she continued having seizures through her first trimester and her fiance and aunt took turns watching over her.

Texas law prohibits medical providers from “aiding and abetting” abortion care, with doctors facing the loss of their medical license and up to 99 years in prison if they provide an abortion. Abortions are ostensibly permitted in cases when a pregnant person’s life or major body function is at risk—but Walker’s case demonstrates how medical exceptions within abortion bans often do nothing to ensure a dangerous pregnancy can be terminated to protect a woman’s life.

At least one of the more than 90 doctors—including 21 OB/GYNs—who became involved in Walker’s care last year, when she was repeatedly hospitalized, acknowledged in a case file that she was at “high risk of clinical deterioration and/or death.”

But none of them ever talked to her about terminating the pregnancy.

As Walker’s pregnancy progressed, she developed a blood clot in her leg that didn’t respond to anticoagulation medicine, and her seizures and high blood pressure remained uncontrolled.

She was diagnosed with preeclampsia at 20 weeks pregnant on December 27—but doctors did not even label her condition as “severe” in her files, let alone provide her with the standard care for the condition at that point in pregnancy, which is an abortion.

Instead, they gave her more blood pressure medication and sent her home, where her son, JJ, found her dead days later.

Author and abortion rights advocate Jessica Valenti said Republicans would likely respond to the news of Walker’s death—as they have in the cases of other women who have died after being unable to get abortions in states that ban them—with claims that doctors were legally allowed to “intervene” or “treat” Walker.

“They won’t say she could have had an abortion because they don’t believe in life-saving abortions,” she said.

This year, in the months after Walker’s death and following outrage over numerous similar cases, Texas lawmakers passed a law that Republicans claim would make it easier for women to obtain abortions in cases where they face life-threatening conditions in pregnancy; their conditions no longer need to put them in “imminent” danger for them to obtain care.

But doctors told ProPublica that hospitals in Texas are still likely to avoid providing abortions in cases like Walker’s, even under the new statute.

“How many more women have to needlessly suffer?” asked Chu. “How many more have to die? How many more children have to grow up without their mother? How many more parents have to lose their adult daughters before Republicans in Congress finally do what’s right and protect women’s basic freedom to survive their own pregnancies?”

“This doesn’t have to be our reality,” she added.