‘Wake the F Up. This Is Going to Destroy Us’: Senator Sounds Alarm After AI Cyberattack
“The barriers to performing sophisticated cyberattacks have dropped substantially—and we predict that they’ll continue to do so,” said AI company Anthropic.

Senator Chris Murphy speaks at the rally to Say NO to Tax Breaks for Billionaires & Corporations at US Capitol on April 10, 2025 in Washington, DC.
(Photo by Jemal Countess/Getty Images for Fair Share America)
Brad Reed
Nov 14, 2025
Common Dreams
A Democratic senator on Thursday sounded the alarm on the dangers of unregulated artificial intelligence after AI company Anthropic revealed it had thwarted what it described as “the first documented case of a large-scale cyberattack executed without substantial human intervention.”
According to Anthropic, it is highly likely that the attack was carried out by a Chinese state-sponsored group, and it targeted “large tech companies, financial institutions, chemical manufacturing companies, and government agencies.”

After a lengthy technical explanation describing how the attack occurred and how it was ultimately thwarted, Anthropic then discussed the security implications for AI that can execute mass cyberattacks with minimal direction from humans.
“The barriers to performing sophisticated cyberattacks have dropped substantially—and we predict that they’ll continue to do so,” the firm said. “With the correct setup, threat actors can now use agentic AI systems for extended periods to do the work of entire teams of experienced hackers.”
Anthropic went on to say that hackers could now use AI to carry out tasks such as “analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator,” which could open the door to “less experienced and resourced groups” mounting some of the most sophisticated attack operations.
The company concluded by warning that “the techniques described above will doubtless be used by many more attackers—which makes industry threat sharing, improved detection methods, and stronger safety controls all the more critical.”
This cybersecurity strategy wasn’t sufficient for Sen. Chris Murphy (D-Conn.), who said government intervention would be needed to mitigate the potential harms caused by AI.
“Guys wake the f up,” he wrote in a social media post. “This is going to destroy us—sooner than we think—if we don’t make AI regulation a national priority tomorrow.”
Democratic California state Sen. Scott Wiener noted that many big tech firms have consistently fought against government oversight of AI despite threats that are growing stronger by the day.
“For two years, we advanced legislation to require large AI labs to evaluate their models for catastrophic risk or at least disclose their safety practices,” he explained. “We got it done, but industry (not Anthropic) continues to push for [a] federal ban on state AI rules, with no federal substitute.”
Some researchers who spoke with Ars Technica, however, expressed skepticism that the AI-driven hack was as sophisticated as Anthropic claimed, arguing that current AI technology is not yet capable of executing an operation of that caliber.
Dan Tentler, executive founder of Phobos Group, told the publication that the efficiency with which the hackers purportedly got the AI to carry out their commands was wildly different from his own experience with the technology.
“I continue to refuse to believe that attackers are somehow able to get these models to jump through hoops that nobody else can,” he said. “Why do the models give these attackers what they want 90% of the time but the rest of us have to deal with ass-kissing, stonewalling, and acid trips?”
Brussels denies pressure from the US administration influenced its push to ‘simplify’ the bloc’s digital rules
By AFP
November 15, 2025

Raziye Akkoc
The European Union is set next week to kickstart a rollback of landmark rules on artificial intelligence and data protection that face powerful pushback on both sides of the Atlantic.
Part of a bid to slash red tape for European businesses struggling against US and Chinese rivals, the move is drawing accusations that Brussels is putting competitiveness ahead of citizens’ privacy and protection.
Brussels denies that pressure from the US administration influenced its push to “simplify” the bloc’s digital rules, which have drawn the wrath of President Donald Trump and American tech giants.
But the European Commission says it has heard the concerns of EU firms and wants to make it easier for them to access users’ data for AI development — a move critics attack as a threat to privacy.
One planned change, however, could unite many Europeans in relief: the EU wants to get rid of the pesky cookie banners that seek users’ consent for tracking on websites.
According to EU officials and draft documents seen by AFP, which could change before the November 19 announcement, the European Commission will propose:
— a one-year pause in the implementation of parts of its AI law
— overhauling its flagship data protection rules, which privacy defenders say will make it easier for US Big Tech to “suck up Europeans’ personal data”.
The bloc’s cornerstone General Data Protection Regulation (GDPR) enshrined users’ privacy from 2018 and influenced standards around the world.
The EU says it is only proposing technical changes to streamline the rules, but rights activists and EU lawmakers paint a different picture.
– ‘Biggest rollback’ –
The EU executive proposes to narrow the definition of personal data, and allow companies to process such data to train AI models “for purposes of a legitimate interest”, a draft document shows.
Reaction to the leaks has been swift — and strong.
“Unless the European Commission changes course, this would be the biggest rollback of digital fundamental rights in EU history,” 127 groups, including civil society organisations and trade unions, wrote in a letter on Thursday.
Online privacy activist Max Schrems warned the proposals “would be a massive downgrading of Europeans’ privacy” if they stay the same.
An EU official told AFP that Brussels is also expected to propose a one-year delay on implementing many provisions on high-risk AI (for example, models that can pose dangers to safety, health or citizens’ fundamental rights).
Instead of taking effect next year, they would apply from 2027.
This move comes after heavy pressure from European businesses and US Big Tech.
Dozens of Europe’s biggest companies, including France’s Airbus and Germany’s Lufthansa and Mercedes-Benz, called in July for a pause on the AI law, which they warn risks stifling innovation.
– More battles ahead –
Commission president Ursula von der Leyen faces a battle ahead as the changes will need the approval of both the EU parliament and member states.
Her conservative camp’s main coalition allies have raised the alarm, with the socialists saying they oppose any delay to the AI law, and the centrists warning they would stand firm against any changes that undermine privacy.
Noyb, a campaign group founded by Schrems, published a scathing takedown of the EU’s plans for the GDPR and what they entail.
The EU has pushed back against claims that Brussels will reduce privacy.
“I can confirm 100 percent that the objective… is not to lower the high privacy standards we have for our citizens,” EU spokesman for digital affairs Thomas Regnier said.
But there are fears that more changes to digital rules are on the way.
– Simplification, not deregulation –
The proposals are part of the EU executive’s so-called simplification packages to remove what it describes as administrative burdens.
Brussels rejects any influence from Trump — despite sustained pressure since the first weeks of the new US administration, when Vice President JD Vance railed against the “excessive regulation” of AI.
This “started before the mandate of the president of the US”, chief commission spokeswoman Paula Pinho said this week.
Calls for changes to AI and data rules have been growing louder in Europe.
A major report last year by Italian ex-premier Mario Draghi also warned that data rules could hamper European businesses’ AI innovation.
Musk’s Grokipedia leans on ‘questionable’ sources, study says
By AFP
November 14, 2025

Elon Musk has accused Wikipedia of being biased against right-wing ideas - Copyright AFP Lionel BONAVENTURE
Anuj CHOPRA
Elon Musk’s Grokipedia carries thousands of citations to “questionable” and “problematic” sources, US researchers said Friday, raising doubts about the reliability of the AI-powered encyclopedia as an information tool.
Musk’s company xAI launched Grokipedia last month to compete with Wikipedia — a crowdsourced information repository authored by humans that the billionaire and others on the American right have repeatedly accused of ideological bias.
“It is clear that sourcing guardrails have largely been lifted on Grokipedia,” Cornell Tech researchers Harold Triedman and Alexios Mantzarlis wrote in a report seen by AFP.
“This results in the inclusion of questionable sources, and an overall higher prevalence of potentially problematic sources.”
The study, which scraped hundreds of thousands of Grokipedia articles, said the trend was particularly notable in entries pertaining to elected officials and controversial political topics.
Grokipedia’s entry for “Clinton body count” — a widely debunked conspiracy theory that links the deaths of multiple people to former president Bill Clinton and his wife Hillary — cites InfoWars, a far-right website notorious for peddling misinformation.
Other Grokipedia articles cite American and Indian right-wing media outlets, Chinese and Iranian state media, anti-immigration, antisemitic or anti-Muslim sites, and portals accused of promoting pseudoscience and conspiracy theories, the report said.
“Grokipedia cites these sources without qualifying their reliability,” it said.
The study found that Grokipedia articles often “contain exactly identical copies of text” from Wikipedia, the site it aims to outshine.
It said Grokipedia articles not attributed to Wikipedia are 3.2 times more likely than those on the rival platform to cite sources deemed “generally unreliable” by the English Wikipedia community.
They are also 13 times more likely to include a “blacklisted” source which is blocked by Wikipedia, it added.
– ‘Trustworthiness’ –
AFP’s request to xAI for comment generated this auto response: “Legacy Media Lies.”
Musk — the world’s richest person and owner of social media platform X, who poured hundreds of millions of dollars into US President Donald Trump’s election campaign — has said that Grokipedia’s goal is “the truth, the whole truth and nothing but the truth.”
On Thursday, Musk said he plans to rebrand Grokipedia as “Encyclopedia Galactica” when it is “good enough (long way to go).”
“Join @xAI to help build the sci-fi version of the Library of Alexandria!” Musk wrote on X.
Musk and the US Republican Party have frequently accused Wikipedia of being biased against right-wing ideas. Last year, Musk urged his more than 200 million followers on X to stop donating to Wikipedia, dubbing the site “Wokepedia.”
In a recent interview with the BBC Science Focus podcast, Wikipedia founder Jimmy Wales rejected claims it has a left-wing bias as “factually incorrect,” while acknowledging there were areas for improvement among its volunteer community.
“Unlike Grokipedia, which relies on rapid AI-generated content with limited transparency and oversight, Wikipedia’s processes are open to public review and rigorously document the sources behind every article,” Selena Deckelmann, chief product and technology officer at the Wikimedia Foundation, told AFP.
“It is precisely this deliberate openness and community model that upholds the neutrality and trustworthiness essential for a global encyclopedia: no single individual, company, or agenda can exert influence over the work.”