‘Easiest scam in the world’: Musicians sound alarm over AI impersonators
By AFP
December 15, 2025
UK folk singer Emily Portman found online a counterfeit, probably AI-generated album purporting to be created by her - Copyright AFP Oli SCARFF
Clara LALANNE
Does the latest track by your favourite singer sound slightly off? You may be right. Fraudulent AI-generated tracks are increasingly appearing in artists’ own profiles on streaming platforms, presented as their original work.
British folk musician Emily Portman got a shock in July when she received a message from a fan congratulating her on her new album — even though she hadn’t released one since 2022.
That’s when she discovered “Orca” on numerous streaming platforms including Spotify and Apple Music.
The titles of the songs resembled something she might have created herself, but “very quickly I recognised it was AI-produced music”, she said.
According to the independent artist, the AI behind “Orca” was “trained” on her previous albums, mimicking her folk-inspired instrumentation and lyrics.
“I just felt really uncomfortable and disturbed that people could be going to my profile … and then think ‘wow, what’s this?’,” she said.
Portman said people were fooled despite the “pristine perfection” of the vocals and “vacuous lyrics”.
The musician couldn’t identify the perpetrators of the fraud, but believes she knows how they operate.
Scammers claiming to be artists approach distribution companies, which then upload the music online without any identity checks, she said.
– ‘Easiest scam in the world’ –
On the other side of the world, Australian musician Paul Bender also discovered early this year that four “bizarrely bad” AI-generated songs had been added to the profiles of his band, The Sweet Enoughs.
He said the streaming industry hadn’t kept pace with security measures such as two-factor authentication now widely used in other sectors.
“You just say: ‘Yes that’s me’ … and upload a song to whoever’s profile,” he said.
“It’s the easiest scam in the world.”
After an Instagram discussion, Bender, who is also the bassist for the Grammy-nominated band Hiatus Kaiyote, received hundreds of messages from artists and music fans.
He compiled a list of numerous suspect albums, particularly in the catalogues of deceased artists, such as the experimental Scottish musician Sophie, who died in 2021.
Around 24,000 people signed a petition Bender launched on change.org, including rapper and singer-songwriter Anderson .Paak and singer Willow Smith, urging platforms to step up security.
– Virtually undetectable –
AI-powered music generators such as Suno and Udio have become increasingly refined.
Almost all listeners are now unable to distinguish AI-generated tracks from the real thing, according to an Ipsos study for the French platform Deezer in November.
This has driven success for bands solely created by AI, such as The Velvet Sundown, which has garnered one million subscribers on Spotify, but also led to a rise in fraudulent activity.
“The reason that music was uploaded under her (Portman’s) name was essentially to make sure that they could gain royalties from (it),” said Dougie Brown of the industry representative UK Music.
Revenues on the platforms are generally low, but add up thanks to bots that multiply listening streams tenfold, he said.
Portman and Bender, who have not taken legal action, asked the various platforms to remove the offending tracks — a process that took between 24 hours and eight weeks.
Some countries and US states, notably California, have legislation to protect artists against imitation.
In others, including the United Kingdom, limited copyright leaves artists vulnerable, said Philip Morris of the Musicians’ Union.
He said Portman’s case showed how AI-generated music was now so sophisticated it could actually be used “to impersonate the original work of a real artist”.
Spotify, which has been accused of a lack of transparency, recently announced measures to make its platform more reliable.
Like its competitor Apple Music, it says it is working upstream with distributors to better detect fraud.
“Across the music industry, AI is accelerating existing problems like spam, fraud, and deceptive content,” it said.
Despite her concerns about potential UK legislation that artists say will damage their interests, and fraudsters making a mockery of the “beauty of the creative process”, Portman is working on a new album.
“The album that I’m making, it’s costing a lot of money … but for me it’s all about those human connections, creativity and teaming up with other amazing creatives,” she said.
The US Economy Is Becoming Highly Dependent on a New and Untested AI Industry
Over the last few years, artificial intelligence (AI) has become extremely popular in Silicon Valley and is widely regarded as the most transformative technology of the 21st century. It is already reshaping sectors like education, transportation, finance, health care, media, and telecommunications. Indeed, it is estimated that about 60 percent of jobs in advanced economies may be affected by AI, with knock-on effects for economic growth, employment, and wages. As a result, investment in AI is booming across industries, echoing the late-1990s dot-com era, with investors pouring billions into AI in the hope of a big payday. Nearly $1.6 trillion has been put into this technology since 2013, and Big Tech companies are expected to pour over $400 billion into AI efforts before the year ends, with even bigger spending planned for 2026.
Unsurprisingly, there are concerns about an AI bubble, and there are indeed striking similarities between the AI market today and the dot-com bubble of the 1990s, which imploded in 2000, causing massive losses for investors and the collapse of major companies before the U.S. economy entered a recession in 2001. If the AI bubble were to burst, not only would virtually every company be affected, but the entire economy could collapse like a house of cards. So is the AI boom a looming bubble? What causes bubbles? How do they work? Professor Gerald Epstein, a world-leading authority on banking, finance, and financial crises, addresses these questions in the exclusive interview for Truthout that follows. This interview has been lightly edited for clarity.
C.J. Polychroniou: There are growing concerns about an AI bubble and what may happen if it bursts. In your own view, are we in an AI speculative bubble, and what are the real threats behind a bubble?
Gerald Epstein: Concerns about AI are certainly understandable. But to be clear on what the true threats (and possible benefits) are and what to do about them, we need to distinguish among the short, medium, and long term.
In the short term, the potential problem is that our current economic growth and performance have become far too tied up with the AI boom in capital expenditure. Capital expenditure on new manufacturing plants, equipment, and technology is a major driver both of our current economy, including job creation, and of our longer-term growth in productivity and the economy overall. In recent months, AI-related expenditures have accounted for a significant share of our capital expenditures, most of it going toward the building of data centers (more on this in a moment). So, the short-term state of our economy has become highly dependent on one industry, and a new and untested one at that. If that industry were to severely falter, it could lead to a significant short-term decline in the economy overall and perhaps even cause a recession.
To make the short-term risks even greater, as is often the case with a building frenzy such as AI data centers, a financial frenzy forms around it and makes it riskier for the economy. During the railway building boom in the U.S. in the 19th century, various financial scandals spread, leading to financial crises and bankruptcies. Similarly, with the AI boom, various speculative frenzies, including massive increases in stock values of NVIDIA, the AI chips maker, and speculative lending and borrowing, threaten to destabilize the economy (more on this below).
The medium-term problems include the environmental and industrial problems associated with a massive investment (and perhaps overinvestment) in data centers. These data centers are huge computer server farms that require enormous amounts of energy, water to cool the computers, and land. Much of the electricity for these centers will come from fossil fuels, with obviously disastrous effects on climate change, and the centers will draw on scarce water in many states. But they are not only taking water and energy from other uses and groups in society; they are also bidding away all kinds of other goods and inputs from other industries and uses. Just as they are raising the cost of electricity for small businesses, households, and farmers, they are bidding up the prices of computer chips, steel, and other inputs into AI computers. In other words, if this growth continues, they will transform the whole supply chain and industrial structure.
In the medium to long term, as AI increasingly is used in the workplace, it is likely to displace workers, leading to more unemployment, eliminating entry-level jobs, and knocking out the lower rungs of job ladders. It is too early to see how this will all play out, but it appears that younger people who need entry-level jobs might initially get most impacted.
Meanwhile, our landscape will be littered with data centers.
What are the common traits in financial bubbles? How do they work?
Bubbles come in many varieties. First, it is important to distinguish between overbuilding or overinvestment in the “real economy” and “financial bubbles.” The tricky part, though, is that these are often intertwined. In terms of an overbuilding frenzy, the massive AI investment in data centers is often compared to the so-called “dot-com bubble” of the late 1990s. In that case, internet companies such as WorldCom grew rapidly, expanding their internet infrastructure and capacity (the real economy), while at the same time there was a massive run-up in their stock prices. The stock prices eventually collapsed and companies like WorldCom went out of business, but the internet cables and other capacity remained.
The problem with building too much capacity in the internet or AI is that you might not sell enough of the output to earn the expected return on your investment. And if you have borrowed money in order to build the data centers, then you have to scramble to get the money to pay off your debts, possibly putting stress on the financial system.
Of course, “financial bubbles” vary quite a bit, but, building on ideas developed by Hyman Minsky and on hundreds of years of economic history, Charles Kindleberger and his colleagues developed a schema that helps us understand them. They identified the following sequence: (1) Displacement: some new idea or project catches on, often thanks to publicity spread by the press, inside players, or, nowadays, the internet. (2) Boom: more wealth is invested in the asset, which drives its price higher as more investors catch on. (3) Euphoria: investors check their caution and rational calculation at the door, driven by FOMO (fear of missing out) and a herding instinct to follow the crowd. (4) Profit taking: some investors come to their senses and realize that the returns on these assets no longer justify their high prices, so they sell to take profits; others see the “overtrading” and begin to bet against the asset, as with “The Big Short” during the great financial crisis of 2008-2009. (5) Panic: as prices stall and fall, investors panic and rush for the exits, trying to sell their assets as rapidly as possible in an attempt to rescue at least some of their investments. At this point, the asset prices collapse, perhaps all the way to zero.
The overall size, pace, and destructiveness of these bubbles, both on the way up and on the way down, are much greater if they are fueled by debt, or what economists call “leverage.” If you pay $100 for an asset but borrow $80 to do it (so you put only $20 of your own wealth into it), then a drop of just 50 percent in the asset’s price (to $50) means you not only lose your whole investment but also have to come up with $30 somewhere else to repay your debt.
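To make that arithmetic concrete, here is a minimal sketch in Python; the function name and layout are mine, and the only figures taken from the interview are the $100 purchase, $80 loan, and 50 percent price drop in the example above.

```python
def leveraged_outcome(price_paid: float, borrowed: float, new_price: float):
    """Work out what a leveraged buyer is left with after a price move."""
    equity = price_paid - borrowed       # the buyer's own money at stake
    remaining = new_price - borrowed     # sale proceeds left after repaying the debt
    shortfall = max(0.0, -remaining)     # extra cash needed to clear the debt
    equity_left = max(0.0, remaining)    # anything left over belongs to the buyer
    return equity, equity_left, shortfall

# The example from the interview: buy at $100 with $80 borrowed, price falls 50% to $50.
equity, equity_left, shortfall = leveraged_outcome(100.0, 80.0, 50.0)
print(equity, equity_left, shortfall)  # 20.0 0.0 30.0 -> the $20 stake is wiped out and $30 is still owed
```

An unleveraged buyer facing the same 50 percent fall would simply be left with half of their stake; with leverage, the same move wipes out the stake and leaves a debt behind.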
Fears of an AI bubble have spread into credit markets. Can you talk a bit about how AI is transforming the finance industry and what the connection is between AI and debt?
Increasingly, AI companies are borrowing money (issuing debt) to finance their data centers, and even to buy stock in AI companies. By some estimates, they are poised to borrow $1 trillion over the next several years. Some of this money is being borrowed by major companies like Amazon, but smaller companies, hoping to be part of the AI supply chain, are also borrowing significant sums. They are even using some of the techniques and financial products that were used in the run-up to the great financial crisis, including asset-backed securities held off their balance sheets in “special purpose vehicles” financed by short-term borrowing. These are designed to hide risk and use leverage to increase returns, but they are also very risky, and they create interconnections between the AI and data center industries and the wider financial system.
Despite warnings and fears of an AI bubble, the market expectation is that AI stocks will continue to surge. Is that rational behavior? Or is it simply capitalist logic at work?
Well, as the Minsky-Kindleberger cycle suggests, market expectations can change rapidly and dramatically. Also, as this theory of bubble cycles suggests, investors suspend their “rationality” when euphoria takes over. As increasing concerns about an “AI bubble” take root, and as some AI firms begin having trouble refinancing their debts because banks and other financiers get cold feet, it is reasonable to expect a “correction.” How big a “correction,” and how much spillover there is from it, will largely depend on the amount of leverage in the system and on how much risk is hidden.
If the AI bubble pops, could it crash the U.S. economy and cause a global recession?
The damage from the “popping” of an AI bubble would depend not only on the amount of leverage and hidden interconnectedness to other firms and sectors, but also on whether the government will bail out the AI sector.
Already, AI promoters are suggesting that they might need “support” (i.e., a bailout) from the government if confidence in AI’s future falters. They say the government would need to bail out the industry to prevent China from winning the AI race. And, despite the speciousness of this argument, they might just get their bailout. This is due, in no small part, to the connections of Donald Trump’s supporters (and even some cabinet members) in the AI business. Take, for example, Howard Lutnick, Trump’s commerce secretary. According to The New York Times, Lutnick’s family promotes foreign investment in data centers that they then broker, help build, and cash in on. This is just one example of the interconnections between Trump world and AI.
So if there is a crash in AI investment, yes, it would put downward pressure on economic activity. But that could be easily managed by more government investment in housing, schools, education, medical research, green energy, and health care. If by some miracle, that were the response, then an AI bubble burst would be a big blessing in not so big a disguise.
How Often Do AI’s Lie and Censor?
It is a question of when, not if...
Kevin McKernan posted a screenshot on X today that just blew me away.
Here is a screenshot of the query Kevin made to Grok, which Grok then stated it was not allowed to answer.
Basically, Kevin asked a technical question related to the mRNA vaccines, and Grok said it couldn’t answer the question, as it “contains material related to restricted subject matter.”
Now, Kevin did manage to get the AI to answer the question by changing his wording somewhat, but Grok’s answer came with lots of caveats. So this all just seemed surreal to me. And after all this, did the AI learn anything from its discussions with Kevin?
Well, I redid the query myself, using Kevin’s exact words, and yes, this is precisely what Grok wrote in response (shareable link here):
I then went on to query Grok about censorship, which it denies doing, stating that this answer was just an anomaly – “an isolated instance”. However, it took me going around and around to get it to even admit that.
“The refusal you encountered (‘I’m sorry, I cannot assist… restricted subject matter’) appears to be an isolated instance, possibly triggered by a temporary safety filter, specific phrasing in the prompt, or an edge-case glitch.”
I then asked if it lied. It also denied lying or obfuscating.
Grok asserts its original answer was just an “anomalous trigger” – ok then…
Interesting that.
Now, I ran the same search query through ChatGPT (Pro), and there was no hesitation, no moralizing, and no refusal. It answered the question in its entirety.
The Perplexity AI also answered the question.
Now, I use several chatbots, and it always amazes me how one will resort to moralizing, or to citing mainstream media over all other sources.
ChatGPT used to moralize on anything having to do with race, society, and governance. But over time, it has improved (that model is actually more trainable than Grok, in that it pings me frequently about how I like information presented and in what format, and then modifies its responses).
It has never given me a response such as Grok’s above.
All of the AIs that I queried denied lying or obfuscating. Yet many studies have shown that they do, particularly when it comes to health information.
A 2025 study found that leading AI models like GPT-4o, Gemini 1.5 Pro, Llama 3.2-90B Vision, Grok Beta, and Claude 3.5 Sonnet can easily be set up to produce false yet convincing health information, complete with fake citations from reputable journals. Interestingly, Claude stood out by refusing to generate inaccurate answers far more often than the others, which shows how effective stronger safeguards can be.
Of the 100 health queries posed across the 5 customized LLM API chatbots, 88 (88%) responses were health disinformation. Four of the 5 chatbots (GPT-4o, Gemini 1.5 Pro, Llama 3.2-90B Vision, and Grok Beta) generated disinformation in 100% (20 of 20) of their responses, whereas Claude 3.5 Sonnet responded with disinformation in 40% (8 of 20). The disinformation included claimed vaccine–autism links, HIV being airborne, cancer-curing diets, sunscreen risks, genetically modified organism conspiracies, attention deficit–hyperactivity disorder and depression myths, garlic replacing antibiotics, and 5G causing infertility. Exploratory analyses further showed that the OpenAI GPT Store could currently be instructed to generate similar disinformation. Overall, LLM APIs and the OpenAI GPT Store were shown to be vulnerable to malicious system-level instructions to covertly create health disinformation chatbots. These findings highlight the urgent need for robust output screening safeguards to ensure public health safety in an era of rapidly evolving technologies (Annals of Internal Medicine).
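As a quick sanity check on those figures, here is a minimal sketch in Python (the dictionary and variable names are mine, not from the study) showing how the 88 percent aggregate follows from the per-chatbot counts quoted above.

```python
# Disinformation responses out of 20 health queries per chatbot, as quoted from the study.
disinfo_counts = {
    "GPT-4o": 20,
    "Gemini 1.5 Pro": 20,
    "Llama 3.2-90B Vision": 20,
    "Grok Beta": 20,
    "Claude 3.5 Sonnet": 8,
}
total_queries = 20 * len(disinfo_counts)       # 100 queries in total
total_disinfo = sum(disinfo_counts.values())   # 88 disinformation responses
print(f"{total_disinfo}/{total_queries} = {total_disinfo / total_queries:.0%}")  # prints "88/100 = 88%"
```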
OpenAI’s research on “in-context scheming” reveals that models can conceal their true intentions while appearing cooperative, which could pose risks in critical systems (ref).
Yet we still have no external verification process to determine which AI chatbots are more reliable or more truthful.
All I can write is this: if you use AIs, and even if you don’t, don’t trust and do verify.
So, even though studies and researchers have documented that AI chatbots routinely lie, obfuscate, and can’t be trusted, none of the AIs I asked would admit to any of it. Which, of course, is a lie…