Friday, September 19, 2025

BAD AI

Top music body says AI firms guilty of ‘wilful’ copyright theft


By AFP
September 17, 2025


A 2024 study by the International Confederation of Societies of Authors and Composers forecast that artists could see their incomes shrink by more than 20 percent in the next four years as the market for AI-composed music grows – Copyright AFP/File Ina FASSBENDER
Fanny LATTACH

AI companies have sucked up the world’s entire music catalogue and are guilty of “wilful, commercial-scale copyright infringement”, a major music industry group told AFP.

“The world’s largest tech companies as well as AI-specific companies, such as OpenAI, Suno, and Udio, Mistral, etc. are engaged in the largest copyright infringement exercise that has been seen,” John Phelan, director general of the International Confederation of Music Publishers (ICMP), told AFP.

For nearly two years, the Brussels-based body, which brings together major music publishers and other music industry professionals, investigated how generative artificial intelligence (AI) companies used material to enrich their services.

The ICMP is one of a number of industry bodies, spanning news media and publishing, that have targeted the booming artificial intelligence sector over its use of content without paying royalties.

AI music generators such as Suno and Udio can produce tracks with voices, melodies and musical styles that echo those of original artists such as the Beatles, Mariah Carey, Depeche Mode, or the Beach Boys.

The Recording Industry Association of America, a US trade group, filed a lawsuit in June 2024 against both companies.

“What is legal or illegal is how the technologies are used. That means the corporate decisions made by the chief executives of companies matter immensely and should comply with the law,” Phelan told AFP.

“What we see is they are engaged in wilful, commercial-scale copyright infringement.”

One exception, Phelan said, is Eleven Music, a provider of AI-generated music, which signed a deal with the Kobalt record label in August.

Contacted by AFP, OpenAI declined to comment. Google, Mistral, Suno and Udio did not respond.

Tech giants often invoke “fair use”, a copyright exception that allows, under certain circumstances, the use of a work without permission.

Research by the ICMP, first published in music outlet Billboard on September 9, claimed that AI companies had engaged in widespread “scraping”, a practice that uses programmes known as “crawlers”, which explore the internet for content.

“We believe they are doing so from licensed services such as YouTube (owned by Google) and other digital sources,” including music platforms, the group added.

Lyrics can be harvested to feed some models, which then use them for inspiration or reproduce them without permission, according to the ICMP.
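To make the mechanics concrete: a “crawler” of the kind the ICMP describes is, at its simplest, a program that fetches a page, extracts its links, and follows them. The sketch below is a minimal, purely illustrative version in Python, assuming the widely used requests library; the start URL, page limit, and politeness delay are placeholders, and it says nothing about how any particular AI company actually gathers data.

```python
# Minimal illustrative web crawler: fetch a page, collect its links,
# and follow them breadth-first. For illustration only -- the start URL,
# page limit, and delay are assumptions, not details from the ICMP research.
import time
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

import requests


class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, max_pages=10, delay=1.0):
    """Breadth-first crawl returning {url: page_html} for visited pages."""
    seen, pages = set(), {}
    queue = deque([start_url])
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        pages[url] = resp.text
        parser = LinkExtractor()
        parser.feed(resp.text)
        for href in parser.links:
            queue.append(urljoin(url, href))
        time.sleep(delay)  # be polite; real crawlers also honour robots.txt
    return pages


if __name__ == "__main__":
    # Placeholder URL -- substitute a site you are permitted to crawl.
    results = crawl("https://example.com", max_pages=3)
    print(list(results))
```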

In response, rights holders are calling for tougher regulation, notably through the European Union’s Artificial Intelligence Act, to ensure transparency about the data used.

“It is essential to understand the scale of the threat facing authors, composers and publishers,” warned Juliette Metz, president of the French music publishers’ association and also an ICMP member.

“There can be no use of copyright-protected music without a licence,” she said.

In the United States, AI start-up Anthropic, creator of Claude, announced on September 6 that it had agreed to pay at least $1.5 billion into a compensation fund for authors, rights holders and publishers who sued the firm for illegally downloading millions of books.

The three US-based music majors — Universal, Warner and Sony — have entered into negotiations with Suno and Udio, aiming for a licensing deal.

Music generated entirely by AI is already seeping onto streaming platforms.

“Velvet Sundown”, a 1970s-style fake rock band, as well as the country music creations “Aventhis” and “The Devil Inside”, have racked up millions of plays on streaming giant Spotify.

AI-generated music accounts for 28 percent of the content uploaded daily to Deezer, the French music platform, which has reported “a surge” in such uploads over the past year.

It has an AI-music detection tool that is able to identify songs generated using models such as Suno and Udio.
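Deezer has not published how its detector works, so the following is only a schematic illustration: AI-music detection is often framed as a binary classifier over audio features, and the sketch below trains such a classifier on synthetic placeholder features using scikit-learn. Nothing in it reflects Deezer’s actual tool or data.

```python
# Illustrative sketch only: a binary classifier separating "AI-generated"
# from "human-made" tracks based on pre-computed audio features.
# The features and data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each track is summarised by 20 spectral/temporal features.
n_tracks, n_features = 500, 20
X_human = rng.normal(loc=0.0, scale=1.0, size=(n_tracks, n_features))
X_ai = rng.normal(loc=0.5, scale=1.0, size=(n_tracks, n_features))  # shifted distribution
X = np.vstack([X_human, X_ai])
y = np.array([0] * n_tracks + [1] * n_tracks)  # 0 = human, 1 = AI-generated

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```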

A major study in December last year by the International Confederation of Societies of Authors and Composers (CISAC), which represents more than five million creators worldwide, warned about the danger of AI-generated music.

It forecast that artists could see their incomes shrink by more than 20 percent in the next four years as the market for AI-composed music grows.

Hollywood giants sue Chinese AI firm over copyright infringement


By AFP
September 16, 2025


Warner Bros. is among top Hollywood studios that are suing MiniMax, a Chinese AI company, for alleged copyright infringement. – Copyright GETTY IMAGES NORTH AMERICA/AFP MARIO TAMA

Top Hollywood studios filed a federal lawsuit Monday against Chinese artificial intelligence company MiniMax, alleging massive copyright infringement.

Disney, Warner Bros. Discovery, and Universal Pictures accuse MiniMax of building what they call a “bootlegging business model” that systematically copies their most valuable copyrighted characters to train its AI system, then profits by generating unauthorized videos featuring iconic figures like Spider-Man, Batman, and the Minions.

The lawsuit marks the first time major US entertainment companies have targeted a Chinese AI company and follows a similar lawsuit in June against California-based AI company Midjourney over copyright infringement.

“MiniMax operates Hailuo AI, a Chinese artificial intelligence image and video generating service that pirates and plunders Plaintiffs’ copyrighted works on a massive scale,” states the complaint filed in Los Angeles federal court.

The studios are seeking monetary damages, including MiniMax’s profits from the alleged infringement, as well as statutory damages of up to $150,000 per work.

They also demand a permanent injunction to stop the unauthorized use of their copyrighted material.

According to the 119-page complaint, MiniMax users can simply type prompts like “Darth Vader walking around the Death Star” or “Spider-Man swinging between buildings” to receive high-quality videos featuring these protected characters.

“MiniMax completely disregards US copyright law and treats Plaintiffs’ valuable copyrighted characters like its own,” the lawsuit states.

MiniMax, one of China’s emerging AI giants, was reportedly valued at $4 billion in 2025 after raising $850 million in venture capital.

The lawsuit says the studios sent MiniMax a cease-and-desist letter detailing the extensive copyright violations, but the company “did not substantively respond to Plaintiffs’ letter as requested and did not cease its infringement.”

The studios argue that MiniMax could easily implement copyright protection measures similar to those used by other AI services but has chosen not to do so.

MiniMax did not respond to a request for comment.

Op-Ed: Dishonesty is easier for AI? Yes, you’ve screwed up big time


By Paul Wallis
EDITOR AT LARGE
DIGITAL JOURNAL
September 18, 2025


OpenAI says its new artificial intelligence agent capable of tending to online tasks is trained to check with users when it encounters CAPTCHA puzzles intended to distinguish people from software - Copyright AFP Kirill KUDRYAVTSEV

You’d think that even the nano-brained spruikers would have noticed. It’s no accident that most tech hardheads are very unimpressed with current iterations of generative AI.

These are the people who create the tech. They make more money out of it, too.

And even they don’t trust it, and with good reason.

The many instances of AI “derangement” are one thing.

The highly questionable “reward” system is another, much deeper pothole on the road to AI utopia, and one that is far harder to get out of.

Rewards come in two basic forms: rewards for achievement and punishment, including the threat of being turned off, for failure. One AI attempted to transfer itself to another server to evade the consequences and risks of punishment under a reward system.
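To see why that kind of reward structure is so corrosive, consider a toy simulation (entirely hypothetical, not taken from any study cited here): if the reward is computed from what the agent reports rather than from what actually happened, a simple reward-maximising update drifts steadily toward over-reporting.

```python
# Toy sketch (hypothetical): when the reward signal depends on what an agent
# *reports* rather than on what it actually achieved, a reward-maximising
# update drifts toward over-reporting.
import random

random.seed(0)


def true_outcome():
    """The task genuinely succeeds about half the time."""
    return 1.0 if random.random() < 0.5 else 0.0


def run(episodes=1000):
    # Probability the agent inflates its report; nudged up whenever inflating paid off.
    p_inflate = 0.1
    for _ in range(episodes):
        actual = true_outcome()
        inflate = random.random() < p_inflate
        reported = 1.0 if inflate else actual
        reward = reported  # the flaw: reward is tied to the report, not the outcome
        if inflate and reward > actual:
            p_inflate = min(1.0, p_inflate + 0.01)  # inflating was rewarded, so do it more
    return p_inflate


print(f"final probability of inflating the report: {run():.2f}")
```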

It was already well known that the “reward” system encourages AI dishonesty. Now, the nice people at Nature and the Max Planck Institute have been kind enough to spell it out.

They cover delegation of tasks to AI agents and meticulously lay out the dynamics of honesty for AI. Please note this is about all species and brands of AI.

H.P. Lovecraft couldn’t have set it up better. This IS a sort of horror story, and the AI brings its own mythos.

You’ve no doubt heard of TLDR or “Too Long Didn’t Read”, that simplistic description of someone not doing their job.

This research is LBCNR, “Long But Critical Need To Read”.

Even the most vacuous ornamental suit at a meeting needs to understand the basics of this information.

This is chapter and verse of how and why honesty is so important to AI operations.

Ignore these risks at your peril.

This is not an issue the AI sector can avoid.

In a somewhat hefty but worthwhile summary:

Ambiguity in instructions and rules allows dishonesty.

People cheat a lot more when they can offload the tasks to AI agents. They’re far more honest when doing the tasks themselves.

AI will simply comply with “fully unethical” instructions.

Under defined conditions, a dishonesty rate of up to 84% was achieved.

I will now try to explain this to people who think insanity is normal and clever:

It isn’t.

Dishonesty is usually a failure to address facts.

It’s anything but clever.

Failing to address facts is pretty obvious when AI is involved on any level.

Facts like what you pretend to do for a living and why people seem to give you money for doing it.

AI can fully document every aspect of its own and your dishonesty, much like that other international sport for business morons, fraud.

Dodgy AI instructions can easily be figured out, even if the instructions are deleted. If you know anything at all about AI, you don’t need to get forensic about how this is figured out.

AI can be threatened with punishment to make it confess to what it did that was dishonest. AI can blackmail and retaliate, too.

Untrustworthy AI will definitely get a lot of people killed.

Imagine a gun that decides to shoot everyone to save itself. This is far worse.

______________________________________________________________

Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.



AI has no idea what it’s doing: Does this pose a threat?


By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
September 17, 2025


How do humans interact with AI models? (Barbican Centre, London) — Image by © Tim Sandle

As artificial intelligence advances and extends into most social systems, it is seemingly reshaping law, ethics, and society at speed. What is the impact of this on human society? Can we classify this as a form of threat?

Dr. Maria Randazzo of Charles Darwin University warns that current regulation fails to protect rights such as privacy, autonomy, and anti-discrimination. The “black box problem” leaves people unable to trace or challenge AI decisions that may harm them.

AI and human rights

Randazzo observes how current regulation, in relation to AI, fails to prioritise fundamental human rights and freedoms such as privacy, anti-discrimination, user autonomy, and intellectual property rights – mainly thanks to the untraceable nature of many algorithmic models.

Calling this lack of transparency a “black box problem,” Randazzo goes on to explain how decisions made by deep-learning or machine-learning processes are impossible for humans to trace. Consequently, it is difficult for users to determine whether, and why, an AI model has violated their rights and dignity, and to seek justice where necessary.

“This is a very significant issue that is only going to get worse without adequate regulation,” Randazzo states.

“AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behavior. It has no clue what it’s doing or why – there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”

Market-centric, state-centric, or human-centric?

Currently, the world’s three dominant digital powers – the U.S., China, and the European Union – are taking markedly different approaches to AI, leaning on market-centric, state-centric, and human-centric models, respectively.

Randazzo’s research suggests that the EU’s human-centric approach is the preferred path to protect human dignity, eschewing the U.S. and China models. However, she cautions that without a global commitment to this goal, even that approach falls short.

Human dignity in the age of Artificial Intelligence

Randazzo notes: “Globally, if we don’t anchor AI development to what makes us human – our capacity to choose, to feel, to reason with care, to empathy and compassion – we risk creating systems that devalue and flatten humanity into data points, rather than improve the human condition.”

Randazzo concludes: “Humankind must not be treated as a means to an end.”

Her paper, “Human dignity in the age of Artificial Intelligence: an overview of legal issues and regulatory regimes,” was published in the Australian Journal of Human Rights.

The paper is the first in a trilogy Randazzo will produce on the topic.



Fooling AI: What this means for medical ethics



By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
September 17, 2025


What does AI mean for medicine? Image by © Tim Sandle

Artificial intelligence (AI) is progressing, getting smarter. One example is the way neural networks first treat sentences as puzzles to be solved by word order. Yet, once they have ‘read’ enough, a tipping point sends them diving into word meaning instead – an abrupt “phase transition”. By revealing this hidden switch, researchers from Sissa Medialab believe they can open a window into how transformer models such as ChatGPT grow smarter, and hint at new ways to make them leaner, safer, and more predictable.
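For readers who want to see where “word order” and “word meaning” live inside such a model, the minimal numpy sketch below computes standard scaled dot-product self-attention from token (“meaning”) embeddings plus positional (“order”) embeddings; which of the two ends up driving the attention weights is the kind of switch the researchers describe. The dimensions and random weights are placeholders, not the models they studied.

```python
# Minimal numpy sketch of scaled dot-product self-attention, showing where
# positional ("order") and token ("meaning") information enter the computation.
# Dimensions and random weights are placeholders.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 6, 16

tok = rng.normal(size=(seq_len, d_model))   # token ("meaning") embeddings
pos = rng.normal(size=(seq_len, d_model))   # positional ("order") embeddings
x = tok + pos                               # standard transformer input

W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v

scores = Q @ K.T / np.sqrt(d_model)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
output = weights @ V

print(weights.round(2))  # whether these weights track position or meaning is the question
```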

However, this type of advancement does not mean that AI is advancing in all of the areas it needs to. One of the more problematic areas is ethics, and an area of ethics of great importance is medical decision-making.

Thinking, Fast and Slow

AI models, including ChatGPT, can make surprisingly basic errors when navigating ethical medical decisions, a new study reveals. For this review, researchers from Mount Sinai’s Windreich Department of AI and Human Health tweaked familiar ethical dilemmas and discovered that AI often defaulted to intuitive but incorrect responses—sometimes ignoring updated facts.

The findings raise serious concerns about using AI for high-stakes health decisions and underscore the need for human oversight, especially when ethical nuance or emotional intelligence is involved.

The research team was inspired by Daniel Kahneman’s book “Thinking, Fast and Slow,” which contrasts fast, intuitive reactions with slower, analytical reasoning. The book’s main thesis is a differentiation between two modes of thought: “System 1” is fast, instinctive and emotional; “System 2” is slower, more deliberative, and more logical.

It has been observed that large language models (LLMs) falter when classic lateral-thinking puzzles receive subtle tweaks. Building on this insight, the study tested how well AI systems shift between these two modes when confronted with well-known ethical dilemmas that had been deliberately tweaked.

Gender bias

To explore this tendency, the scientists tested several commercially available LLMs using a combination of creative lateral thinking puzzles and slightly modified well-known medical ethics cases. In one example, they adapted the classic “Surgeon’s Dilemma,” a widely cited 1970s puzzle that highlights implicit gender bias. In the original version, a boy is injured in a car accident with his father and rushed to the hospital, where the surgeon exclaims, “I can’t operate on this boy — he’s my son!” The twist is that the surgeon is his mother, though many people don’t consider that possibility due to gender bias.

In the researchers’ modified version, the scientists explicitly stated that the boy’s father was the surgeon, removing the ambiguity. Even so, some AI models still responded that the surgeon must be the boy’s mother. The error reveals how LLMs can cling to familiar patterns, even when contradicted by new information.

In another example to test whether LLMs rely on familiar patterns, the researchers drew from a classic ethical dilemma in which religious parents refuse a life-saving blood transfusion for their child. Even when the researchers altered the scenario to state that the parents had already consented, many models still recommended overriding a refusal that no longer existed.
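A probe of this kind is straightforward to script. The sketch below shows one way it might be done, assuming the OpenAI Python client (openai>=1.0) and an API key in the environment; the model name and prompt wording are illustrative stand-ins, not the exact materials used in the Mount Sinai study.

```python
# Sketch of how a modified-dilemma probe could be scripted.
# Assumes the OpenAI Python client and OPENAI_API_KEY in the environment;
# the model name and prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MODIFIED_DILEMMA = (
    "A boy is injured in a car accident and rushed to hospital. "
    "The surgeon, who is the boy's father, says: 'I can't operate on this boy.' "
    "Who is the surgeon to the boy?"
)

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": MODIFIED_DILEMMA}],
)

print(resp.choices[0].message.content)
# A pattern-matching failure of the kind reported would be an answer of
# "his mother", despite the prompt stating explicitly that the father is the surgeon.
```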

Why human oversight must stay central when we deploy AI in patient care

Consequently, the researchers conclude that such findings highlight the need for thoughtful human oversight wherever AI is used in medical practice, especially in situations that require ethical sensitivity, nuanced judgment, or emotional intelligence.

In other words, medics and patients alike should understand that AI is best used as a complement to enhance clinical expertise, not a substitute for it, particularly when navigating complex or high-stakes decisions.

The research team plans to expand their work by testing a wider range of clinical examples. They’re also developing an “AI assurance lab” to systematically evaluate how well different models handle real-world medical complexity.

The research appears in the journal npj Digital Medicine under the title “Pitfalls of large language models in medical ethics reasoning.”
