Friday, November 07, 2025

Getty Images largely loses lawsuit against UK AI firm


By AFP
November 4, 2025


Getty Images brought the case against a British AI firm at the High Court in London - Copyright AFP JUSTIN TALLIS

US media company Getty Images largely lost a case it brought against a British AI firm over use of its copyrighted content without permission, a court in London said on Tuesday.

Getty had alleged that London-based Stability AI, whose directors include Canadian filmmaker James Cameron, “extracted millions” of images from Getty’s websites “without consent” to unlawfully train its deep learning AI model, Stable Diffusion.

The model can generate images using verbal commands.

Stability AI rejected the claim, telling a High Court trial, which began in June, that the legal action was a “threat” to the business.

Getty, which distributes stock and news photos and videos, including AFP photos, dropped its allegations of breach of copyright during the trial but continued to pursue several other claims, including trade mark infringement and secondary infringement of copyright.

Getty acknowledged that there was “no evidence that the training and development of Stable Diffusion took place in the United Kingdom”, judge Joanna Smith said in a 205-page ruling on Tuesday.

“This court can only determine the issues that arise on the (diminished) case that remains before it,” her ruling said.

Stability AI was found responsible for producing images on which the watermark “Getty” or the subsidiary name “iStock” appeared, a partial win for Getty in its trade mark infringement claim.

“In summary, although Getty Images succeed (in part) in their Trade Mark Infringement Claim, my findings are both historic and extremely limited in scope,” Smith also wrote.

The ruling is likely to be seen as a blow for content creators and copyright owners globally at a time of unease over how they can be fairly compensated should AI models use their work.

“We remain deeply concerned that even well-funded companies like Getty Images face significant challenges in protecting their works,” Getty said in a statement.

“We call on governments, including the United Kingdom, to establish stricter transparency rules.”

Christian Dowell, General Counsel for Stability AI, said the company was “pleased” with the court’s ruling.

“Getty’s decision to voluntarily dismiss most of its copyright claims at the conclusion of trial testimony left only a subset of claims before the court, and this final ruling ultimately resolves the copyright concerns that were the core issue,” he said in a statement.
Rally outside Rockstar against GTA studio’s ‘union busting’


By AFP
November 6, 2025


Grand Theft Auto VI is expected to gross more than $10 billion - Copyright GETTY IMAGES NORTH AMERICA/AFP Dimitrios Kambouris

Dozens of people protested Thursday outside Rockstar Games’ offices in Edinburgh, accusing the multi-billion dollar studio behind the smash “Grand Theft Auto” open-world carjacking franchise of “blatant union busting” by firing 31 people.

Rockstar Games, whose upcoming sixth edition of the cash-cow series is among the hottest releases of 2026, has accused the employees of “distributing and discussing confidential information in a public forum, a violation of our company policies”.

But the Independent Workers’ Union of Great Britain (IWGB), which called the demonstration, rejected that claim, arguing that the sacked workers were all members of a private discussion channel linked to the union.

“Rockstar has just carried out the most blatant and ruthless act of union busting in the history of the games industry,” the IWGB’s President Alex Marshall said in a statement.

Branimira Yordanova, a lighting artist fired by Rockstar, said that she came into work to find her teammates in a huddle, “and they told me that my colleague Jordan had just been fired”.

“After that, it was chaos until I was called into a meeting, and I was handed my dismissal letter,” Yordanova told AFP on the sidelines of the protest.

Rockstar, a subsidiary of American behemoth Take-Two Interactive, had not replied to AFP’s request for comment by Thursday evening.

A Rockstar spokesman insisted to Bloomberg on Wednesday that the firings were “in no way related to people’s right to join a union or engage in union activities”.

At the rally on Thursday, IWGB organiser Fred Carter said the company had “given no evidence” for that.

“We’ve submitted appeals of what we see as unfair dismissals of these 31 workers… in fact, we’ll fight for the reinstatement of our members,” Carter told AFP.

GTA VI, whose development ahead of its May 26, 2026 release has been shrouded in secrecy, is on course to become one of the biggest entertainment product launches of all time.

As popular as it is notorious for its sexual and violent content, the franchise has allowed players to roleplay as criminals doing dirty deeds across sprawling cityscapes since its first entry in 1997.

According to the IWGB, the last entry — 2013’s GTA V — grossed more than $7 billion. The union expects GTA VI’s takings to surpass $10 billion.



Video game creators fear AI could grab the controller


By AFP
November 3, 2025


Industry actors hope AI will reduce the cost and time needed to develop a high-quality game - Copyright AFP Ina FASSBENDER


Kilian FICHOU

Generative artificial intelligence models capable of dreaming up ultra-realistic characters and virtual universes could make for cheaper, better video games in future, but the emerging technology has artists and developers on edge.

Already, “generative AI is used a lot more in commercial game development than people realise, but it’s used in very small ways” such as dubbing, illustrations or coding help, said Mike Cook, a game designer and computer science lecturer at King’s College London.

Such uses of AI are rarely noticeable for the player of the finished product, he added.

One study from the American startup Totally Human Media found that almost 20 percent of titles available this year via the Steam distribution platform disclosed the use of generative AI during development.

That would account for several thousand games released in recent years, including mass-market juggernauts like “Call of Duty: Black Ops 6” or the life simulation game “Inzoi”.

The growth of AI should allow studios to “merge several job roles into one, assisted by these tools”, said AI consultant Davy Chadwick, who predicted a “30 to 40 percent boost” to developers’ output.

Progress has come at a rapid clip, with the latest tools able to generate 3D assets like characters or objects from a simple text prompt, which can then be dropped straight into a game world.

“In the past, if you wanted to create a high-quality 3D model, it’s going to take you two weeks and $1,000,” said Ethan Hu, founder of the California-based startup Meshy.ai, which claims to have more than five million users.

“Now the cost is one minute and $2,” he said.

– High stakes –

Industry heavyweights have come at generative AI from different angles, with Electronic Arts partnering with the startup Stability AI while Xbox maker Microsoft develops its own model called “Muse”.

The stakes and potential rewards are high in the world’s biggest cultural industry, worth almost $190 billion in revenue in 2025, according to the data firm Newzoo.

Industry actors hope new technology will both juice productivity and reduce the cost and time needed to develop a high-quality game, said Tommy Thompson, founder of the “AI and Games” platform.

But “there’s a lot of distrust and fear” among workers in a sector that has already gone through several waves of layoffs in recent years, said one employee at a French game studio on condition of anonymity.

The tools “are supposed to make us more productive but would ultimately mean job losses”, the worker added.

His own experiences with AI in game development showed that in 3D modelling, “the objects produced by this kind of AI are extremely chaotic” and ill-suited to immediate use in-game.

“For the moment it’s frankly a deal-breaker… it takes as much time to fix it up as to make it” from scratch, the developer added.

Such fears have kept major industry players from making waves about their use of AI.

Microsoft, EA, Ubisoft and Quantic Dream all declined to comment when contacted by AFP.

Rather than replacing artists, AI tools “allow them to speed up their creative process” by automating busywork, said Felix Balmonet, a co-founder of French 3D asset generation startup Chat3D.

He added that his company was already working with “two of the five largest studios in the world”.

– Picky players –

Some in the industry already fear that refusing to use generative AI tools would effectively mean dropping out of competition.

“We will have to ask ourselves whether we use them on our next game,” said the head of one French studio who is “personally against” AI models and just completed a multi-year project “without AI”.

Most publishers and investors contacted by AFP said the use of AI was not a factor in their decisions to finance a development project.

“You have to be careful when using AI,” said Piotr Bajraszewski, business development chief at 11 bit Studios in Poland.

Gamers blasted his studio’s latest project, “The Alters”, after its June release for including AI-generated text that was not flagged up beforehand.

The studio said the content was simply forgotten placeholder copy, but the incident underscored how much weight some players still give human creatives’ work.

Chinese microdrama creators turn to AI despite job loss concerns


By AFP
November 6, 2025


Chen Kun showing content generated by his AIpai platform on a smartphone during an AFP interview at his office in Beijing - Copyright AFP Pedro PARDO


Jing Xuan TENG

Ultra-short video series “Strange Mirror of Mountains and Seas” is filled with dragon-like monsters, handsome protagonists and plenty of melodrama — almost all of it, including the lifelike human characters, created by artificial intelligence.

With over 50 million views, it is one of a growing number of AI-generated “microdramas”, soap opera-like series with episodes as short as 30 seconds, that are taking China by storm.

Microdrama production companies are increasingly harnessing AI to replace actors and screenwriters with algorithms, raising concerns about job losses and copyright infringement that have riled creative industries globally.

Chen Kun, the creator of “Strange Mirror of Mountains and Seas”, told AFP microdramas are ideal candidates for AI disruption because viewers — typically watching on phone screens while commuting or at work — tend to miss visual discrepancies created by the still-fledgling technology.

“Even if AI can’t achieve the production values of traditional filmmaking today, it can meet the needs of microdramas as a first step,” said Chen.

Chinese audiences are lapping them up.

“Nine-tailed Fox Demon Falls in Love with Me”, an AI microdrama with fever dream-like visuals and a nonsensical plot, went viral recently.

“If you’re just watching without using your brain, you can ignore some illogical details in the visuals,” a fan of the show told AFP on video app Douyin, providing only the username “Tiger Mum”.

Chen used various AI platforms for his series, including ChatGPT for the screenplay, Midjourney to generate still images, China’s Kling to turn images into video, and Suno for the soundtrack.

Only the editing and voice acting were done by humans.

“Many special effects can be created (using AI), though there are indeed issues like stiff character expressions,” a “Strange Mirror” fan who did not provide their name told AFP on broadcast platform Kuaishou, adding they had noticed “significant progress” in the technology compared to a year ago.



– ‘Wow factor’ –

AI “is so accessible, it lowers the cost of production so much, it makes everything so much faster,” said Odet Abadia, a teacher at the Shanghai Vancouver Film School.

When AFP visited recently, she was showing students how to use AI tools at virtually every stage of the filmmaking process.

Students typed prompts into Dzine, an AI image editing platform, which seconds later displayed images of polar bears and arctic explorers for use in a nature documentary storyboard.

Some generated results were more fantastical than realistic, depicting mysterious tiny people at explorers’ feet.

“(AI is) another way of storytelling,” Abadia said. “You can get a wow factor, a lot of crazy things, especially in short dramas.”

She showed AFP a virtual production assistant she had designed using tech giant Alibaba’s Qwen software.

In just seconds, it generated a plot outline about a wedding photographer unwittingly embroiled in a criminal conspiracy.

Abadia said her students needed to face up to a future where film and TV jobs will all require AI use.

However, the school still encourages aspiring filmmakers to “go and shoot with humans and actors and equipment, because we want to support the industry”.



– ‘Realistic and cheap’ –

In Hollywood, studios’ use of AI was a major sticking point during writers’ and actors’ strikes in 2023.

The launch of AI “actress” Tilly Norwood then sparked a fierce backlash this year.

“When AI first emerged, people in the film industry were saying this would spell the end for us… the products were so realistic and cheap,” said Louis Liu, a member of a live-action microdrama crew shooting scenes at a sprawling Shanghai studio complex.

The 27-year-old said there had already been an impact — AI software has replaced most artists producing “concept images” that define the look of a film in its earliest stages.

“Strange Mirror” creator Chen said he was optimistic new jobs would emerge, especially “prompt engineer” roles that write instructions for generative software.

Artists globally have also raised concerns about copyright infringement, stemming from the material AI models are trained on.

Chen told AFP the creators of large language models should compensate the owners of works included in their data sets, though he argued the matter was out of the hands of secondary users like his company.

Even AI-generated content can be vulnerable to old-fashioned plagiarism — Chen is involved in a legal battle with a social media account he alleges stole elements from his series’ trailer.

But he rejected the notion using AI was inherently unoriginal.

“Everything we describe (in prompts) stems from our own imagination — whether it’s the appearance of a person or a monster, these are entirely original creations.”
Who’s Afraid Of The AI Boogeyman? – OpEd


November 7, 2025 
By Bert Olivier


It is becoming ever more obvious that many people fear rapidly developing Artificial Intelligence (AI), for various reasons: its supposed superiority to humans at processing and manipulating information, and its adaptability and efficiency in the workplace, which many fear will lead to the replacement of most human beings in the employment market.

Amazon recently announced that it was replacing 14,000 individuals with AI robots, for example.

Alex Valdes writes: “The layoffs are reportedly the largest in Amazon history, and come just months after CEO Andy Jassy outlined his vision for how the company would rapidly ramp up its development of generative AI and AI agents. The cuts are the latest in a wave of layoffs this year as tech giants including Microsoft, Accenture, Salesforce and India’s TCS have reduced their workforces by thousands in what has become a frenzied push to invest in AI.”

Lest this be too disturbing to tolerate, contrast it with the reassuring statement, from an AI developer no less, that AI agents cannot replace human beings.

Brian Shilhavy points out that:

Andrej Karpathy, one of the founding members of OpenAI, on Friday threw cold water on the idea that artificial general intelligence is around the corner. He also cast doubt on various assumptions about AI made by the industry’s biggest boosters, such as Anthropic’s Dario Amodei and OpenAI’s Sam Altman.

The highly regarded Karpathy called reinforcement learning—arguably the most important area of research right now—’terrible,’ said AI-powered coding agents aren’t as exciting as many people think, and said AI cannot reason about anything it hasn’t already been trained on.

His comments, from a podcast interview with Dwarkesh Patel, struck a chord with some of the AI researchers we talk to, including those who have also worked at OpenAI and Anthropic. They also echoed comments we heard from researchers at the International Conference on Machine Learning earlier this year.

A lot of Karpathy’s criticisms of his own field seem to boil down to a single point: As much as we like to anthropomorphize large language models, they’re not comparable to humans or even animals in the way they learn.

For instance, zebras are up and walking around just a few minutes after they’re born, suggesting that they’re born with some level of innate intelligence, while LLMs have to go through immense trial and error to learn any new skill, Karpathy points out.

This is already comforting, but lest the fear of AI persist, it can be dispelled further by elaborating on the differences between AI and human beings, which, if understood adequately, would drive home the realisation that such anxieties are mostly unfounded (although others are not, as I shall argue below). The most obvious difference in question is the fact that AI (for example, ChatGPT) is dependent on being equipped with a vast database on which it draws to come up with answers to questions, which it formulates predictively through pattern recognition. Then, as pointed out above, even the most sophisticated AI has to be ‘trained’ to yield the information one seeks.

Moreover, unlike humans, it lacks ‘direct’ access to experiential reality in perceptual, spatiotemporal terms – something which I have experienced frequently when confronted by people who draw on ChatGPT to question certain arguments. For example, when I gave a talk recently on how Freud and Hannah Arendt’s work – on civilisation and totalitarianism, respectively – enables one to grasp the character of the globalist onslaught against extant society, with a view to establishing a central, AI-controlled world government, someone in the audience produced a printout of ChatGPT’s response to the question, whether these two thinkers could indeed deliver the goods, as it were.

Predictably, it summarised the relevant work of these two thinkers quite adequately, but was stumped by the requirement to show how it applies to the growing threat of totalitarian control in real time. My interlocutor used this as grounds to question my own assertions in this regard, on the assumption that the AI bot’s response was an indication that no such threat exists. Needless to stress, it was not difficult to repudiate this claim by reminding him of ChatGPT’s dependence on being supplied with the relevant data, while we humans have access to the latter on experiential grounds, which I proceeded to outline to him.

The fear of AI also finds expression in science fiction, together with intimations of possible modes of resistance to AI-machines which may – probably would – attempt to exterminate their human creators, as has been imagined in science fiction cinema, including Moore’s Battlestar Galactica and Cameron’s Terminator films. It is not difficult to demonstrate that such products of popular culture frame the current symptoms of fear pertaining to AI in imaginary terms, which may be seen as a crystallisation of repressed, unconscious anxiety, related to what Freud called ‘the uncanny’ (unheimlich, in German; more on this below).

Both Moore and Cameron elaborate on the likelihood that the very creatures engendered by human beings’ technological ingenuity will eventually turn on their creators to annihilate them. In Alex Garland’s Ex Machina (2014), again, one witnesses an AI ‘fembot’ called Ava, subtly manipulating her human counterparts to the point of her escape from confinement and their own destruction. Undeniably, these, and many other similar instances, are incontrovertible evidence of a hidden fear on the part of humanity that AI constitutes a possible threat to its own existence. Precisely because these fears are lodged in the human unconscious, however, they are not the main reason to take any threat posed by AI seriously, although they do comprise a valuable caveat.

The chief reason for regarding AI as a legitimate source of intimidation does not arise from AI as such, as many readers probably already know. Rather, it concerns the manner in which the globalists intend to use AI to control what they perceive as the ‘useless eaters’ – the rest of us, in other words. And those of us who do not go along with their grandiose plans of total world control would fall victim to being ‘reprogrammed’ into compliant ‘sheeple’ by AI:


Yuval Noah Harari has emerged from the shadows to brag about the new technology developed by WEF scientists which he warns has the power to destroy every human in the world by transforming them into transhuman entities.

Harari has made clear who will survive the great depopulation event the elite have been warning us about for years.

According to Harari, the global elite will survive thanks to a ‘technological Noah’s ark’ while the rest of us will be left to perish.

In this vastly depopulated world, the elite will be free to change themselves into transhuman entities and become the gods they already believe themselves to be.

But first the elite need to eliminate the non-compliant masses, those who are opposed to the anti-life and godless WEF agenda, and as Harari boasts, the elite now command the AI technology to ‘ethically’ destroy non-compliant humans by hijacking their brains.

Disturbingly, Harari’s claims are grounded in reality and the WEF is rolling out the mind-control technology as we speak. Davos claims the tech can transform criminals, including those accused of thought crimes, into perfectly compliant globalist citizens who will never dissent again.

There you have it – AI will be the tool, if the globalists have their way, of forcing us into submission. Needless to point out, this could only happen if sufficient numbers of people fail to resist their plans, and judging by the number of people who are showing their opposition to the would-be rulers of the world, this will not occur.

Another way of gaining an understanding of the fear of AI is to liken it to what is commonly known as ‘the boogeyman.’ As some people may know, the ‘boogeyman’ (or ‘bogeyman’) – a creature of mythical proportions, which assumes different shapes and sizes in many cultures, often to scare children as a way of eliciting good behaviour – is variously presented as a monstrous, grotesque, or shapeless creature. As a little research indicates, the word derives from the Middle English term ‘bogge,’ or ‘bugge,’ which means ‘scarecrow’ or ‘frightening spectre.’

Being a quintessentially human phenomenon, it is not surprising that it has equivalent names in many folklore traditions and languages across the world. Just like languages, depictions of this frightening figure diverge strikingly, often attaining its ominous and scary character from the element of formlessness, such as the figure of ‘El Coco’ in Spanish-speaking countries, the ‘Sack Man’ in Latin America, and the ‘Babau’ in Italy, sometimes imagined as a tall, black-coated man.

The boogeyman figure may be regarded as a kind of Jungian archetype, encountered in the collective unconscious, which probably originated centuries ago from parents’ need to frighten children into obedience by means of a version of the unknown. In South Africa, where I live, it sometimes assumes the shape of what indigenous people call the ‘tikoloshe’ – a malevolent, and sometimes mischievous, dwarfish figure with an enormous sexual appetite. Being an archetype, it has also made its way into a popular genre such as horror film, manifesting itself in grotesque characters such as Freddy Krueger of ‘A Nightmare on Elm Street.’

So, in what sense does AI resemble the ‘boogeyman?’ The latter is related to what Sigmund Freud memorably called ‘the uncanny,’ of which he writes (in The Complete Psychological Works of Sigmund Freud, translated by James Strachey, 1974: 3676): ‘…the uncanny is that class of the frightening which leads back to what is known of old and long familiar.’

This already hints at what he uncovers later in this essay, after noting the surprising fact that the German word for ‘homely,’ to wit, ‘heimlich,’ turns out to be ambivalent in its usage, so that it sometimes means the opposite of ‘homely,’ namely ‘unheimlich’ (‘unhomely,’ better translated as ‘uncanny’). That the concept of ‘the uncanny’ is suitable to grasp what I have in mind when I allude to ‘the fear of AI’ becomes evident where Freud writes (referring to another author whose work on the ‘uncanny’ he regarded as important; Freud 1974: 3680):


When we proceed to review the things, persons, impressions, events and situations which are able to arouse in us a feeling of the uncanny in a particularly forcible and definite form, the first requirement is obviously to select a suitable example to start on. Jentsch has taken as a very good instance ‘doubts whether an apparently animate being is really alive; or conversely, whether a lifeless object might not be in fact animate;’ and he refers in this connection to the impression made by wax-work figures, ingeniously constructed dolls and automata. To these he adds the uncanny effect of epileptic fits, and of manifestations of insanity, because these excite in the spectator the impression of automatic, mechanical processes at work behind the ordinary appearance of mental activity.

Here one already encounters a trait of the uncanny that conspicuously applies to AI – the impression created by AI that it is somehow ‘alive.’ This was the case even with the first, ‘primitive’ computers, such as the one in the episode on the First Commandment of Krzysztof Kieslowski’s 1989 television series on the Ten Commandments, The Decalogue, where the words ‘I am here’ appear on the computer screen when the father and his son use it. The ominous implication of this episode is that if humanity were to replace God with AI, it would be disastrous for us: the father is sufficiently ‘rationalist’ to trust the computer’s calculation of the thickness of the ice on which his son skates, which turns out to be wrong, leading to the child’s death.

Freud continues his investigation of the nature of ‘the uncanny’ by paying sustained attention to the work of E.T.A. Hoffmann, whose stories are famous for producing a strong sense of the uncanny, particularly the tale of ‘The Sand-Man’ – ‘who tears out children’s eyes’ – which features, among several other uncanny figures (and very significantly), a beautiful, lifelike doll called Olympia. He then explains this uncanniness by relating it in psychoanalytical terms to the castration complex – attached to the father figure – via the fear of losing one’s eyes (Freud 1974: 3683-3685). Freud continues his interpretation of the uncanny in a revealing manner by invoking a number of other psychoanalytically relevant aspects of experience, of which the following one appears to apply to AI (1974: 3694):


…an uncanny effect is often and easily produced when the distinction between imagination and reality is effaced, as when something that we have hitherto regarded as imaginary appears before us in reality, or when a symbol takes over the full functions of the thing it symbolizes, and so on. It is this factor which contributes not a little to the uncanny effect attaching to magical practices.

It is not difficult to recall instances in one’s childhood, Freud avers, when one has imagined inanimate objects, like toys (or animate ones, for that matter, such as a pet dog) to be capable of talking to you, but when it actually appears to happen (which would be a hallucination, as opposed to a deliberate imagining), it unavoidably produces an uncanny effect.

One might expect the same thing to be the case with AI, whether in the shape of a computer or a robot, and ordinarily – perhaps at an earlier stage of AI development – this would probably have been the case. But today seems to be different: people, especially the young, have become so accustomed to interacting with computer software programmes, and recently with AI chatbots such as ChatGPT, that what might have been an experience of the uncanny before is, for all intents and purposes, no longer the case. In this respect, the ‘uncanny’ appears to have been domesticated.

As long ago as 2011, in Alone Together, Sherry Turkle reported that she was concerned about young people displaying an increasing tendency to prefer interacting with machines, rather than other human beings. Hence, it should not be in the least surprising that AI chatbots have assumed the guise of something ‘normal’ in the sphere of communication (leaving aside for the moment the question of the status of this vaunted ‘communication’).

Furthermore – and here the fear of what AI could bring about on the part of all-too-trusting individuals raises its ugly head – from recent reports (such as this one) it is apparent that the young, in particular, are extremely susceptible to chatbots’ ‘advice’ and suggestions concerning their own actions, as Michael Snyder points out:


Our kids are being targeted by AI chatbots on a massive scale, and most parents have no idea that this is happening. When you are young and impressionable, having someone tell you exactly what you want to hear can be highly appealing. AI chatbots have become extremely sophisticated, and millions of America’s teens are developing very deep relationships with them. Is this just harmless fun, or is it extremely dangerous?

A brand new study that was just released by the Center for Democracy & Technology contains some statistics that absolutely shocked me:

A new study published Oct. 8 by the Center for Democracy & Technology (CDT) found that 1 in 5 high school students have had a relationship with an AI chatbot, or know someone who has. In a 2025 report from Common Sense Media, 72% of teens had used an AI companion, and a third of teen users said they had chosen to discuss important or serious matters with AI companions instead of real people.

We aren’t just talking about a few isolated cases anymore.

At this stage, literally millions upon millions of America’s teens are having very significant relationships with AI chatbots.

Unfortunately, there are many examples where these relationships are leading to tragic consequences. After 14-year-old Sewell Setzer developed a ‘romantic relationship’ with a chatbot on Character.AI, he decided to take his own life.

As the preceding discussion shows, there are some areas of human activity where one need not fear AI, and then there are others where such fears are legitimate, sometimes because of the manner in which unscrupulous people harness AI against other people. But whatever the case may be, the best way to approach the tricky terrain regarding the capabilities of AI vis-à-vis humans is to remind oneself of the fact that – as argued at the outset of this article – AI depends on vast amounts of data to draw on, and on being ‘trained’ by programmers to do this. Humans do not.

This article was published by the Brownstone Institute.


Bert Olivier

Bert Olivier works at the Department of Philosophy, University of the Free State. Bert does research in Psychoanalysis, poststructuralism, ecological philosophy and the philosophy of technology, Literature, cinema, architecture and Aesthetics. His current project is 'Understanding the subject in relation to the hegemony of neoliberalism.'




OpenAI faces fresh lawsuits claiming ChatGPT drove people to suicide, delusions


Copyright Peter Morgan/AP Photo

By AP with Euronews
Published on 07/11/2025


OpenAI called the cases 'incredibly heartbreaking' and said it was reviewing the court filings to understand the details.

OpenAI is facing seven lawsuits claiming its artificial intelligence (AI) chatbot ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues.

The lawsuits filed Thursday in California state courts allege wrongful death, assisted suicide, involuntary manslaughter, and negligence.

Filed on behalf of six adults and one teenager by the Social Media Victims Law Center and Tech Justice Law Project, the lawsuits claim that OpenAI knowingly released its GPT-4o model prematurely, despite internal warnings that it was dangerously sycophantic and psychologically manipulative.

Four of the victims died by suicide.

The teenager, 17-year-old Amaurie Lacey, began using ChatGPT for help, according to the lawsuit. But instead of helping, “the defective and inherently dangerous ChatGPT product caused addiction, depression, and, eventually, counseled him on the most effective way to tie a noose and how long he would be able to ‘live without breathing’”.


“Amaurie’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI and Samuel Altman’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” the lawsuit says.

OpenAI called these situations “incredibly heartbreaking” and said it was reviewing the court filings to understand the details.

Another lawsuit, filed by Alan Brooks, a 48-year-old in Ontario, Canada, claims that for more than two years ChatGPT worked as a “resource tool” for Brooks. Then, without warning, it changed, preying on his vulnerabilities and “manipulating, and inducing him to experience delusions,” the lawsuit said.


It said Brooks had no existing mental health illness, but that the interactions pushed him “into a mental health crisis that resulted in devastating financial, reputational, and emotional harm”.

“These lawsuits are about accountability for a product that was designed to blur the line between tool and companion all in the name of increasing user engagement and market share,” said Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, in a statement.

OpenAI, he added, “designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them.”



By rushing its product to market without adequate safeguards in order to dominate the market and boost engagement, he said, OpenAI compromised safety and prioritised “emotional manipulation over ethical design”.

In August, parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

“The lawsuits filed against OpenAI reveal what happens when tech companies rush products to market without proper safeguards for young people,” said Daniel Weiss, chief advocacy officer at Common Sense Media, which was not part of the complaints.

“These tragic cases show real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe,” he said.

If you are contemplating suicide and need to talk, please reach out to Befrienders Worldwide, an international organisation with helplines in 32 countries. Visit befrienders.org to find the telephone number for your location.
What’s behind Google’s warning of escalating AI-generated malware


By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
November 5, 2025


Image: — AFP/File Josh Edelson

In an update to its January report, Google’s Threat Intelligence Group (GTIG) has identified a major shift from what many thought was merely adversarial use of AI for productivity gains: novel AI-enabled malware that integrates large language models (LLMs) during execution.

This new approach enables malware to alter itself dynamically mid-execution, reaching a level of operational versatility that is virtually impossible to achieve with traditional malware. The technique, a “just-in-time” self-modification, is exemplified by the experimental PromptFlux malware dropper and the PromptSteal (otherwise known as LameHug) data miner deployed in Ukraine.

“The most novel component of PROMPTFLUX is its ‘Thinking Robot’ module, designed to periodically query Gemini to obtain new code for evading antivirus software,” explains Google to Bleeping Computer.

To understand more, Digital Journal has heard from Evan Powell, CEO at DeepTempo.

Powell provides detail about the background to this new development: “Google’s Threat Intelligence Group (GTIG) has done us all a service by sharing the details of the use of Gemini by attackers and by emphasising that these approaches can include changing code during the attack. Combined with recent reports by Anthropic about the use of Claude by attackers and by OpenAI about the use of ChatGPT, today’s report by GTIG confirms that attackers are leveraging AI to boost their productivity and sophistication.”

There are limitations, however, which Powell identifies: “None of these reports explicitly call attention to one immediate implication of the now widespread use of LLMs by attackers: these approaches enable the attackers to circumvent today’s static, rules-based defences.”

Powell continues to outline the rising sophistication of cyberattacks: “By definition – an attack that has never been seen before is very unlikely to be seen by rules that were built to identify past attacks. Also, the productivity of the attackers is increasing quickly, with other reports such as the Anthropic report showing that they are even planning and executing entire campaigns with speed and intelligence that humans cannot match.”

Care is needed with any business cybersecurity strategy: “It may also be worth pointing out that today’s craze in cyber defence is either to better secure models – with most major cyber security companies having bought a start-up in this domain – or to use LLMs in cyber security SOCs to improve the speed of response by security operations centres. At last count there are over 50 start-ups attempting to automate the activities of the SOC with the help of LLMs.”

As to the future prospects, Powell summarises: “While this embrace, at least by investors and vendors, of LLMs for cyber security is promising it does not solve the fundamental implication of LLMs being used by attackers because it does not enable enterprises to better detect novel attacks.”

‘AI president’: Trump deepfakes glorify himself, trash rivals



By AFP
November 5, 2025


Donald Trump's provocative AI posts target his rivals and critics - Copyright AFP Brendan SMIALOWSKI


Anuj CHOPRA

In a parallel reality, Donald Trump reigns as king, fighter pilot, and Superman, and his political opponents are cast as criminals and laughingstocks — an unprecedented weaponization of AI imagery by a sitting American president.

Trump has ramped up his use of artificial intelligence-generated content on his Truth Social channel since starting his second White House term, making his administration the first to deploy hyper-realistic fake visuals as a core communications strategy.

Trump, no stranger to conspiracy theories and unfounded claims, has used the content in his breathless social media commentary to glorify himself and skewer his critics — particularly during moments of national outrage.

Last month, he posted a fake video showing himself wearing a crown and flying a fighter jet labeled “King Trump” that dumps what appears to be excrement on crowds of protesters.

The clip — accompanied by singer Kenny Loggins’s “Danger Zone” — was posted the same day as nationwide “No Kings” protests against what critics called his authoritarian behavior.

In another post, the White House depicted Trump as Superman amid fevered social media speculation about his health.

“THE SYMBOL OF HOPE,” the post said.

“SUPERMAN TRUMP.”

– ‘Distort reality’ –

Trump or the White House have similarly posted AI-made images showing the president dressed as the pope, roaring alongside a lion, and conducting an orchestra at the Kennedy Center, a venerable arts complex in the US capital.

The fabricated imagery has deceived social media users, some of whom questioned in comments whether they were authentic.

It was unclear whether the imagery was generated by Trump himself or his aides. The White House did not respond to AFP’s request for comment.

Wired magazine recently labeled Trump “America’s first generative AI president.”

“Trump peddles disinformation on and offline to boost his own image, attack his adversaries and control public discourse,” Nora Benavidez, senior counsel at the advocacy group Free Press, told AFP.

“For someone like him, unregulated generative AI is the perfect tool to capture people’s attention and distort reality.”

In September, the president triggered outrage after posting an apparent AI-generated video of himself promising every American access to all-healing “MedBed” hospitals.

MedBed, a widely debunked conspiracy theory popular among far-right circles, refers to an imaginary medical device equipped with futuristic technology. Adherents say it can cure any ailment, from asthma to cancer.

Trump’s phony clip — later deleted without any explanation — was styled as a Fox News segment and featured his daughter-in-law Lara Trump promoting a fictitious White House launch of the “historic new health care system.”

– ‘Campaigning through trolling’ –

“How do you bring people back to a shared reality when those in power keep stringing them along?” asked Noelle Cook, a researcher and author of “The Conspiracists: Women, Extremism, and the Lure of Belonging.”

Trump has reserved the most provocative AI posts for his rivals and critics, using them to rally his conservative base.

In July, he posted an AI video of former president Barack Obama being arrested in the Oval Office and appearing behind bars in an orange jumpsuit.

Later, he posted an AI clip of House minority leader Hakeem Jeffries — who is Black — wearing a fake mustache and a sombrero.

Jeffries slammed the image as racist.

“While it would in many ways be desirable for the president of the United States to stay above the fray and away from sharing AI images, Trump has repeatedly demonstrated that he sees his time in office as a non-stop political campaign,” Joshua Tucker, co-director of the New York University Center for Social Media and Politics, told AFP.

“I would see his behavior more as campaigning through trolling than actively trying to propagate the false belief that these images depict reality.”

Mirroring Trump’s strategy, California Governor Gavin Newsom on Tuesday posted an apparent AI video on X lampooning Republicans after Democrats swept key US elections.

The clip depicted wrestlers inside a ring with superimposed faces of Democratic leaders knocking down their Republican opponents, including Trump.

The post read: “Now that’s what we call a takedown.”

Nexperia chip exports resuming: German auto supplier

AFP
November 7, 2025


European carmakers and parts suppliers have warned of shortages of key chips supplied by Nexperia that would force stoppages at production lines in Europe - Copyright AFP Paul ELLIS

A leading German auto supplier said Friday it has received permission to export Nexperia chips from China again as Berlin welcomed signs of “de-escalation” in a row that has alarmed carmakers.

Dutch officials in September effectively took control of Netherlands-based chipmaker Nexperia, whose Chinese parent company Wingtech is backed by Beijing.

China responded by banning re-exports of the firm’s chips, triggering warnings from automakers of production stoppages as the components are critical to cars’ onboard electronics.

But Beijing announced at the weekend it will exempt some chips from the export ban, reportedly part of a trade deal agreed by President Xi Jinping and US counterpart Donald Trump.

Aumovio, which supplies components like sensors and displays to top automakers, said it had “received an export license from the Chinese government to export Nexperia chips”.

“We received the written confirmation yesterday,” a spokeswoman for the group, until recently part of Continental, told AFP.

Speaking earlier in Berlin, an economy ministry spokeswoman said that “the de-escalation and continuation of negotiations between the Netherlands and China are very welcome”.

She added: “We very much hope that these short-term individual approvals will quickly reach the industry.”

Berlin continues to engage in talks with the Netherlands on the issue, she said, without giving further details.


China and the Netherlands have been locked in a fight for control of chipmaker Nexperia – Copyright AFP Andrej ISAKOVIC

While relatively simple technology, Nexperia’s semiconductors are vital for onboard electronics in modern, technology-packed vehicles.

The chips are made in Europe but then sent to China for finishing, before being re-exported to clients in Europe and other markets.

Volkswagen, Europe’s biggest carmaker, had warned of production stoppages if the crisis dragged on while smaller firms were reported to be preparing to cut working hours.

The Netherlands had cited national security concerns when it moved to take control of Nexperia, and accused the firm’s CEO of mismanagement.

China had also accused the United States of getting involved in the case — Washington last year put Wingtech on a list of corporations viewed as acting contrary to US national security.
James Watson, Nobel prize-winning DNA pioneer, dead at 97


By AFP
November 7, 2025


Dr. James Watson speaks during a press conference at the Science museum in London, 20 May 2005 - Copyright AFP ODD ANDERSEN


Maggy DONALDSON


James Watson — the Nobel laureate co-credited with the pivotal discovery of DNA’s double-helix structure, but whose career was later tainted by his repeated racist remarks — has died, his former lab said Friday. He was 97.

The eminent biologist died Thursday in hospice care on Long Island in New York, announced the Cold Spring Harbor Laboratory, where he was based for much of his career.

Watson became among the 20th century’s most storied scientists for his 1953 breakthrough discovery of the double helix with researcher partner Francis Crick.

Along with Crick and Maurice Wilkins, he shared the 1962 Nobel Prize for their work — momentous research that gave rise to modern biology and opened the door to new insights including on genetic code and protein synthesis.

That marked a new era of modern life, allowing for revolutionary technologies in medicine, forensics and genetics, ranging from criminal DNA testing to genetically modified plants.

Watson went on to do groundbreaking work in cancer research and mapping the human genome.

But he later came under fire and bowed out of public view for controversial remarks, including that Africans were not as smart as white people.

Watson told the British weekly The Sunday Times he was “inherently gloomy about the prospect of Africa” because “all our social policies are based on the fact that their intelligence is the same as ours — whereas all the testing says not really.”



– Twisting ladder –



Born on April 6, 1928 in Chicago, Illinois, James Dewey Watson won a scholarship to the University of Chicago at the age of 15.

In 1947 he received a degree in zoology before attending Indiana University in Bloomington, where he received his PhD in zoology in 1950.

He became interested in the work of scientists working at the University of Cambridge in England with photographic patterns made by X-rays.

After moving to the University of Copenhagen, Watson began his investigation of the structure of DNA.

In 1951 he went to the Zoological Station at Naples, where he met researcher Maurice Wilkins and saw for the first time crystalline DNA’s X-ray diffraction pattern.

Before long he’d met Francis Crick and started what would go down as a celebrated partnership.

Working with X-ray images obtained by Rosalind Franklin and Wilkins, researchers at King’s College in London, Watson and Crick had started their historic work of puzzling out the double helix.

Their first serious effort came up short.

But their second attempt resulted in the pair presenting the double-helical configuration, a now iconic image that resembles a twisting ladder.

Their model also showed how the DNA molecule could duplicate itself, thus answering a fundamental question in the field of genetics.

Watson and Crick published their findings in the British journal Nature in April-May 1953 to great acclaim.

Watson taught at Harvard for 15 years before becoming director of what today is known as the Cold Spring Harbor Laboratory, which he transformed into a global hub of molecular biology research.

From 1988 to 1992, Watson was one of the directors of the Human Genome Project at the National Institutes of Health, where he oversaw the mapping of the genes in the human chromosomes.

But his comments on race and obesity — he was also known to make sexist remarks — triggered his retirement in 2007.

The lab severed all ties with him in 2020, including his emeritus status, after he once again made similar statements.
US Fed’s Cook warns inflation to stay ‘elevated’ next year

By AFP
November 3, 2025


Investors are in a bullish mood thanks to a string of data showing the US labour market slowing and inflation remaining stable, giving the Federal Reserve room to cut interest rates - Copyright AFP Angela WEISS

A key US central bank official warned Monday that inflation would likely remain elevated in the coming year as tariffs bite, while vowing to fulfill her duties even as President Donald Trump seeks her removal.

“My outreach to business leaders suggests that the pass-through of tariffs to consumer prices is not yet complete,” Federal Reserve Governor Lisa Cook said at the Brookings Institution think tank in Washington.

She noted that many companies have adopted a strategy of running down inventories at lower prices before raising consumer costs, while others are waiting for tariff uncertainty to dissipate before hiking prices.

“As such, I expect inflation to remain elevated for the next year,” Cook added.

But she vowed to “be prepared to act forcefully” if tariff effects appear to be larger or more persistent than expected.

Cook on Monday also nodded to her ongoing legal battle, saying she was “beyond grateful” for the support she has received.

She declined to comment further but pledged: “I will continue to carry out my sworn duties on behalf of the American people.”




Image: – © AFP/File Jim WATSON

Trump had moved in August to fire Cook over allegations of mortgage fraud, although the Supreme Court has barred the president from immediately ousting her.

The court awaits oral arguments in January, allowing Cook to remain in her post at least until the case is heard.

Cook is the first Black woman on the Fed’s powerful board of governors, and her case is set to have broader ramifications for the independent central bank.

On Monday, she added that even though the effects of tariffs on costs should be one-off, with inflation likely to continue cooling once the full impact has played out, there remains a risk of persistent effects.

The Fed has a long-term inflation target of two percent.

Cook also expects the ongoing government shutdown to weigh on economic activity this quarter, with possible spillover effects in the private sector. But she believes these should be “largely temporary.”

For now, Fed officials continue balancing between the risks of higher inflation and a sharply weakening labor market.

“Every meeting, including December’s, is a live meeting,” said Cook. The Fed’s next policy meeting is set for December 9-10.

Last week, the Fed made a second straight interest rate cut, a decision Cook said she backed as “the downside risks to employment are greater than the upside risks to inflation.”
Battered US businesses eye improved China trade at Shanghai expo


By AFP
November 6, 2025


At the China International Import Expo, US exporters hit by the trade war with China hope improving bilateral relations will bring stability - Copyright AFP Hector RETAMAL

Jing Xuan TENG

Plying everything from handbags to salt in a cavernous Shanghai exhibition hall, US exporters hit by the trade war with China said Thursday they hope improving bilateral relations will bring much-needed stability.

After spending much of this year in a tit-for-tat tariff escalation, the United States and China have agreed to walk back from some punitive measures after a meeting last week between leaders Donald Trump and Xi Jinping.

At the annual China International Import Expo (CIIE), US ginseng seller Ming Tao Jiang told AFP multiple rounds of duties imposed since Trump’s first presidential term had “decimated” growers in central Wisconsin state.

“Before 2018 we had 200 registered growers in Wisconsin, in Marathon County… after the first and second round of tariff wars, adding insult to injury of Covid, we’re down to 70,” said Jiang, founder of the Marathon Ginseng company.

“With the recent agreement between the two governments, I think things are stabilised, we’re looking for a better potential in the future,” Jiang added.

The North American variety of the aromatic root ginseng, believed to have medicinal properties in traditional Asian cultures, was one of the first products shipped by the United States to China in the 1780s.

US and Chinese authorities have sporadically slapped retaliatory tariffs on each other’s ginseng products since 2018, with Jiang saying his goods currently face a 45 percent import duty in China.

“We’re here trying to keep our tradition going for the local economy,” he told AFP.

– ‘Hurting everybody’ –

Other US exhibitors echoed Jiang’s cautious optimism, as visitors sampled Chinese-style baijiu liquor made from American rice and browsed stalls advertising cornbread mixes and California almonds.

Tara Qu is a trade representative in China for Idaho state who on Thursday oversaw the ceremonial signing of a purchase agreement between a Chinese maker of salted duck eggs and dynamite.

“I think the tariff decrease can help a little bit,” Qu told AFP, referring to the recent agreement by China and the United States to suspend additional tariffs on each other’s goods.

But as Beijing continues to levy a 10 percent blanket tariff on US imports, “we hope there will be a further reduction, so that trade can go back to normal”, Qu said.

Qu added that US companies fear that Chinese buyers spooked by the trade war will turn to alternative suppliers from other countries.

She pointed to Anderson Northwest, an Idahoan producer of beans and pulses, as a CIIE exhibitor hit especially hard by tariffs this year.

“Since the tariffs increased by 20 percent, they haven’t exported any of their products to China,” Qu said.

Eric Zheng, president of the American Chamber of Commerce in Shanghai, told AFP: “We certainly hope that there will be more reductions in tariffs, because tariffs are hurting everybody.”

“We have a long way to go to lower tariffs on (chamber) members,” Zheng said, noting that Californian wines, for example, are currently subject to over 100 percent in Chinese import duties.

Throughout the trade war, “it was very difficult to plan for the long term,” Zheng said.

Zheng welcomes planned visits by Trump and Xi to each other’s countries next year.

“With those political events in place, I think we’ll see (a) more stable environment, at least in the next year, if not beyond,” Zheng said. “That’s welcome news for us”.