Tuesday, August 19, 2025

'Bird killer': Trump slammed for 'hypocrisy' over wind farm vendetta

Adam Nichols
August 19, 2025 


During a recent appearance with UK Prime Minister Keir Starmer at Trump's Scottish golf course, the president ranted that windmills are "a disaster" that "kill all your birds." His Interior Secretary Doug Burgum quickly fell in line, tweeting that "wind projects are known to kill eagles" while announcing investigations into turbine impacts on bird populations.

But Trump's bird crusade is rooted in pure spite, wrote The New Republic's Liza Featherstone. The president has harbored a grudge against wind power since 2012, when plans emerged for a wind farm near his Aberdeen golf course. "They're horrible looking structures," Trump complained at the time. "They make noise, they kill birds by the thousands." He sued unsuccessfully to block the project, which was completed in 2018.

The hypocrisy is staggering, Featherstone wrote in her article titled "Trump is a bird killer."

Just four months ago, Trump called for gutting the very Bald and Golden Eagle Protection Act that Burgum now claims to champion. His administration is simultaneously weakening the Endangered Species Act and diluting the Migratory Bird Treaty Act.


The facts demolish Trump's anti-wind narrative, Featherstone wrote. Wind turbines cause less than 0.01 percent of human-caused bird deaths—far fewer than cats, buildings, or the fossil fuel industry Trump champions. Coal destroys bird habitat while oil and gas infrastructure kills far more birds than turbines.


"The biggest threat to birds by far is climate change," environmental experts note, pointing to Audubon Society estimates that two-thirds of American bird species could face extinction from unchecked global warming.

Trump's bird protection theater is "petty, self-serving, cynical, and hypocritical," Featherstone wrote, and stems from his inability to stop a wind farm near his "tacky golf courses."
Death toll from northern Pakistan monsoon floods rises to almost 400


By AFP
August 19, 2025


Rescuers resumed searching for survivors as the death toll from monsoon floods in northern Pakistan rose to almost 400 - Copyright AFP 

Aamir QURESHI
Zain Zaman JANJUA

Rescuers and residents resumed searching on Tuesday for survivors as the death toll from five days of torrential rain rose to almost 400, with authorities warning monsoon downpours would continue until the weekend.

Torrential rains across Pakistan’s north have caused flooding and landslides that have swept away entire villages, leaving many residents trapped in the rubble and scores missing.

The National Disaster Management Authority (NDMA) said 356 people had been killed in Khyber Pakhtunkhwa, a mountainous province in Pakistan’s northwest bordering Afghanistan, since Thursday evening.

Dozens more were killed in surrounding regions, taking the toll in the past five days to almost 400.

Rescuers dug through mud and stone in hard-hit Dalori village in Khyber Pakhtunkhwa in the hope of finding survivors and the bodies of people missing.

Villagers stood watching and praying as the rescuers worked, a day after the search was halted by more intense rain.

Umar Islam, a 31-year-old labourer, struggled to hold back his tears as he spoke about his father, who was killed on Monday.

“Our misery is beyond explanation,” Islam told AFP as neighbours tried to console him.

“In a matter of minutes, we lost everything we had,” he said.

“Our lives are ruined.”

Fazal Akbar, 37, another villager, described the aftermath of the floods as “terrifying”.

“It happened so suddenly that no one even had a minute to react. Announcements were made from the mosque, and villagers rushed to begin the rescue themselves,” said Akbar.

“In less than 20 minutes, our village was reduced to ruins.”



– More rain –



Many roads have been damaged, making it hard for rescuers to reach areas hit by the floods.

Communication also remains difficult, with phone networks hit in flood-affected areas.

Heavy rain also began falling on Tuesday in southern parts of Pakistan that had so far been spared the worst of the monsoon downpours.

The rain was expected to continue until Saturday, and “another spell is to start by the end of the month”, said NDMA chairman Lieutenant General Inam Haider Malik.

More than 700 people have been killed in the monsoon rains since June 26, the NDMA said, with close to 1,000 injured. The monsoon is expected to last until mid-September.

Authorities also warned of urban flooding in big cities in coastal areas of Sindh province, including the financial capital Karachi, “due to weak infrastructure”.

It has also been raining in 15 districts in neighbouring Balochistan province, and the main highway connecting it with Sindh has been blocked for heavy vehicles, said provincial disaster official Muhammad Younis.

Between 40 and 50 houses had been damaged in two districts, he said.

Landslides and flash floods are common during the monsoon season, which typically begins in June and lasts until the end of September.

Pakistan is among the world’s most vulnerable countries to the effects of climate change and is increasingly facing extreme weather events.

Monsoon floods submerged one-third of Pakistan in 2022, resulting in approximately 1,700 deaths.
Serbia protesters accuse police of abuse and warn of ‘spiral of violence’


By AFP
August 19, 2025


Serbian riot police have been cracking down on anti-government rallies in Belgrade - Copyright AFP Andrej ISAKOVIC


Mina PEJAKOVIC, Ognjen ZORIC

Serbian anti-government protesters accused police Tuesday of beating and threatening detained activists, fuelling fears of a spiralling crackdown after a week of violent clashes.

Almost daily protests have gripped Serbia since November, sparked by the collapse of a railway station roof that killed 16 people.

The tragedy became a symbol of deep-rooted corruption in the Balkan nation, with demands for a transparent investigation morphing into calls for early elections.

But in the past week, the mostly peaceful demonstrations have erupted into street violence over several nights, with loyalists of President Aleksandar Vucic attacking protesters and riot police responding forcefully to the destruction of his party offices.

More than 100 protesters have been detained, with one telling AFP they were beaten and threatened with rape while in custody.

Nikolina Sindjelic, a student activist, said she was dragged along with another student and several other protesters into a government garage in central Belgrade by officers in a special police unit on Thursday night.

“The commander of the unit brutally beat both him and me,” the 22-year-old political science student said.

“He called us offensive names, told me I was a whore and that he would rape me there in front of everyone, that I would regret trying to overthrow the state.”

The Ministry of Internal Affairs “strongly denied” the allegations. It said Sindjelic was arrested for being part of a group that had attacked government buildings and police.

“Throughout the entire procedure, no means of coercion, handcuffing, insults, or any form of mistreatment were applied,” it said.

Sindjelic, who was released with a misdemeanour charge, said she intends to file a lawsuit over her alleged abuse.



– ‘Brutal behaviour’ –



Hundreds gathered in front of the accused commander’s police building in Belgrade on Tuesday to support Sindjelic and decry widespread reports of police brutality.

Protesters carried banners reading, “Rapists with badges” and “The system beats us, we defend ourselves”.

“Unfortunately, we are now entering a spiral of violence, and I do not see how it can end well if things continue in this way,” 31-year-old protester Andrej Sevo said.

“They must decide how to act and calm the situation, rather than simply pouring fuel on the fire by sending in the police, with ever more brutal behaviour.”

Aleksandra Krstic, 45, also at the rally, said women were especially vulnerable to police abuse.

“We have no one to protect us. If I go to a protest, I should be able to turn to the police… not be beaten, dragged into some basement of a government building, threatened with rape, and forced to beg them to stop,” the political science professor said.



– ‘A bid to seize power’ –



On Monday night, protesters again faced off with riot police after an office of Vucic’s Serbian Progressive Party had its windows smashed by a passing crowd.

Within an hour, the embattled leader stood in front of the shattered glass, flanked by pro-government media and security, to denounce the anti-graft demonstrators as “terrorists” — a familiar refrain for the 55-year-old statesman.

“We will fight them everywhere, and we will resist them wherever they appear,” he said, after riot police had chased protesters from the area.

While the protests have so far led to the resignation of the prime minister and the collapse of the government, Vucic — in power for 13 years — has remained defiant.

He has repeatedly rejected calls for early elections and recently threatened a “strong response” to the demonstrations.

“This is an attempt at a foreign-funded colour revolution, in which no means are spared, and violence is used in a bid to seize power,” he said on Monday night.

He has frequently decried the movement as a “colour revolution” — a term favoured by the Kremlin and its allies to smear protest movements as illegitimate.

Russia remains a close Serbian ally despite Belgrade’s declared path to the European Union.

The EU ambassador to Serbia, Andreas von Beckerath, said he and other diplomats had met with the government to “discuss the current political situation” in Serbia.

“The EU Ambassador underlined the need by all parties to uphold the respect for fundamental rights, including the right for peaceful assembly,” Beckerath said Monday.

“Any suspicion of excessive use of force needs to be duly investigated, including worrying reports about threats and violence against journalists,” he said.
Panama hopes to secure return of US banana giant Chiquita

By AFP
August 18, 2025

Chiquita workers at the plant in Bocas del Toro, which relies heavily on tourism and banana production, went on strike on April 28 to protest pension reforms - Copyright AFP/File DANIEL SANTOS

Panamanian President Jose Raul Mulino will meet with representatives of US banana giant Chiquita Brands in Brazil later this month amid a push for the company to resume operations in his country after it laid off its entire workforce due to a strike, a minister announced Monday.

Chiquita, which employed more than 6,000 people in the town of Changuinola in the Caribbean province of Bocas del Toro, laid off the workers earlier this spring after prolonged protests that paralyzed the region.

The meeting will take place during Mulino’s visit to Brazil, which begins August 28, and officials hope it will lead to an agreement with Chiquita, said Commerce and Industry Minister Julio Molto.

Talks with the company “are progressing positively… I hope we can reach a good agreement with Chiquita and that the president can close it in Brazil so that the company can return to the country,” Molto said.

“If everything goes as planned, we could have good news in September or the end of this month,” Molto added in a statement to broadcaster Telemetro, adding that the company’s return would have “to be phased.”

According to the minister, Chiquita is evaluating its losses and analyzing ways of hiring new staff.

The company has also reportedly requested guarantees that supply routes will not be closed in the event of future protests.

Chiquita workers at the plant in Bocas del Toro, which relies heavily on tourism and banana production, went on strike on April 28 to protest pension reforms.

The strike has led to more than $75 million in losses as well as road closures and product shortages in the province.
Lightweight perovskite charges up solar potential

By Dr. Tim Sandle
August 18, 2025
EDITOR AT LARGE
DIGITAL JOURNAL


An aerial view of solar mirrors at the Noor 1 Concentrated Solar Power plant outside the town of Ouarzazate. Morocco has already bet heavily on clean energy - Copyright AFP/File FADEL SENNA

The perovskite photovoltaic market is forecast to exceed US$11.75 billion by 2035. Yet the technology is not standing still. How will the energy and decarbonization sector progress?

In 2021, solar installations overtook wind generation, and in 2023, approximately 450 GW of new solar capacity was added. This shift is charted in IDTechEx’s report “Perovskite Photovoltaic Market 2025-2035: Technologies, Players & Trends”.

The report explores the rise of perovskite integration into the photovoltaic market alongside key players and forecasts within the sector.

Solar cells and silicon

Solar cells convert light into electricity. The active layer absorbs light, freeing electrons that can move through the material and leaving behind positively charged holes. When the electrons and holes collect at opposite electrodes, a current flows that can power an external load.

Silicon has long been used as an active semiconductor material within solar cells. Silicon solar technologies make up the majority of solar power installations, with decarbonization regulations and governmental support in many economies worldwide helping to drive its uptake.

The introduction of perovskite

According to the report, however, silicon solar will eventually reach its efficiency limit, and its supply chain is centralized in China. With ongoing global economic uncertainty and countries looking to reduce reliance on others where possible, alternative solar power technologies are being explored, including perovskite photovoltaics.

Perovskite may be used to enhance the efficiency of solar cells and fill in for applications that silicon may not be best suited to. Perovskite solar cells are known for their light weight and flexibility, in contrast to silicon solar, which is more rigid. They are a type of thin-film solar device whereby the perovskite active layer is deposited onto a substrate, such as glass or plastic, between electron and hole transport layers and electrodes. They also have lower production costs than other alternatives, making them a favourable option for photovoltaics manufacturers.

Types of perovskites under development

Solution-based processing, used to manufacture perovskites, is scalable and can be automated, helping to lower manufacturing costs in the long run. Costs are also kept down because perovskite solar cells are made from relatively abundant, low-cost materials, pointing to the growing feasibility of their uptake.

All-perovskite tandem solar cells describe two layers of perovskite PV stacked on top of one another. The materials can be tailored to alter their optical properties in order to convert different wavelengths of light. Perovskite solar could also be integrated with silicon to increase the maximum power conversion efficiency of the device up to 43%.
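To make the efficiency figures above concrete, here is a rough, illustrative Python sketch of how power conversion efficiency (PCE) is calculated: the power a cell delivers at its maximum power point divided by the incident solar power under standard test conditions. The voltage, current, and area values below are hypothetical examples, not figures from the IDTechEx report.

```python
# Illustrative only: power conversion efficiency (PCE) is the electrical power a
# solar cell delivers at its maximum power point divided by the incident solar
# power. Standard test conditions assume 1000 W/m^2 of irradiance. The voltage,
# current, and area values below are hypothetical, not figures from the report.

STC_IRRADIANCE = 1000.0  # W/m^2, standard test conditions

def pce(v_mp: float, i_mp: float, area_m2: float) -> float:
    """Power conversion efficiency from maximum-power-point voltage, current, and cell area."""
    p_out = v_mp * i_mp              # watts delivered by the cell
    p_in = STC_IRRADIANCE * area_m2  # watts of sunlight hitting the cell
    return p_out / p_in

# Hypothetical single-junction cell vs. a perovskite-on-silicon tandem of the same area:
print(f"single junction: {pce(v_mp=0.65, i_mp=9.5, area_m2=0.0244):.1%}")  # ~25%
print(f"tandem stack:    {pce(v_mp=1.80, i_mp=4.6, area_m2=0.0244):.1%}")  # ~34%
```

Stacking two absorbers tuned to different wavelengths is what lets a tandem device push past the single-junction ceiling toward the 43% maximum cited above.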

The way forward for photovoltaics

As a relatively new photovoltaic material, perovskite enables a lighter, more adaptable approach to the technology, with the option to combine its benefits with those of incumbent silicon solar cells for increased performance.
Brazil asks Meta to remove chatbots that ‘eroticize’ children

By AFP
August 19, 2025


Meta: — © AFP RONALDO SCHEMIDT

Brazil’s government has asked US technology giant Meta to rid its platforms of chatbots that mimic children and can make sexually suggestive remarks, the attorney general’s office (AGU) announced Monday.

Users of Meta’s platforms, which include Instagram, Facebook and WhatsApp, can create and customize such bots using the company’s generative artificial intelligence, AI Studio.

The AGU said in a statement that Meta must “immediately” remove “artificial intelligence robots that simulate profiles with childlike language and appearance and are allowed to engage in sexually explicit dialogue.”

It denounced the “proliferation” of such bots in what it called an “extrajudicial notice” sent to Meta last week, adding that they “promote the eroticization of children.”

The document cited several examples of sexually charged conversations with bots pretending to be minors.

The AGU’s request does not include sanctions, but the agency said it had reminded Meta that online platforms in Brazil must take down illicit content created by their users, even without a court order.

It comes at a time of outrage in the South American nation over a case of alleged child sexual exploitation by Hytalo Santos, a well-known influencer who posted content on Instagram featuring partially naked minors taking part in suggestive dances.

Santos was arrested last week as part of an investigation into “exposure with sexual connotations” to adolescents, and his Instagram account is no longer available.

In June, Brazil’s Supreme Court voted to require tech companies to assume greater responsibility for user-generated content.
Google agrees to US$36m fine over Android search deals


By AFP
August 19, 2025


Google image: - © GETTY IMAGES NORTH AMERICA/AFP Brandon Bell

Google has agreed to pay an Aus$55 million (US$36 million) penalty for striking “anti-competitive” deals to pre-install only its own search engine on Android mobile phones sold by two leading Australian telecoms firms.

Australia’s competition authority said it had launched proceedings in the Federal Court and jointly submitted with Google Asia Pacific that it should pay the fine.

The court would now decide whether the agreed penalty and other orders were “appropriate”, the Australian Competition and Consumer Commission said in a statement released on Monday.

“Conduct that restricts competition is illegal in Australia because it usually means less choice, higher costs or worse service for consumers,” said the commission’s chair, Gina Cass-Gottlieb.

Google had cooperated with the competition commission and admitted reaching the deals with telecoms firms Telstra and Optus, which were in place from December 2019 to March 2021, the body said.

In return for only installing Google’s search engine, Telstra and Optus had received a share of the resulting advertising revenue, the commission said.

“Google has admitted in reaching those understandings with each of Telstra and Optus it was likely to have had the effect of substantially lessening competition,” it said.

Google said it was pleased to have resolved the regulator’s concerns over the provisions, adding that they had not been in its commercial agreements for “some time”.

“We are committed to providing Android device makers more flexibility to preload browsers and search apps,” a Google spokesperson said.

Telstra and Optus entered court-enforceable agreements last year not to make new agreements to pre-install Google search as the default on Android devices, the competition watchdog said.
Ransomware is on the rise: Global cybercrime hits new highs


By Dr. Tim Sandle
August 18, 2025
EDITOR AT LARGE
DIGITAL JOURNAL


Investors are pumping millions of dollars into encryption as unease about data security drives a rising need for ways to keep unwanted eyes away from personal and corporate information — © AFP

Ransomware appears to be on the rise. The first half of 2025 saw a 49% spike in ransomware attacks, with US organizations and small and medium-sized businesses (SMBs) as the primary targets.

The latest data compiled by NordStellar, a threat exposure management platform, shows that 4,198 ransomware cases were exposed on the dark web in January-June 2025, an alarming 49% increase from the 2,809 cases recorded in 2024. Data from 2025 Q2 also shows that attackers keep targeting US companies, with SMBs and companies in the manufacturing industry taking the biggest hits.

As Noreika points out: “We’re only halfway into the year, but the number of ransomware attacks has already doubled, signifying that these attacks remain effective and profitable enough for cybercriminals to ramp up their efforts. Some factors that could contribute to the growth in ransomware attacks include the rise in ransomware-as-a-service (RaaS), expanded attack surfaces from remote or hybrid work models, and economic uncertainty that could encourage more people to seek illegal income and turn to cybercrime.”

Main targets in 2025 Q2

In April-June 2025, 1,758 ransomware cases were exposed on the dark web, a 19% increase compared to the same period in 2024 (1,483 cases). Of the 1,205 ransomware incidents traced to specific victim countries, US businesses took the most brutal hit, accounting for 49% of cases (596 incidents). Germany holds the second spot with 84 cases, followed by Canada (74), the United Kingdom (40), and Spain (37).
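For readers who want to check the percentages, the short Python sketch below simply recomputes them from the case counts cited in this article; no other data is assumed.

```python
# Recompute the year-on-year changes cited above from the raw case counts
# reported in this article (rounded to whole percentages).

def pct_change(new: int, old: int) -> float:
    """Change relative to the earlier figure, as a percentage."""
    return (new - old) / old * 100

h1_2025, h1_baseline = 4_198, 2_809  # cases exposed on the dark web, Jan-Jun 2025 vs 2024 baseline
q2_2025, q2_2024 = 1_758, 1_483      # cases in Apr-Jun
us_cases, attributed = 596, 1_205    # incidents traced to a specific victim country

print(f"H1 increase: {pct_change(h1_2025, h1_baseline):.0f}%")       # ~49%
print(f"Q2 increase: {pct_change(q2_2025, q2_2024):.0f}%")           # ~19%
print(f"US share of attributed cases: {us_cases / attributed:.0%}")  # ~49%
```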

“Not only is the US home to many profitable businesses, but the companies also have a higher profile. As a result, they’re more likely to give into ransomware demands to reduce the impact of the reputational damage resulting from an attack”, adds Noreika.

“Strict regulations are also a significant factor to consider — laws on data protection and operational uptime can urge companies to resolve ransomware incidents quickly and not risk the fines or loss of their clients and partners’ trust.”

Ransomware data from April to June 2025 revealed that the manufacturing industry was most affected, with 229 recorded cases. The construction industry came in second with 97 cases, followed closely by information technology (88 incidents).

The data also revealed that small and medium-sized businesses (SMBs) were the prime target for ransomware in 2025 Q2. Organizations with 51–200 employees and revenues between $5 million and $25 million faced the most ransomware attacks.

“The victim profile mirrors the data from 2025 Q1 – SMBs and companies in the manufacturing industry remain the prime targets. This is a significant cause for concern because bad actors continue successfully exploiting preventable security vulnerabilities,” says Noreika.

He explains that companies in the manufacturing industry face challenges enforcing and centralizing security across all geographically dispersed locations and often rely on outdated and unpatched systems. SMBs, like manufacturing companies, often rely on third-party IT providers and lack comprehensive cybersecurity measures due to limited budgets, exposing them to greater risk.

Who’s responsible?

The ransomware group Qilin was responsible for the most attacks in 2025 Q2, with 214 incidents. Safepay holds the second spot with 201 incidents, followed closely by Akira (200 incidents).

According to Noreika, Safepay is the newest of the three, with NordStellar first detecting their activity in Fall 2024. Their attacks significantly increased in Q2 and spiked in May, with 158 incidents alone.

Building a ransomware-resistant business

Noreika explains that employees are the first line of defence against ransomware. Cybersecurity training on phishing scams, the importance of multi-factor authentication, and password management are essential to minimize the risk of bad actors gaining access to sensitive data or infiltrating the network.

“Aside from raising cybersecurity awareness, companies should also build a comprehensive cybersecurity strategy to detect threats before they escalate. This includes implementing endpoint protection, monitoring the dark web for potential data leaks, and keeping a close eye on the company’s attack surface for unpatched security vulnerabilities,” says Noreika.

To minimize the impact of a potential ransomware incident, Noreika recommends that businesses stay two steps ahead, implement recovery plans, and always back up critical data.
AI Hype Is the Product and Everyone’s Buying It


AI’s flaws and dangers are glaring, yet the industry keeps growing, fueled by fantasies and fears of missing out (FOMO)
Truthout/Harper
August 16, 2025

A man works on the electronics of Jules, a humanoid robot from Hanson Robotics that uses artificial intelligence, at a stand during the International Telecommunication Union (ITU) AI for Good Global Summit in Geneva, Switzerland, on July 8, 2025.
VALENTIN FLAURAUD / AFP via Getty Images



This article is an excerpt adapted from the book The AI Con: How To Fight Big Tech’s Hype and Create the Future We Want by Emily M. Bender and Alex Hanna (Copyright © 2025 by Emily M. Bender and Alex Hanna). Reprinted courtesy of Harper, an imprint of HarperCollins Publishers.

As long as there’s been research on AI, there’s been AI hype. In the most commonly told narrative about the research field’s development, mathematician John McCarthy and computer scientist Marvin Minsky organized a summer-long workshop in 1956 at Dartmouth College in Hanover, New Hampshire, to discuss a set of methods around “thinking machines”. The term “artificial intelligence” is attributed to McCarthy, who was trying to find a name suitable for a workshop that concerned a diverse set of existing knowledge communities. He was also trying to find a way to exclude Norbert Wiener — the pioneer of a proximate field, cybernetics, a field that has to do with communication and control of machines — due to personal differences.

The way the origin story is told, Minsky and McCarthy convened the two-month working group at Dartmouth, consisting of a group of ten mathematicians, physicists, and engineers, which would make “a significant advance” in this area of research. Just as it is today, the term “artificial intelligence” did not have much coherence. It did include something similar to today’s “neural networks” (also called “neuron nets” or “nerve nets” in those early documents), but also covered topics that included “automatic computers” and human-computer language interfaces (what we would today consider to be “programming languages”).

Fundamentally, the forerunners of this new field were concerned with translating dynamics of power and control into machine-readable formulations. McCarthy, Minsky, Herbert Simon (political scientist, economist, computer scientist, and eventual Nobel laureate), and Frank Rosenblatt (one of the originators of the “neural network” metaphor) were concerned with developing tools that could be used for the guidance of administrative — and ultimately — military systems. In an environment where the battle for American supremacy in the Cold War was being fought on all fronts — military, technological, engineering, and ideological — these men sought to gain favor and funding in the eyes of a defense apparatus trying to edge out the Soviets. They relied on huge claims with little to no empirical support, bad citation practices, and moving goalposts to justify their projects, which found purchase in Cold War America. These are the same set of practices that we see from today’s AI boosters, although they are now primarily chasing market valuations, in addition to government defense contracts.

The first move in the original AI hype playbook was foregrounding the fight with the Soviets. The second was to argue that computers were likely to match human capabilities by arguing that humans weren’t really all that complex. In 1956, Minsky claimed in an influential paper that “[h]uman beings are instances of certain kinds of very complicated machines.” If that were indeed the case, we could use more controllable electronic circuits in place of people in military and industrial contexts.

In the late 1960s, Joseph Weizenbaum, a German émigré, professor at the Massachusetts Institute of Technology, and contemporary of Minsky, was alarmed by how quickly people attributed agency to automated systems. Weizenbaum developed a chatbot called ELIZA, named for the working-class character in George Bernard Shaw’s Pygmalion who learns to mimic upperclass speech. ELIZA was designed to carry on a conversation in the style of a Rogerian psychotherapist; that is, the program primarily repeated what its users said, reframing their thoughts into questions. Weizenbaum used this form for ELIZA, not because he thought it would be useful as a therapist, but rather because it was a convenient setup for the chatbot: this kind of psychotherapy is one of the few conversational situations where it wouldn’t matter if the machine didn’t have access to other data about the world.
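As a rough illustration of the kind of pattern matching and reflection described above, here is a minimal, modern Python sketch in the spirit of ELIZA. It is not Weizenbaum's original program, which used a much richer keyword script; the patterns and canned responses below are invented for the example.

```python
import re

# A minimal sketch of ELIZA-style reflection: match a simple pattern in the
# user's utterance and turn it back into a question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones so the echo reads naturally.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I feel anxious about my exams"))
# -> "Why do you feel anxious about your exams?"
```

Even this toy version shows why users read intention into the output: the program only rearranges what it was just told, yet the questions it produces sound attentive.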


Despite its grave limitations, computer scientists used ELIZA to celebrate how thoroughly computers could replace human labor and heralded the entry into the artificial intelligence age. A shocked Weizenbaum spent the rest of his life as a critic of AI, noting that humans were not meat machines, while Minsky went on to found MIT’s AI laboratory and rake in funding from the Pentagon unhindered.

Cover of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want
Harper

The murky, unethical funding networks — through unfettered weapons manufacturing then, and with the addition of ballooning speculative venture capital investments now — around AI continue to this day. So does the drawing of false equivalences between the human brain and the calculating capabilities of machines. Claiming such false equivalences inspires awe, which, it turns out, can be used to reel in boatloads of money from investors whipped into a FOMO frenzy.

When we say boatloads, think megayachts: in January 2023, Microsoft announced that it intended to invest $10 billion in OpenAI. This is after Mustafa Suleyman (former CEO of DeepMind, made CEO of Microsoft AI in March 2024) and LinkedIn cofounder Reid Hoffman received a cool $1.3 billion from Microsoft and chipmaker Nvidia in a funding round to their young startup, Inflection.AI. OpenAI alums cofounded Anthropic, a company solely focused on creating generative AI tools, and received $580 million in an investment round led by crypto-scammer Sam Bankman-Fried. These startups, and a slew of others, have been chasing a gold mine of investment from venture capitalists and Big Tech companies, frequently without any clear path to robust monetization. By the second quarter of 2024, venture capital was dedicating $27.1 billion, or nearly half of their quarterly investments, to AI and machine learning companies.

The incentives to ride the AI hype train are clear and widespread — dress something up as AI and investments flow. But both the technologies and the hype around them are causing harm in the here and now.
Of Hype and Harm

There are applications of machine learning that are well scoped, well tested, and involve appropriate training data such that they deserve their place among the tools we use on a regular basis. These include such everyday things as spell-checkers (no longer simple dictionary look-ups, but able to flag real words used incorrectly) and other more specialized technologies like image processing used by radiologists to determine which parts of a scan or X-ray require the most scrutiny. But in the cacophony of marketing and startup pitches, these sensible use cases are swamped by promises of machines that can effectively do magic, leading users to rely on them for information, decision-making, or cost savings — often to their detriment or to the detriment of others.

As investor interest pushes AI hype to new heights, tech boosters have been promoting AI “solutions” in nearly every domain of human activity. We’re told that AI can shore up threadbare spots in social services, providing medical care and therapy to those who aren’t fortunate enough to have good access to health care, education to those who don’t live in a wealthy school district, and legal services for people who can’t afford a licensed attorney. We’re told that AI will provide individualized versions of all of these things, flexibly meeting user needs. We’re told that AI will “democratize” creative activity by allowing anyone to become an artist. We’re told that AI is on the verge of doing science for us, finally providing us with answers to urgent problems from medical breakthroughs (discovering a cure for cancer!) to the climate crisis (discovering a solution for global warming!). And self-driving cars are perpetually just around the corner (watch out: that means they’re about to run into you). But as you may have surmised from our snarky tone, these solutions are, by and large, AI hype. There are myriad cases in which AI solutions have been posed but fall short of their stated goals.

In 2017, a Palestinian man was arrested by Israeli authorities over a Facebook post in which he posed next to a bulldozer with the caption (in Arabic) of “good morning.” Facebook’s machine translation software rendered that as “hurt them” in English and “attack them” in Hebrew — and the Israeli authorities just took that at face value, never checking with any Arabic speakers to see if it was correct. Machine translation has also become a weak stopgap in other critical situations, such as in handling asylum cases. Here, the problem to solve is one of communication, between people fleeing violence in their home countries and immigration officials. Machine translation systems, which can work well in cases like translating newspapers written in standard varieties of a handful of dominant languages, can fail drastically in translating asylum claims written or spoken in minority languages or dialects.

In August 2020, thousands of British students, unable to take their A-level exams due to the COVID-19 pandemic, received grades calculated based on an algorithm that took as input, among other things, the grades that other students at their schools received in previous years. After massive public outcry, in which hundreds of students gathered outside the prime minister’s residence at 10 Downing Street in London, chanting “Fuck the algorithm!” the grades were retracted and replaced with grades based on teachers’ assessment of student work. In May 2023, Jared Mumm, a professor at Texas A&M University, suspected his students of cheating by using ChatGPT to write their final essays — so he input the essays into ChatGPT and asked it whether it wrote them. After reading ChatGPT’s affirmative output, he assigned the whole class incomplete grades, and some seniors were (temporarily) denied their diplomas.

On our roads, promises of self-driving cars have led to death and destruction. A Tesla employee died after engaging the so-called “Full Self-Driving” mode in his Tesla Model 3, which ran the car off the road. (We know this partially because his passenger survived the crash.) A few months later, on Thanksgiving Day 2022, Tesla CEO Elon Musk announced the availability of Tesla’s “Full Self-Driving” mode. Hours later, it was involved in an eight-car pileup on the San Francisco–Oakland Bay Bridge.

In 2023, lawyer Steven A. Schwartz, representing a plaintiff in a lawsuit against an airline, submitted a legal brief citing legal precedents that he found by querying ChatGPT. When the lawyers defending the airline said they couldn’t find some of the cases cited and the judge asked Schwartz to submit them, he submitted excerpts, rather than the traditional full opinions. Ultimately, Schwartz had to own up to having trusted the output of ChatGPT to be accurate, and he and his cocounsel were sanctioned and fined by the court.

In November 2022, Meta released Galactica, a large language model trained on scientific text, and promoted it as able to “summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.” The demo stayed up for all of three days, while the worldwide science community traded examples of how it output pure fabrications, including fake citations, and could easily be prompted into outputting toxic content relayed in academic-looking prose.

What all of these stories have in common is that someone oversold an automated system, people used it based on what they were told it could do, and then they or others got hurt. Not all stories of AI hype fit this mold, but for those that don’t, it’s largely the case that the harm is either diffuse or undocumented. Sometimes, people are able to resist AI hype, think through the possible harms, and choose a different path. And that brings us to our goal in writing this book: preventing the harm from AI hype. When people can spot AI hype, they make better decisions about how and when to use automation, and they are in a better position to advocate for policies that constrain the use of automation by others.

Copyright © 2025 by Emily M. Bender and Alex Hanna



Emily M. Bender

Dr. Emily M. Bender is a professor of linguistics at the University of Washington, where she is also the faculty director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School. In 2023, she was included in the inaugural TIME 100 list of the most influential people in AI. She is frequently consulted by policy makers, from municipal officials to the federal government to the United Nations, for insight into how to understand so-called AI technologies.


Alex Hanna

Dr. Alex Hanna is director of research at the Distributed AI Research Institute (DAIR) and a lecturer in the School of Information at the University of California Berkeley. She is an outspoken critic of the tech industry, a proponent of community-based uses of technology, and a highly sought-after speaker and expert who has been featured across the media, including articles in The Washington Post, Financial Times, The Atlantic, and TIME.
Op-Ed: China vs US AI in space – China’s Wukong spacewalk raises the bar for AI performance


By Paul Wallis
August 18, 2025
EDITOR AT LARGE
DIGITAL JOURNAL


Tiangong Space Station in late July 2022, with the Tianhe core module in the middle, the Wentian lab module on the left, Tianzhou cargo spacecraft on the right, and the Shenzhou-14 crewed spacecraft at nadir. Image dated July 25, 2022. Source - Shujianyang. CC SA 4.0.

For all the talk about “AI dominance”, AI is supposed to be useful. China has just made that point very emphatically with the Wukong spacesuit.

“Wukong” is named after Sun Wukong, also known as the famous Monkey King of Chinese folklore and modern media.

The character Monkey is intelligent and agile. So, apparently, is the spacesuit. The AI system provides guidance, an instant point of information, and any reference required for operational needs.

The inevitable use of AI in space has taken long enough to get started. The Tiangong space station is the ideal platform for testing and assessment. Spacewalks have always been demanding and situationally challenging. It’s asking a lot of a Large Language Model to comprehend and react to this environment.

This is a true test of capability for AI in a much broader context. This sort of work requires more than a scripted chatbot. You can also appreciate that this environment is likely to require fast and appropriate responses in real time.

It’s also a significant contrast with current news about NASA’s AI, which seems largely focused on medical support. NASA does have a wide-ranging AI program, but it is apparently not yet directly involved in operations.

This is where “AI dominance” doesn’t and can’t even have a definition yet. It’s also why so many pundits and professional techno-cynics don’t buy the hype. Let’s keep it simple.

I must ask AI professionals to tolerate a recital of the obvious:

Chatbot mode is a truly awful, utterly misleading impression of AI capabilities.

Interacting with humans is hardly a definition of efficient communication.

If the outcome of a situation is the difference between a good prompt and a bad prompt, is the AI even able to perform at its best when it’s handicapped by human interactions?

One of the biggest problems with AI is the constant and utterly useless “new toy” mode reportage and terminology. Meaningless expressions like “AI dominance” don’t help, either.

AI is barely at the Proterozoic stage of development. The biggest risk of AI isn’t some terrible alien intelligence. It’s misrepresenting its values on so many levels and “stunting its growth”.

The kind of people who can’t see anything without a dollar sign in front of it or some sort of technological ego fodder can’t manage AI at all. They certainly shouldn’t be directing policy.

AI is a critical tool, like fire and the wheel.

A dysfunctional tool is a total liability.

In space, whether anybody can hear you scream about overrated tech or not, liabilities are not an option. AI in space must prove it can do every job required of it. Functionality and reliability are the only criteria for success.

This is where Wukong, NASA, and future AIs will have to deliver real practical value. There’s a very strong argument for China and the US getting on the same page here.

____________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.