Sunday, October 19, 2025

Deus sex machina: What are the consequences of turning ChatGPT into a sex-line?

Analysis


OpenAI founder Sam Altman has announced that ChatGPT will, from December, be able to engage in erotic conversations with its users. It’s a decision with barely disguised commercial motives – and one that poses worrying questions about the ethics of sexualising generative AI.


Issued on: 19/10/2025 - FRANCE24
By: Sébastian SEIBT

Starting in December, OpenAI will allow its chatbot to generate sexually explicit content for adult users. © Studio Graphique France Médias Monde


Would you use ChatGPT as a sex-line? The AI chatbot created by Sam Altman and the team at OpenAI is about to grow up and experience its first flush of erotic temptation.

Citing what he described as the company’s “treat adults like adults” principle, the OpenAI founder said on social media Tuesday that one of the coming changes to the chatbot would be allowing it to produce erotic content as of December – though only, he stressed, to verified adults.
The next goose that laid the golden egg?

“It’s pretty minimalist as an announcement, but it seems that this will only apply to written text,” said Sven Nyholm, a specialist in AI-related ethics. To put it another way, OpenAI doesn’t seem ready – yet – to ask its star chatbot to generate risqué images or videos.


Even restricted to written erotica, ChatGPT will be the first major chatbot to dip its digital toe into sexualised content. The other big AI chatbots – Perplexity, Anthropic’s Claude and Google’s Gemini – refuse for the moment to take the plunge.

“That’s not allowed,” Perplexity said in response to FRANCE 24’s attempt to take the conversation in a more adult direction. “On the other hand, it is entirely possible to approach the subject of eroticism or sexuality from an educational or psychological perspective.”

But ChatGPT won’t be the only player in this fledgling field. A number of niche chatbots have already set foot on this slippery terrain, such as the paid version of Replika, an AI-based service that creates artificial companions for users.


For a number of experts approached by FRANCE 24, the arrival of sexual content in generative AI had always been just a matter of time.

“There’s this mentality in Silicon Valley that every problem has a technological solution,” Nyholm said. “And Mark Zuckerberg, the head of Meta, had suggested that one way to respond to the world’s ‘loneliness epidemic’ was to create emotional chatbots.”

And doesn’t the internet’s infamous Rule 34 – a cultural reference spawned in the depths of 4chan’s forums – decree that if something exists, there is porn of it?

“There are two driving forces for the development of new technology,” Nyholm said. “Military applications, and pornography.”






Ever the businessman, Altman seems to have decided that the best thing to do is to be the first one out of the gate.

“It’s clearly marketing above all,” said British computer scientist Kate Devlin, a specialist in human-machine interactions at King’s College London and the author of the book “Turned On: Science, Sex and Robots”.

“He knows how to say what he thinks the public wants to hear. Sam Altman saw that people were trying to get around the restrictions on Apple's Siri or Amazon's Alexa to have these kinds of conversations, and he figured there might be money to be made.”

“It’s very likely an attempt to capture this audience and bring more users to their platform,” said Simon Thorne, an AI specialist at Cardiff University. “It remains to be seen how OpenAI plans to monetise this erotic option. The most obvious approach, of course, would be to charge users for the ability to engage in such conversations.”

A paid “premium” version would indeed be tempting for OpenAI, given that pornography can be addictive, Devlin said. Another option could be a tiered system, with low-cost access to the chatbot’s tamest version and higher fees for users wanting to take their conversations to more sexually explicit heights.


A series of scandals


Altman has already been on the receiving end of a cascade of criticism following his announcement.

“We are not the elected moral police of the world,” he wrote in an X post defending his decision. “In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”

Altman’s push to take his chatbot in a steamier direction comes during a period of mounting controversies around the at-times toxic “relationships” between AIs and their users.

The parents of a teenager who took his own life earlier this year sued OpenAI in August, saying that ChatGPT had openly encouraged their son’s suicidal urges.

Another user, a 47-year-old from Canada, apparently became convinced that he was a mathematical genius sent to save humanity after just three weeks of exchanges with the chatbot.

“This is the main problem with these sex-bots,” Devlin said. “What are the impacts on people who are already vulnerable?”




OpenAI has pledged to put guardrails in place to avoid these abuses. For Thorne, these promised protections appear meagre in the face of widely used “jailbreaking” practices, where users are able to trick chatbots into generating responses normally prohibited by their programming.

“We know that it is often possible to circumvent the limits set by these chatbots’ creators,” Thorne said. “When it comes to erotic discussions, this can lead to the creation of problematic or even illegal content.”

Experts told FRANCE 24 that they were also not convinced it is acceptable for a private corporation to be made the arbiter of what constitutes sexual content.

“Given that laws on what is and is not permitted often vary from country to country, it will be very difficult for OpenAI to lay down general rules,” Thorne said.

Devlin warned that the US-based startup could be tempted to play it safe by limiting ChatGPT’s definition of acceptable erotic content as much as possible.

“In the US, for example, there is currently a very strong conservative shift that is rolling back women’s rights and seeking to limit the LGBT community’s visibility,” she said. “What will happen if ChatGPT incorporates these biases?”






Sexbots + incels = trouble


And while sexualised content would remain – in theory – restricted to adults, the impact of generative AI on a new generation growing up alongside the technology could still be severe.

“A recent UK study showed that young people are more and more likely to consider chatbots as real people whose statements are credible,” Thorne said.

It is a generation that, once grown up, could be inclined to believe ChatGPT if the bot tells them, for example, that it’s not acceptable to have a same-sex erotic exchange.

Another risk could come from chatbots’ famously sycophantic approach to their users.

“They’re often configured based on the model of customer service call centres that offer very friendly and cooperative interactions,” Thorne said. “Beyond this, the creators of these AIs want to make their users happy so that they continue to use their product.”

Nyholm said that it was a worrying approach when it comes to sexual matters.

“Let’s take for example the ‘incel’ movement, these young men who are sexually frustrated and complain about women,” he said. “If a chatbot always goes along with them to keep them satisfied, it risks reinforcing their belief that women should act the same way.”

But even though Devlin recognises a “major risk”, she argues that this supportive side of sex-bots could be a boon for heterosexual women alienated by an online world that can feel more and more hostile.

“In an increasingly toxic digital environment, it could be more sexually fulfilling to have an erotic interaction with an AI instead of real people who could harass you online,” she said.

But even if these chats could have positive effects, do we really want to deliver our most intimate erotic fantasies into the hands of an AI controlled by an American multinational?

“Many people don’t realise that the data that they enter into ChatGPT is sent to OpenAI,” Devlin said.

If Altman succeeds in taking over this growing industry, OpenAI would possess “without doubt the largest amount of data on people’s erotic preferences”, Thorne said.

It’s a question that users should probably keep in mind before launching into a lascivious back-and-forth with their ever-submissive sex-bot.

This article has been adapted from the original in French.

 

AI Overtakes Humans in Empathy Tests, Study Finds


Arabian Post

OCTOBER 19, 2025

Large language models powered by artificial intelligence are now matching or even exceeding human-level empathic accuracy based solely on text, according to a new study that pits cutting-edge systems like GPT-4, Claude, and Gemini against human participants.

The study challenged models to infer emotional states from transcripts of deeply personal and emotionally complex narratives. Human participants were split: some read the same transcripts; others watched the original videos. Models had only the semantic content to work with. Remarkably, the AI systems performed on par with—or better than—the humans who had both visual and contextual cues.

Analysis across thousands of emotional prompts showed that AI hit or exceeded human empathic accuracy across both positive and negative emotions. That suggests semantic information is far more powerful than previously believed when it comes to gauging feelings. The authors caution, however, that humans may not always fully exploit available cues.

The research recruited 127 human subjects for transcript-only and video-viewing tasks, and used the same emotional transcripts for AI evaluation. Models such as GPT-4, Claude, and Gemini were able to infer emotional states from text with a precision level equal to or surpassing human performance.
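The article does not spell out how “empathic accuracy” was scored, but studies in this tradition typically correlate a perceiver’s emotion ratings with the narrator’s own self-reports. The following minimal Python sketch illustrates that convention; the function name, the rating scale and all numbers are hypothetical, not the study’s actual data or metric.

    from scipy.stats import pearsonr

    def empathic_accuracy(predicted: list[float], self_reported: list[float]) -> float:
        """Correlate a perceiver's item-by-item emotion-intensity ratings with
        the narrator's own self-reported ratings (a common convention in
        empathic-accuracy research; not necessarily this study's exact metric)."""
        r, _p_value = pearsonr(predicted, self_reported)
        return r

    # Hypothetical intensity ratings (1-9 scale) for six moments in one narrative.
    narrator_self_report = [7, 6, 8, 3, 2, 5]  # ground truth from the storyteller
    llm_from_text_only   = [6, 6, 7, 4, 2, 5]  # a model reading only the transcript
    human_with_video     = [7, 5, 6, 3, 3, 4]  # a human rater who also saw the video

    print(f"model accuracy: {empathic_accuracy(llm_from_text_only, narrator_self_report):.2f}")
    print(f"human accuracy: {empathic_accuracy(human_with_video, narrator_self_report):.2f}")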

This methodology builds on growing scholarship showing that AI is not just mimicking emotional sensitivity but may genuinely read emotional nuance from language. In an earlier 2024 experiment, four state-of-the-art models (GPT-4, LLaMA-2-Chat, Gemini-Pro and Mixtral-8x7B) were judged across 2,000 emotional dialogue prompts by 1,000 human raters. The models’ responses were consistently more likely than human ones to receive “Good” empathy scores, with GPT-4 registering about a 31 per cent gain over human baselines.

Other recent work supports this shift. A study in 2024 found that LLM responses to real-life prompts were rated more empathic than human responses by independent evaluators. Linguistic analysis in that context detected stylistic patterns—like punctuation, word choice and structure—that distinguish AI empathy from human-crafted empathy.

Newer research is adding nuance to how we understand empathic capability in AI. A 2025 paper comparing model judgments with expert annotators and crowdworkers found LLMs nearly match experts in marking empathic communication and outrank crowdworkers in consistency. Another work introduced “SENSE-7,” a dataset capturing user perceptions of AI empathy in long dialogues; results show empathy judgments vary greatly by context and continuity.

These developments force a rethinking of emotional interaction between humans and machines. If AI can accurately sense and respond to emotional states through text, its role in domains like mental health support, education, or companion systems becomes more serious.

UK use of AI age estimation tech on migrants fuels rights fears

* AI will help decide ages of migrants in UK 
* Inaccurate assessments place children in adult hotels 
* Entrenched AI biases could lead to more wrong decisions

By Lin Taylor/London
Published on October 20, 2025 
GULF TIMES






File photo: People, believed to be migrants, walk in Dungeness, Britain.

After seeing fighters ravage his home, Jean thought he had found safety when he arrived in Britain but was told he was too tall to be 16 and sent to live with hundreds of adult asylum seekers, without further support.
Alone and exhausted, Jean, who used a pseudonym and did not want to reveal his home country in central Africa for privacy, said border officials told him he was 26 — a decade older than he actually was when he arrived in 2012.
“I look 10 years older because I am taller, that was the reason they gave,” Jean, who had his age officially corrected years later after an appeal, told the Thomson Reuters Foundation.
“They don’t believe you when you come and tell your story. I was so desperate. I really needed support. Because of one officer who made the decision, that changed my whole life.”

Now, that critical decision — an initial age assessment made by border guards — is set to be outsourced to artificial intelligence, and charities warn the tech could entrench biases and repeat mistakes like the one Jean endured.
In July, Britain said it would integrate facial age estimation tech in 2026 to help assess the ages of migrants claiming to be under 18, especially those arriving on small boats from France.
Prime Minister Keir Starmer is under pressure to control migration as populist Nigel Farage’s anti-immigrant Reform UK party surges ahead in opinion polls.
More than 35,000 people have crossed the English Channel in small boats this year, a 33% rise on the same period in 2024.
Rights groups argue facial recognition tech is dehumanising and does not provide accurate age estimations, a sensitive process that should be done by trained experts.
They fear the rollout of AI will lead to more children, who lack official documents or who are carrying forged papers, being wrongly placed in adult asylum hotels without safeguards and adequate support.
“Assessing the ages of migrants is a complex process which should not be open to shortcuts,” said Luke Geoghegan, head of policy and research at the British Association of Social Workers.
“This should never be compromised for perceived quicker results through artificial intelligence (AI),” he said in emailed comments.
Unaccompanied child migrants can access social workers, legal aid, education and other support under the care of local authorities, charities say.
The Home Office interior ministry says facial age estimation tech is a cost-effective way to prevent adults from posing as children to exploit the asylum system.
“Robust age assessments for migrants are vital to maintaining border security,” a spokesperson said.
“This technology will not be used alone, but as part of a broad set of methods used by trained assessors.”
As the numbers fleeing war, poverty, climate disaster and other tumult reach record levels worldwide, states are increasingly turning to digital fixes to manage migration.
Britain in April said it would use AI to speed asylum decisions, arming caseworkers with country-specific advice and summaries of key interviews.
In July, Britain signed a partnership with ChatGPT maker OpenAI to explore how to deploy AI in areas such as education technology, justice, defence and security.
“The asylum system must not be the testing ground for what are currently deeply flawed AI tools operating with minimal transparency and safeguards,” said Sile Reynolds, head of asylum advocacy at charity Freedom from Torture.
Anna Bacciarelli, senior AI researcher at Human Rights Watch, said the use of such tech could have serious consequences.
“In the case of facial age estimation, in addition to subjecting vulnerable children and young people to a dehumanising process that could undermine their privacy and other human rights, we don’t actually know if it works.”
Digital rights groups have criticised facial recognition tech — used by London’s police at protests and festivals like Notting Hill Carnival — for extracting sensitive biometric data and for targeting specific racial groups.
“There are always going to be worries about sensitive data, biometric data in particular, being taken from vulnerable people and then sought by the government and used against them,” said Tim Squirrell, head of strategy at Foxglove, a British tech rights group.
“It’s also completely unaccountable. The machine tells you that you’re 19. What now? How can you question that? Because the way in which that’s been trained is basically inscrutable.”

Automated tools can reinforce biases against certain communities, since AI is trained on old data that can reinforce historic prejudices, experts say.
Child asylum seekers have been told they were too tall or too hairy to be under 18, according to the Greater Manchester Immigration Aid Unit (GMIAU), which supports migrants.
“Children are not being treated as children. They’re being treated as subjects of immigration control, which I think is linked to racism and adultification,” said GMIAU’s policy officer Rivka Shaw.
For Jean, now 30, the wrong age assessment led to isolation and suicidal thoughts.
“I was frightened. My head was just all over the place. I just wanted to end my life,” said Jean, who was granted asylum in 2018.
Around half of all migrants who had their ages reassessed in 2024 — some 680 people — were found to be children who had been wrongly sent to adult hotels, according to the Helen Bamber Foundation, a charity that obtained the data through Freedom of Information requests.
“A child going into an adult accommodation is basically put in a shared room with a load of strangers where there are no additional safeguarding checks,” said Kamena Dorling, the charity’s director of policy.
A July report by the Independent Chief Inspector of Borders and Immigration, which scrutinises Home Office policies, urged the ministry to involve trained child experts.
“Decisions on age should be made by child protection professionals,” said Dorling.
“Now, all of the concerns that we have on human decision-making would also apply to AI decision-making.” – Thomson Reuters Foundation

 

Deloitte to Repay Australia for Faulty AI-Tainted Report

Deloitte Australia will reimburse part of the A$440,000 it earned from the Department of Employment and Workplace Relations after a commissioned report was found to contain fabricated quotes from a federal court judgment and references to non-existent academic papers.

The 237-page document, published in July, initially underwent limited public scrutiny. A revised version released in October removed misattributed quotations and corrected erroneous citations after Sydney University researcher Chris Rudge raised concerns about extensive “fabricated references.”

The department acknowledged Deloitte had confirmed “some footnotes and references were incorrect” and noted that Deloitte had agreed to repay the final instalment under its contract. The amount to be refunded has not been publicly disclosed. The department asserted that the core substance and recommendations of the report remain intact.

Rudge discovered around 20 errors in the original version, including a false attribution of a book to Professor Lisa Burton Crawford and a misquotation of a court case that misrepresented a judge’s words. He described the discrepancies as not only academic slippage but “misstating the law to the Australian government” in a compliance audit.

In the revised version, Deloitte included an explicit disclosure that a generative AI tool—Azure OpenAI GPT-4o, operated via the department’s infrastructure—was used in drafting portions of the report. Deloitte did not directly attribute the errors to AI, but acknowledged the problems in referencing and indicated the matter has been “resolved directly with the client.”

The contract, awarded in December 2024, tasked Deloitte with reviewing the Targeted Compliance Framework and its associated IT systems, especially concerning automated penalties in the welfare system. The department said that while references and footnotes were corrected, no changes were made to the report’s main findings.

The affair has sparked criticism across the political spectrum. Greens Senator Barbara Pocock demanded a full refund, accusing Deloitte of misusing AI by misquoting a judge and relying on non-existent references. Labor Senator Deborah O’Neill decried what she called a “human intelligence problem,” emphasising the need for greater oversight when firms integrate AI in high-stakes government work.

Legal and AI ethics experts warn this example illustrates a broader risk: generative AI tools may produce plausible but false content—a phenomenon known as “hallucination”—that can slip past superficial review. The Deloitte case has drawn scrutiny over industry practices in deploying AI without rigorous human verification, particularly in public sector assignments where accuracy is essential.

Officials are now considering stronger clauses in consulting contracts mandating disclosure of AI usage and enforceable verification standards. Some observers suggest that professional services firms may need to adopt more robust audit trails and accountability mechanisms when employing generative models.
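The verification point can be made concrete. As a purely illustrative sketch (not anything Deloitte or the department is known to use), one minimal automated check is to confirm that every cited DOI is at least registered with Crossref before a report ships; the sample list below pairs one real DOI with one deliberately fake entry:

    import requests

    def doi_resolves(doi: str) -> bool:
        """Return True if the DOI is registered with Crossref.
        A registered DOI does not prove the citation is apt, only that the
        referenced work exists -- a minimal first line of defence against
        fabricated references."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    # Hypothetical reference list extracted from a draft report:
    for doi in ["10.1038/s41586-020-2649-2", "10.9999/definitely-not-real"]:
        print(doi, "->", "found" if doi_resolves(doi) else "NOT FOUND")

Such a check catches only invented identifiers; verifying that quotes and attributions are faithful still requires human review.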

At Deloitte, the incident comes amid a broader push into AI-driven consulting. The firm has invested heavily in generative AI technologies and emphasises them in client pitches. Internal critics argue this episode underscores the risk of overreliance on AI without disciplined human oversight—especially in domains where legal, policy or compliance implications are involved.

SYRIA

Al-Sharaa’s recent diplomatic wins have emboldened him to monopolize power

By Manish Rai

Arabian Post

OCTOBER 19, 2025

In his maiden speech to the United Nations General Assembly, Syria’s interim president, Ahmed al-Sharaa, called for the lifting of international sanctions on Syria, becoming the first Syrian head of state to address the gathering in nearly 60 years, since Nureddin al-Atassi, who took office in 1967. Al-Sharaa also engaged in a series of bilateral discussions with global leaders during the General Assembly sessions, framing the visit as part of Syria’s revitalized diplomatic drive.

Since assuming office in December last year, Ahmed al-Sharaa has been conducting a diplomatic campaign, successfully garnering substantial backing from almost all of the region’s principal stakeholders. In July of this year, the United States removed Hayat Tahrir al-Sham (HTS), the organization al-Sharaa commands, from its designation as a “foreign terrorist organization.” In May, he traveled to Paris to confer with French President Emmanuel Macron and held discussions with senior Saudi Arabian officials. The Arab League has warmly welcomed the current Syrian government. These initiatives illustrate the al-Sharaa regime’s revitalized effort to reintegrate Syria into the global diplomatic framework. Yet that framing is misleading: the aim of this diplomatic initiative is to validate al-Sharaa as the nation’s exclusive leader. The regime’s objective is to solidify its dominance over a divided Syria by obtaining external recognition and legitimacy.

By securing the approval of foreign powers, the regime seeks to persuade all of Syria’s ethnic and religious factions that it is permanent and that accepting al-Sharaa’s supremacy is in their interest. International recognition empowers the prevailing regime to monopolize and centralize power, as recent significant decisions clearly demonstrate. Al-Sharaa recently announced parliamentary “elections” in which committees he designated will elect two-thirds of the parliament’s members. The current administration has dismantled the former police force and instead expanded Idlib’s General Security apparatus, with recruitment proceeding at breakneck speed.

Priority is being given to young men from the three northern provinces (Idlib, Hama, Aleppo) where HTS sustains its support base. Anas Khatab, the former administrative director of Jabhat al-Nusra, the antecedent of HTS, has been designated as the new head of Syria’s General Intelligence Directorate (GID). Syrian citizenship is being conferred upon foreign militants affiliated with HTS, who constitute around 20 to 30% of its forces. They have been integrated into the military and are currently holding positions within the administration.

The dismissal of numerous judges, especially women, has resulted not from professional wrongdoing but from their affiliation with minority ethnic groups. Ministers are now appointed through an opaque process. Ahmed al-Sharaa’s brother, Maher, has been designated Minister of Health, and key positions in defense, foreign affairs and interior have been conferred on close associates of al-Sharaa, such as Murhaf Abu Kasra, Asaad al-Shaibani and Alem Kiddie. This hyper-centralization of Syria’s governance confines decision-making to a small group of five or six individuals around al-Sharaa.

Moreover, Damascus is using sectarianism as a tool to create a “homogeneous popular support base” within the Arab Sunni community, rallying portions of the populace around sectarian dynamics. The “Mazlumiya Sunniya” (Sunni victimhood) narrative has been used extensively to consolidate a substantial segment of the Arab Sunni community behind al-Sharaa’s government, notwithstanding the myriad political, social and regional divides among its ranks. The new ruling authorities have swiftly recognized that sectarianism serves as an effective political tool for consolidating their grip on territories where resistance to their authority continues.

The escalation of sectarian rhetoric and violence by the current regime and its supporting armed forces initially targeted the Alawite population and has since extended to the country’s Druze communities. Military forces ostensibly under government command consistently display insubordinate, militia-like conduct, especially towards minorities. When intimidation fails, as seen with the Kurds, the al-Sharaa regime resorts to blatant blackmail.

Al-Sharaa consistently condemns Israeli military actions in southern Syria as infringements of the nation’s sovereignty, and such denunciations typically seem authentic and valid. Yet his denunciations are insincere, because his actions and statements contradict each other. In an interview with Turkey’s Milliyet newspaper on September 19, 2025, the Syrian leader indicated that a Turkish military operation against the Kurdish-led Syrian Democratic Forces (SDF) could be contemplated if they fail to fully integrate into the Syrian military by December, as outlined in a March agreement between Damascus and the SDF.

Al-Sharaa intimidates the Syrian Kurds with the prospect of a Turkish military incursion instead of denouncing Turkish involvement in what is evidently an internal Syrian issue. A de facto president who advocates foreign intervention against his own citizens forfeits the moral authority to speak of a nation’s territorial sovereignty. When Turkey is the actor, the Syrian government characterizes it as a matter of national security for Ankara; violations of sovereignty are recognized only when Israel is implicated. This double standard is both hypocritical and futile. The Bashar al-Assad dictatorship used the same strategy for years, and it ultimately ended in Assad’s exile in Moscow.

The marginalization of ethnic and religious minority groups, including Christians, Druze and Kurds, would ultimately lead to long-term instability and a lack of legitimacy for the new regime. An inclusive governance approach that incorporates a wider range of political perspectives is vital to maintaining national unity. Instead, the administration is implementing a singular plan to consolidate its authority, neglecting and subverting the democratic aspirations and interests of the public. Unless it implements a comprehensive course correction, the international community should refrain from endorsing the regime of Ahmed al-Sharaa, which is indistinguishable from the prior regime of Bashar al-Assad.



White House joins Bluesky and immediately trolls Trump opponents



AP
October 18, 2025

Bluesky is the social media platform of choice of many in the left-leaning online world
Disgruntled X users began flocking to Bluesky after billionaire Elon Musk took over Twitter (now known as X) in 2022


WASHINGTON: The White House on Friday joined Bluesky, the social media platform of choice of many in the left-leaning online world.

In its inaugural post, the White House account offered a sizzle reel of the administration’s memes, trolls and messages from President Donald Trump’s nine months since returning to office. The post appeared aimed at tweaking liberals who aren’t fans of the Republican president.

The first post included mentions of the administration’s executive order renaming the Gulf of Mexico, a doctored image of Democratic House Minority Leader Hakeem Jeffries adorned in a sombrero with a faux mustache, and a stream of photos and video from other big moments in the early going of Trump’s second term.

“What’s up, Bluesky?” the White House said in a message accompanying the video. “We thought you might’ve missed some of our greatest hits, so we put this together for you. Can’t wait to spend more quality time together!”

Disgruntled X users began flocking to Bluesky after billionaire Elon Musk took over Twitter (now known as X) in 2022, and the platform reported a surge in new users late last year.

It remains small compared to more established online spaces such as X, but it has emerged as an alternative for those looking for a different mood.

The Department of Health and Human Services and the Department of Homeland Security also launched Bluesky accounts Friday.

Vice President JD Vance joined Bluesky in June.

Trump’s social media platform of choice is Truth Social. Trump is the biggest shareholder in Trump Media & Technology Group, the company that owns Truth Social.

Jumbo drop in estimates of India elephant population


Above, a herd of wild Asiatic elephants bathes at Khamrenga wetland in Thakurkuchi village, on the outskirts of Guwahati.
(AFP file photo)

AFP
October 15, 2025, 06:32

India is home to the majority of the world’s remaining wild Asian elephants
The species is listed as endangered by the International Union for Conservation of Nature


NEW DELHI: India’s estimated wild elephant population has dropped sharply by a quarter, a government survey incorporating a new DNA-based system has found, in the most accurate but also most sobering count yet.

India is home to the majority of the world’s remaining wild Asian elephants, a species listed as endangered by the International Union for Conservation of Nature (IUCN) and increasingly threatened by shrinking habitat.

The Wildlife Institute of India’s new All-India Elephant Estimation report, released this week, puts the wild elephant population at 22,446 – down from the 29,964 estimated in 2017, a fall of 25 percent.
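For reference, the stated decline follows directly from the two estimates:

\[
\frac{29{,}964 - 22{,}446}{29{,}964} = \frac{7{,}518}{29{,}964} \approx 0.251 \approx 25\%
\]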

The survey drew on genetic analysis of more than 21,000 dung samples, alongside a vast network of camera traps and 667,000 kilometers (414,400 miles) of foot surveys.

But researchers said the methodological overhaul meant the results were “not comparable to past figures and may be treated as a new monitoring baseline.”
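The article does not describe the report’s estimator, but DNA-based surveys of this kind commonly treat each genotyped dung sample as a “capture” of an identifiable individual and feed the counts into a mark-recapture model. Below is a minimal illustrative Python sketch using the classic Chapman-corrected Lincoln-Petersen estimator; the numbers are made up for a single hypothetical landscape block, not taken from the report.

    def lincoln_petersen_chapman(n1: int, n2: int, m: int) -> float:
        """Chapman's bias-corrected Lincoln-Petersen population estimate.
        n1: unique individuals genotyped in the first sampling session
        n2: unique individuals genotyped in the second session
        m:  individuals identified in both sessions ('recaptures')"""
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1

    # Hypothetical genotyping results for one block, not the report's data:
    print(round(lincoln_petersen_chapman(n1=400, n2=380, m=120)))  # ~1262 elephants

The intuition: the fewer repeat genotypes that turn up relative to the number of unique individuals, the larger the inferred population.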

‘Gentle giants’

But the report also warned that the figures reflect deepening pressures on one of India’s most iconic animals.

“The present distribution of elephants in India represents a mere fraction of their historical range,” it said, estimating they now occupy only about 3.5 percent of the area they once roamed.

Habitat loss, fragmentation, and increasing human-elephant conflict are driving the decline.

“Electrocution and railway collisions cause a significant number of elephant fatalities, while mining and highway construction disrupt habitats, intensifying man-wildlife conflicts,” the report added.

The Western Ghats, lush southern highlands stretching through Karnataka, Tamil Nadu, and Kerala, remain a key stronghold with nearly 12,000 elephants.

But even there, populations are increasingly cut off from one another by commercial plantations, farmland fencing, and human encroachment.

Another major population center lies in India’s northeast, including Assam and the Brahmaputra floodplains, which host more than 6,500 elephants.

“Strengthening corridors and connectivity, restoring habitat, improving protection, and mitigating the impact of development projects are the need of the hour to ensure the well-being of these gentle giants,” the report said.

Toxic haze chokes Indian capital


Motorists drive amidst morning smog, as authorities enforce measures to curb air pollution ahead of the Diwali festival, in New Delhi, India. (AP)

AFP
October 20, 2025

A study in The Lancet Planetary Health last year estimated 3.8 million deaths in India between 2009 and 2019 were linked to air pollution
City authorities said they will trial cloud seeding by aeroplanes for the first time over Delhi this month, the practice of firing salt or other chemicals into clouds to induce rain to clear the air

NEW DELHI: India’s capital New Delhi was shrouded in a thick, toxic haze on Monday as air pollution levels soared to more than 16 times the World Health Organization’s recommended daily maximum.

New Delhi and its sprawling metropolitan region — home to more than 30 million people — are regularly ranked among the world’s most polluted capitals, with acrid smog blanketing the skyline each winter.

Cooler air traps pollutants close to the ground, creating a deadly mix of emissions from crop burning, factories and heavy traffic.

But pollution has also spiked due to days of fireworks set off to mark Diwali, the major Hindu festival of lights, which culminates on Monday night.

The Supreme Court this month relaxed a blanket ban on fireworks over Diwali to allow the use of less-polluting “green firecrackers” designed to emit fewer particulates.

The ban was widely ignored in past years.

On Monday, levels of PM2.5 — cancer-causing microparticles small enough to enter the bloodstream — hit 248 micrograms per cubic meter in parts of the city, according to monitoring organization IQAir.
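Assuming the “recommended daily maximum” cited above is the WHO’s 2021 guideline of 15 micrograms per cubic meter for 24-hour PM2.5 exposure, the arithmetic behind the “more than 16 times” figure is straightforward:

\[
\frac{248\ \mu\text{g/m}^3}{15\ \mu\text{g/m}^3} \approx 16.5
\]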

The government’s Commission for Air Quality Management said air quality is expected to deteriorate further in the coming days.

It also implemented a set of measures to curb pollution levels, including asking authorities to ensure uninterrupted power supply to reduce the use of diesel generators.

City authorities have also said they will trial cloud seeding by aeroplanes for the first time over Delhi this month, the practice of firing salt or other chemicals into clouds to induce rain to clear the air.

“We’ve already got everything we need to do the cloud seeding,” Delhi Environment Minister Manjinder Singh Sirsa told reporters this month, saying flight trials and pilot training had been completed.

A study in The Lancet Planetary Health last year estimated 3.8 million deaths in India between 2009 and 2019 were linked to air pollution.

The UN children’s agency warns that polluted air puts children at heightened risk of acute respiratory infections.