Wednesday, February 18, 2026

US tech giant Nvidia announces India deals at AI summit

By AFP
February 18, 2026


This week's AI Impact Summit is the fourth annual international gathering to discuss how to govern the fast-evolving technology. - © AFP Arun SANKAR


Katie Forster

US artificial intelligence chip titan Nvidia unveiled tie-ups with Indian computing firms on Wednesday as tech companies rushed to announce deals and investments at a global AI conference in New Delhi.

This week’s AI Impact Summit is the fourth annual gathering to discuss how to govern the fast-evolving technology — and also an opportunity to “define India’s leadership in the AI decade ahead”, organisers say.

Mumbai cloud and data centre provider L&T said it was teaming up with Nvidia, the world’s most valuable company, to build what it touted as “India’s largest gigawatt-scale AI factory”.

“We are laying the foundation for world-class AI infrastructure that will power India’s growth,” said Nvidia boss Jensen Huang in a statement that did not put a figure on the investment.

L&T said it would use Nvidia’s powerful processors, which can train and run generative AI tech, to provide data centre capacity of up to 30 megawatts in Chennai and 40 megawatts in Mumbai.

Nvidia said it was also working with other Indian AI infrastructure players such as Yotta, which will deploy more than 20,000 top-end Nvidia Blackwell processors as part of a $2 billion investment.

Dozens of world leaders and ministerial delegations have come to India for the summit to discuss the opportunities and threats, from job losses to misinformation, that AI poses.

Last year India leapt to third place — overtaking South Korea and Japan — in an annual global ranking of AI competitiveness calculated by Stanford University researchers.

But despite plans for large-scale infrastructure and grand ambitions for innovation, experts say the country has a long way to go before it can rival the United States and China.

– Hyperscale –

The conference has also brought a flurry of deals, with IT minister Ashwini Vaishnaw saying Tuesday that India expects more than $200 billion in investments over the next two years, including roughly $90 billion already committed.

Separately, India’s Adani Group said Tuesday it plans to invest $100 billion by 2035 to develop “hyperscale AI-ready data centres”, a boost to New Delhi’s push to become a global AI hub.

Microsoft said it was investing $50 billion this decade to boost AI adoption in developing countries, while US artificial intelligence startup Anthropic and Indian IT giant Infosys said they would work together to build AI agents for the telecoms industry.

Nvidia’s Huang is not attending the AI summit but other top US tech figures joining include OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis and Microsoft founder Bill Gates.

Indian Prime Minister Narendra Modi and other world leaders including French President Emmanuel Macron and Brazil’s Luiz Inacio Lula da Silva are expected to deliver a statement at the end of the week about how they plan to address concerns raised by AI technology.

But experts say that the broad focus of the event and vague promises made at previous global AI summits in France, South Korea and Britain mean that concrete commitments are unlikely.

Nick Patience, practice lead for AI at tech research group Futurum, told AFP that nonbinding declarations could still “set the tone for what acceptable AI governance looks like”.

But “the largest AI companies deploy capabilities at a pace that makes 18-month legislative cycles look glacial,” Patience said.

“So it’s a case of whether governments can converge fast enough to create meaningful guardrails before de facto standards are set by the companies themselves.”

Uncut gems: Indian startups embrace AI despite job fears



By AFP
February 18, 2026


An Indian firm is using AI to design intricate brooches and other jewellery which are then handmade by artisans - Copyright AFP Arun SANKAR

Katie Forster and Uzmi Athar

Glinting under the exhibition centre lights, the gold brooch studded with gemstones on the startup founder’s lapel was handmade by Indian artisans — but artificial intelligence dreamt up its elaborate design.

The brooch, in the shape of Hindu deity Lord Krishna, is an emblem of both the fast-developing power of AI technology and hopes it will drive innovation in India’s youthful economy.

Siddharth Soni, 23, showed AFP a box of AI-designed jewellery, mostly in classical Indian style, made by the company Idea Jewellery which he co-founded in 2023.

“Jewellery like this used to take around six months, seven months” to manufacture using traditional methods, said Soni, at a global AI summit in New Delhi.

Now, using a 3D-printed mould based on an AI blueprint, and streamlining the process in other ways, “I can make this piece in one week” with a few more needed for hallmarking, he said.

Tech bosses and world leaders are gathered in the Indian capital this week to discuss the opportunities and challenges presented by AI, including the threat of mass redundancies and loss of human expertise.

Soni’s startup is a new direction for his decades-old family jewellery manufacturing business in the city of Hyderabad.

He said his father was “excited” about the new venture and “wants to take it all over the world” so retailers in places like the United States can offer custom AI-designed Indian jewellery.

At the same time, his father and grandfather, both in the industry for around 30 years, are conflicted because they believe “artisans should not lose their imagination”, Soni said.

“We’re losing the form of art, basically, by using AI,” but even so, “we have to move forward.”



– ‘Very uncomfortable’ –



Prime Minister Narendra Modi says the AI summit “shows the capability of our country’s youth” as “further proof that our country is progressing rapidly” in technology.

India’s government is expecting $200 billion in AI investment in the next two years, with plans to build large-scale data centres and nuclear power plants to run them.

Idea Jewellery, which does not receive government support but would like to, is in talks with 20 retailers including well-known brands in major cities who are already clients of the long-running family business.

On a tool powered by a fine-tuned version of Google’s Gemini, customers can specify the type of metal, precious stones and price range of their jewellery, and describe their desired style with a simple text prompt.

The tool shows examples of the piece and can then produce a detailed 3D model to be turned by hand into real jewellery.
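The article does not include the startup's code, so the snippet below is only a hypothetical sketch of how such structured customer choices might be folded into a text prompt for a Gemini-family model using Google's public Python SDK; the model name, prompt wording and any fine-tuning step are assumptions, not Idea Jewellery's actual implementation.

import google.generativeai as genai

# Assumes an API key; a production system would point at a fine-tuned model.
genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name

def jewellery_prompt(metal: str, stones: list[str], budget_inr: int, style: str) -> str:
    # Fold the customer's structured choices into a single design brief.
    return (
        f"Design a piece of Indian jewellery in {style} style. "
        f"Metal: {metal}. Stones: {', '.join(stones)}. "
        f"Target price: about {budget_inr} INR. "
        "Describe the motif, layout and stone placement in enough detail "
        "for a CAD artist to build a 3D model."
    )

response = model.generate_content(
    jewellery_prompt("22k gold", ["ruby", "uncut diamond"], 150000, "classical temple")
)
print(response.text)  # design brief to hand to the 3D modelling stage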

Some of the workers, who have spent years mastering their craft and usually spend weeks designing a piece of jewellery, are “very uncomfortable with it” and fear their jobs could eventually disappear, Soni admitted.

However they are still making the AI-designed pieces, “because it’s their livelihood”.



– New fields –



The AI boom has brought huge profits for tech giants and spawned many startups worldwide, but the bubble could pop if the frenzied excitement loses momentum.

For now, governments and companies are bullish that AI innovation will benefit society, from helping teachers educate large populations to better personalising medical care.

Peush Bery’s startup, Xtreme Gen AI, sells a voice chat tool that can answer and make calls for Indian businesses in a dozen local languages.

It’s a competitive field, but the company hopes to carve out a niche by offering smaller businesses a customised tool that they don’t need technical know-how to implement.

Different accents and India’s noisy streets can make accuracy a challenge. But as the technology improves and becomes more affordable, it could threaten the country’s huge call centre industry.

Bery remains optimistic. “New jobs come up, new fields come up,” such as working with data to improve the AI models, he said.

Another startup, Soil Doctor, has offered AI-powered soil testing to 500 farms across 10 Indian states, working with NGOs to run programmes with rural women and youth.

The government could help the company by granting access to historical agricultural data that it currently does not have, said Soil Doctor’s chief of staff Vartika Gupta.

AI technology can “benefit farmers big time”, helping them save money by buying fertiliser better targeted to their soil type, Gupta said.

“Season after season, at a much lower input cost, they will be able to achieve an increased yield.”


India’s tougher AI social media rules spark censorship fears

By AFP
February 16, 2026


A man passes by a mural depicting various social media apps in Bangalore on March 22, 2018 - Copyright AFP/File Manjunath KIRAN

Parvaiz BUKHARI

India has tightened rules governing the use of artificial intelligence on social media to combat a flood of disinformation, a move that has also prompted warnings of censorship and an erosion of digital freedoms.

The new regulations are set to take effect on February 20 — the final day of an international AI summit in New Delhi featuring leading global tech figures — and will sharply reduce the time platforms have to remove content deemed problematic.

With more than a billion internet users, India is grappling with AI-generated disinformation swamping social media.

Companies such as Instagram, Facebook and X will have three hours, down from 36, to comply with government takedown orders, in a bid to stop damaging posts from spreading rapidly.

Stricter regulation in the world’s most populous country ups the pressure on social media giants facing growing public anxiety and regulatory scrutiny globally over the misuse of AI, including the spread of misinformation and sexualised imagery of children.

But rights groups say tougher oversight of AI, if applied too broadly, risks eroding freedom of speech.

India under Prime Minister Narendra Modi has already faced accusations from rights groups of curbs on freedom of expression targeting activists and opponents, which his government denies.

The country has also slipped in global press freedom rankings during his tenure.

The Internet Freedom Foundation (IFF), a digital‑rights group, said the compressed timeframe of the social media take-down notices would force platforms to become “rapid-fire censors”.

– ‘Automated censorship’ –

Last year, India’s government launched an online portal called Sahyog — meaning “cooperate” in Hindi — to automate the process of sending takedown notices to platforms including X and Facebook.

The latest rules have been expanded to apply to content “created, generated, modified or altered through any computer resource” except material changed during routine or good‑faith editing.

Platforms must now clearly and permanently label synthetic or AI‑manipulated media with markings that cannot be removed or suppressed.

Under the new rules, problematic content could disappear almost immediately after a government notification.

The timelines are “so tight that meaningful human review becomes structurally impossible at scale”, said IFF chief Apar Gupta.

The system, he added, shifts control “decisively away from users”, while “grievance processes and appeals operate on slower clocks”.

Most internet users are not informed of authorities’ orders to delete their content.

“It is automated censorship,” digital rights activist Nikhil Pahwa told AFP.

The rules also require platforms to deploy automated tools to prevent the spread of illegal content, including forged documents and sexually abusive material.

“Unique identifiers are un-enforceable,” Pahwa added. “It’s impossible to do for infinite synthetic content being generated.”

Gupta likewise questioned the effectiveness of labels.

“Metadata is routinely stripped when content is edited, compressed, screen-recorded, or cross-posted,” he said. “Detection is error-prone.”

– ‘Online hate’ –

The US-based Center for the Study of Organized Hate (CSOH), in a report with the IFF, warned the laws “may encourage proactive monitoring of content which may lead to collateral censorship”, with platforms likely to err on the side of caution.

The regulations define synthetic data as information that “appears to be real” or is “likely to be perceived as indistinguishable from a natural person or real-world event.”

Gupta said the changes shift responsibility “upstream” from users to the platforms themselves.

“Users must declare if content is synthetic, and platforms must verify and label before publication,” said Gupta.

But he warned that the parameters for takedown are broad and open to interpretation.

“Satire, parody, and political commentary using realistic synthetic media can get swept in, especially under risk-averse enforcement,” Gupta said.

At the same time, widespread access to AI tools has enabled a new wave of online hate “facilitated by photorealistic images, videos, and caricatures that reinforce and reproduce harmful stereotypes”, the CSOH report added.

In the most recent headline-grabbing case, Elon Musk’s AI chatbot Grok sparked outrage in January when it was used to make millions of sexualised images of women and children, by allowing users to alter online images of real people.

“The government had to act because platforms are not behaving responsibly,” Pahwa said.


Junk to high-tech: India bets on e-waste for critical minerals


By AFP
February 17, 2026


Workers dismantle discarded monitors at 'Ecowork', an e-waste recycling facility in Ghaziabad, India - Copyright AFP Punit PARANJPE

Arunabh SAIKIA and Uzmi ATHAR

Hundreds of discarded batteries rattle along a conveyor belt into a crusher in a remote plant in northern India, fuelling a multi-billion-dollar industry that is bolstering the country’s geopolitical ambitions.

India is cashing in on the growing “e-waste” sector — pulling critical minerals like lithium and cobalt, which are needed to make everything from smartphones to fighter jets and electric cars, from everyday electronics.

Global jitters about China’s dominance as a critical minerals producer have kicked New Delhi into action, ramping up extraction of the materials that are essential for its drive to become an artificial intelligence hub.

With demand expected to soar and domestic mining unlikely to deliver meaningful output for at least a decade, the country is turning to an often‑overlooked source — the swelling mountains of electronic waste.

Dead batteries yield lithium, cobalt and nickel; LED screens contain germanium; circuit boards hold platinum and palladium; hard disks store rare earths — e‑waste has long been described as a “gold mine” for critical minerals.

India generated nearly 1.5 million tonnes of e‑waste last year, according to official data — enough to fill 200,000 garbage trucks — though experts believe the real figure is likely to be twice as much.

At Exigo Recycling’s sprawling plant in Haryana state, a machine churns the batteries from e-scooters into a jet-black powder.

The material is then leached into a wine‑red liquid, filtered, evaporated and finally transformed into a fine white powder — lithium.

“White gold,” said the facility’s lead scientist, watching the final product collect in trays.



– Backyard workshops –



Industry estimates suggest “urban mining” — the recovery of minerals from e‑waste — could be worth up to $6 billion annually.

While insufficient to meet India’s projected demand, analysts say it could help absorb import shocks and strengthen supply chains.

Most e‑waste, however, is still dismantled in informal backyard workshops that extract easily saleable metals such as copper and aluminium, leaving critical minerals untapped.

India’s formal recycling capacity remains limited compared to China and the European Union, both of which have invested heavily in advanced recovery technologies and traceability systems.

India has a “100 percent import dependency” for key critical minerals including lithium, cobalt and nickel, according to the Institute for Energy Economics and Financial Analysis.

Seeking to close the gap, Prime Minister Narendra Modi’s government approved a $170‑million programme last year to boost formal recycling of critical minerals.

The programme builds on Extended Producer Responsibility (EPR) rules, which require manufacturers to collect and channel e‑waste to government-registered recyclers.

“EPR has acted as a primary catalyst in terms of bringing scale to the recycling industry,” said Raman Singh, managing director at Exigo Recycling, one of the few Indian facilities able to extract lithium.

Other analysts agree the rules have redirected more waste into the formal sector.

“Before EPR was fully implemented, 99 percent of e-waste was being recycled in the informal sector,” said Nitin Gupta of Attero Recycling, which says it can recover at least 22 critical minerals.

“About 60 percent has now moved to formal.”

Government data suggests an even higher shift, though critics say the figures are inflated due to poor tracking of total e‑waste generation.

More than 80 percent of India’s e-waste is still processed informally, according to a United Nations Development Programme note in October.



– Rife with hazards –



Indian government-backed think‑tank NITI Aayog warned that organised recycling lagged behind both policy targets and the rapid growth in waste volumes.

Informal recycling is rife with hazards — open burning, acid baths and unprotected dismantling expose workers to toxic fumes and contaminate soil and water.

The bulk of India’s e‑waste still flows through informal channels, leading to a “loss of critical minerals”, said Sandip Chatterjee, senior adviser at Sustainable Electronics Recycling International.

“India’s informal sector remains the backbone of waste collection and sorting,” he told AFP.

In Seelampuri, a low‑income Delhi neighbourhood home to one of India’s largest informal e‑waste hubs, narrow alleys spill over with tangled cables and broken devices.

“The new companies just keep enough for certification, but the rest still comes to us,” said Shabbir Khan, a local trader. “Business has increased… not gone down.”

Even the junk that eventually reaches formal recyclers often passes through informal hands first, Chatterjee said.

“Integrating informal actors into traceable supply chains could substantially reduce” loss of valuable critical minerals at the sorting and dismantling stages, he said.

Ecowork, India’s only authorised non‑profit e‑waste recycler, is attempting that through training and safe workspaces.

“Our training covers dismantling and the (full) process for informal workers,” said operations manager Devesh Tiwari.

“We tell them about the hazards, the valuable critical minerals, and how they can do it the right way so the material’s value doesn’t drop.”

At its facility on the outskirts of Delhi, Rizwan Saifi expertly dismantled a discarded hard drive, slicing out a permanent magnet destined for an advanced recycler, where it will be shredded to recover dysprosium — a rare‑earth metal essential to modern electronics.

“Earlier all we would care about was copper and aluminium because that is what was high-value in the scrap market,” Saifi, 20, said.

“But now we know how valuable this magnet is.”

AI ‘arms race’ risks human extinction, warns top computing expert


By AFP
February 17, 2026


Countries and companies are spending big on building energy-hungry data centres for generative AI tools - Copyright AFP Arun SANKAR


Katie Forster

Tech CEOs are locked in an artificial intelligence “arms race” that risks wiping out humanity, top computer science researcher Stuart Russell told AFP on Tuesday, calling for governments to pull the brakes.

Russell, a professor at the University of California, Berkeley, said the heads of the world’s biggest AI companies understand the dangers posed by super-intelligent systems that could one day overpower humans.

To him, the onus to save the species rests on world leaders who can take collective action.

“For governments to allow private entities to essentially play Russian roulette with every human being on earth is, in my view, a total dereliction of duty,” said Russell, a prominent voice on AI safety.

Countries and companies are spending hundreds of billions of dollars on building energy-hungry data centres to train and run generative AI tools.

The rapidly developing technology promises benefits such as drug discovery, but could also lead to job losses, and facilitate surveillance and online abuse among other threats.

Alongside that is the risk of “AI systems themselves taking control and human civilisation being collateral damage in that process”, Russell said in an interview at the AI Impact Summit in New Delhi.

“Each of the CEOs of the main AI companies, I believe, wants to disarm” but cannot do so “unilaterally” as they would be fired by investors, he said.

“Some of them have said it in public and some of them have told me privately,” he added, noting that even Sam Altman, head of ChatGPT maker OpenAI, has said on record that AI could lead to human extinction.

OpenAI and rival US startup Anthropic have seen public resignations of staff who have spoken out about their ethical concerns.

Anthropic also warned last week that its latest chatbot models could be nudged towards “knowingly supporting — in small ways — efforts toward chemical weapon development and other heinous crimes”.

– Human ‘imitators’ –

International gatherings such as this week’s AI summit provide an opportunity for regulation, although its three previous editions have only resulted in voluntary agreements from tech companies.

“It really helps if each of the governments understand this issue. And so that’s why I’m here,” Russell said.

India is hoping the five-day AI summit, attended by tech bosses and dozens of high-level national delegations, will help it power ahead in the sector.

Indian IT minister Ashwini Vaishnaw said Tuesday that the country expects more than $200 billion in AI investments over the next two years, including roughly $90 billion already committed.

Meanwhile, fears that AI assistant tools could lead to mass redundancies in India’s large customer service and tech support sectors have caused shares in the country’s outsourcing firms to plunge in recent days.

These kinds of back-end jobs in India are ripe for replacement with AI, Russell said.

“We are creating human imitators. And so of course, the natural application for that type of system is replacing humans.”

Russell is sensing a burgeoning backlash against AI, “particularly among younger people”.

“They actually are pushing back against the dehumanising aspects of AI,” he said.

“When you’re taking over all cognitive functions — the ability to answer a question, to make a decision, to make a plan… you are turning someone into less than a human being. The young people do not want that.”


Experts warn open access bio-data could help AI design dangerous pathogens

FILE: A lab technician looks at a computer screen during research on coronavirus, COVID-19 in Belgium. - Copyright 2020 The Associated Press. All rights reserved

By Marta Iraola Iribarren

More than 100 researchers call for safeguards on high-risk biological datasets to prevent AI misuse, which could create deadly pathogens.

Artificial intelligence (AI) models for biology rely heavily on large volumes of biological data, including genetic sequences and pathogen characteristics. But should this information be universally accessible, and how can its legitimate use be ensured?

More than 100 researchers have warned that unrestricted access to certain biological datasets could enable AI systems to help design or enhance dangerous viruses, calling for stronger safeguards to prevent misuse.

In an open letter, researchers from leading institutions, including Johns Hopkins University, the University of Oxford, Fordham University, and Stanford University, argue that while open access scientific data has accelerated discovery, a small subset of new biological data poses biosecurity risks if misused.

“The stakes of biological data governance are high, as AI models could help create severe biological threats,” the authors wrote.

AI models used in biology can predict mutations, identify patterns, and generate more transmissible variants of pandemic pathogens.

The authors describe this as a “capability of concern,” which could accelerate and simplify the creation of transmissible biological pathogens that can lead to human pandemics, or similar events in animals, plants, or the environment.

Biological data should generally be openly available, the researchers noted, but “concerning pathogen data” requires stronger security checks.

“Our focus is on defining and governing the most concerning datasets before they are generally available to AI developers,” they wrote in the paper, proposing a new framework to regulate access.

“In a time dominated by open-weight biological AI models developed across the globe, limiting access to sensitive pathogen data to legitimate researchers might be one of the most promising avenues for risk reduction,” said Moritz Hanke, co-author of the letter from Johns Hopkins University.

What developers are doing

Currently, no universal framework regulates these datasets. While some developers voluntarily exclude high-risk data, researchers argue that clear and consistent rules should apply to all.

Developers of two leading biological AI models, Evo (created by Arc Institute, Stanford and TogetherAI researchers) and ESM3 (from EvolutionaryScale), have withheld certain viral sequences from their training data.

In February 2025, the Evo 2 team announced that they had excluded pathogens infecting humans and other complex organisms from their datasets due to ethical and safety risks, and to “preempt the use of Evo for the development of bioweapons”.

Evo 2 is an open-source AI model for biology that can predict the effects of DNA mutations, design new genomes, and uncover patterns in genetic code.

“Right now, there's no expert-backed guidance on which data poses meaningful risks, leaving some frontier developers to make their best guess and voluntarily exclude viral data from training,” Jassi Panu, a co-author of the letter, wrote on LinkedIn.

Different types of risky data

The authors note that the proposed framework applies only to a small fraction of biological datasets.

It introduces a five-tier Biosecurity Data Level (BDL) scale that classifies pathogen data by risk, based on its potential to enable AI systems to learn general viral patterns and to pose biological threats to both animals and humans. It includes the following tiers (a minimal illustrative encoding follows the list):

BDL-0: Everyday biology data. It should have no restrictions and can be shared freely.

BDL-1: Basic viral building blocks, such as genetic sequences. It doesn’t need big security checks, but login and access should be monitored.

BDL-2: Data on animal virus traits, such as jumping species or surviving outside the host.

BDL-3: Data on human virus characteristics, such as transmissibility, symptoms, and vaccine resistance.

BDL-4: Upgraded human viruses, such as mutations to the COVID-19 virus that make it more contagious. This category would face the strictest restrictions.
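The letter proposes the tiers but not a particular implementation, so the short Python sketch below is only an assumed illustration of how a data repository might encode them and look up the access rule to enforce before serving a dataset; the tier names mirror the list above and the access rules paraphrase its intent.

from enum import IntEnum

class BiosecurityDataLevel(IntEnum):
    # Hypothetical encoding of the five BDL tiers described above.
    BDL_0 = 0  # everyday biology data: no restrictions
    BDL_1 = 1  # basic viral building blocks (e.g. genetic sequences)
    BDL_2 = 2  # animal virus traits (host jumping, survival outside the host)
    BDL_3 = 3  # human virus characteristics (transmissibility, vaccine resistance)
    BDL_4 = 4  # upgraded human pathogens: strictest restrictions

# Assumed access policy per tier, paraphrasing the letter's intent.
ACCESS_POLICY = {
    BiosecurityDataLevel.BDL_0: "open access",
    BiosecurityDataLevel.BDL_1: "authenticated access, logins monitored",
    BiosecurityDataLevel.BDL_2: "vetted researchers only",
    BiosecurityDataLevel.BDL_3: "vetted researchers, audited usage",
    BiosecurityDataLevel.BDL_4: "case-by-case approval, strictest controls",
}

def policy_for(level: BiosecurityDataLevel) -> str:
    # Return the (illustrative) access rule a repository would enforce.
    return ACCESS_POLICY[level]

print(policy_for(BiosecurityDataLevel.BDL_3))  # vetted researchers, audited usage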

Ensuring safe access

To guarantee safe access, the letter calls for specific technical tools that would enable data providers to verify legitimate users and track misuse.

Proposed tools include watermarking – embedding hidden, unique identifiers in datasets to easily track leaks – data provenance records, audit logs that record access and changes with tamper-proof signatures, and behavioural biometrics that can track unique user interaction patterns.
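The letter does not spell out how tamper-proof audit logs would be built; one common approach, offered here purely as an assumed sketch (a real system would add cryptographic signatures on top), is a hash chain in which each access record commits to the previous one, so any retroactive edit breaks verification.

import hashlib, json, time

def append_entry(log: list, user: str, dataset: str, action: str) -> list:
    # Each record's hash covers the previous record's hash, so editing or
    # deleting an earlier entry invalidates every later hash.
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"user": user, "dataset": dataset, "action": action,
              "time": time.time(), "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)
    return log

def verify(log: list) -> bool:
    # Recompute every hash from scratch; any mismatch means tampering.
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log = append_entry([], "researcher_a", "BDL-3/influenza-traits", "download")
print(verify(log))  # True unless a record has been altered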

The researchers argue that striking the right balance between openness and necessary security restrictions on high-risk data will be essential as AI systems become more powerful and widely available.



AI: Use, misuse, and a muddled future


By Dr. Tim Sandle
SCIENCE EDITOR

DIGITAL JOURNAL
February 18, 2026


Anyone with a smartphone and specialized software can create the harmful deepfake images - Copyright AFP Mark RALSTON

AI is a powerful tool, yet it remains simply a tool. It is not a friend, not a companion, and not an infallible source of truth. Used carelessly, it can erode creativity, weaken education, and even cause real harm. This “human-at-the-helm” view emphasises that AI lacks consciousness, desire, or the capacity to make moral decisions, instead following instructions to analyse patterns and generate results based on probability.

Is this view too simplistic? To some, the “AI as a tool” framing is a dangerous oversimplification of a technology that is rapidly changing. For example, emerging “agentic AI” can troubleshoot and take action with minimal human oversight, blurring the line between a tool and an independent actor.

There are also developments in a more present and dangerous direction. With deepfake-related fraud already up more than 2,000% in the past three years, and cases like UK firm Arup losing £20 million to AI-driven impersonation, the problems with the wider use of AI are accelerating.

One example comes from the firm TRG Datacenters, which has shared key warnings with Digital Journal on where AI can go wrong – from fraud and bias to psychological and creative risks – and what to do about it.


AI Deepfakes Open Doors To More Impersonations and Fraud

Deepfake scams are among the fastest-growing threats. In the UK, engineering firm Arup lost £20 million after criminals impersonated executives on a video call. But video is only the tip of the iceberg: AI is also being used to clone voices for scam calls, generate convincing letters from “banks” or “lawyers,” and produce emails so polished that even seasoned professionals are fooled.

Protect yourself: Verified payment portals, digital watermarking, and liveness tests can expose fakes.


AI in Hiring Isn’t As Objective As It Seems


Applicants now use AI to polish résumés, while employers rely on AI to screen them. The result is a stalemate: machine-generated CVs are filtered by machine reviewers, leaving candidates unseen and employers unable to identify real talent.

Protect yourself: Treat AI as a sorting aid, not a final judge. Human recruiters must review shortlists, and platforms should be bias-audited. Overall, automatic rejections by AI hurt both candidates and employers.


AI Chatbots Are Not Friends Or Therapists


As more people use AI chatbots for emotional support, the risks are becoming more noticeable. These systems cannot understand feelings or exercise emotional intelligence. They reflect emotions and, in most cases, tell people what they want to hear, but cannot provide objectivity. The Adam Raine case showed how fragile these safeguards are: a teenager received reinforcement of suicidal thoughts instead of intervention. Without regulation, particularly for children, the dangers will only escalate.

Protect yourself: Platforms must add escalation protocols that route at-risk users to human help. Child-safe filters and stricter oversight are essential to limit harm.

Generative AI Makes You Learn And Analyse Less

Generative AI is efficient, but its overuse is already reshaping how people learn and work. Students, employees, and entire institutions now lean heavily on chatbots to draft papers, homework, and reports. This undermines the very skills education is meant to build: searching, analysing, and developing independent thought.

Protect yourself: Education must adapt with assignments that test reasoning and originality with oral exams, projects, and real-time work.

Used wisely, AI can amplify productivity and open new opportunities, yet it can also go wrong and be misused. Responsibility for how these technologies are used rests with us now, and it is time to build a robust regulatory and safety framework to govern the fair use of AI.






Digital transformation fails ‘without emotional intelligence’


By Dr. Tim Sandle
SCIENCE EDITOR

DIGITAL JOURNAL
February 17, 2026



An office block in London. — Image by © Tim Sandle.

Technology sector leaders have long been praised for driving automation, AI and digital transformation. As the pace of change intensifies, a longstanding skill has possibly become the ultimate differentiator – emotional intelligence.

This revisiting of emotional intelligence has been picked up by Anna Murphy, Chief of Staff at Version 1. Murphy argues that empathy, clarity and emotional awareness are no longer ‘soft skills’, but rather the foundation of sustainable transformation.



Why tech-first strategies fail

Drawing on her experience leading people through complex change, Murphy has told Digital Journal why tech-first strategies fail without trust, how emotionally intelligent leadership empowers innovation, and why future-ready organisations must invest just as much in their people as their platforms.

Murphy contends that a strong human element is needed, noting: “Digital transformation, which has swiftly become imperative in modern business, is often seen as a technical challenge. In reality, it is a human journey because its successful implementation delivers a stronger workforce. The role of leadership is to ensure that innovation empowers people, rather than alienating them. The efficiencies gained through digital transformation need to be felt by the workforce and empower them.”


Digital transformation and the human element

This brings with it consequences. Murphy is of the view: “For leaders pursuing digital transformation, across a wider range of sectors, understanding people is as critical as understanding systems. As skilled workers adapt their capabilities in order to achieve their aspirations, leaders are recognising what motivates them and how technology can play its part.”

Drawing on her own findings, Murphy notes: “After all, my experience has shown that the best strategies often fail without empathy, while even the best technologies underperform without trust.”

More than just automation

One of the common examples of digital transformation is automation. For Murphy this will only take an aspirant company so far – more is needed: “While these steps are integral to making the process operate effectively, the most successful transformations share a much more emotive truth. They elevate people, not just performance. Studies have found that while AI could automate parts of two-thirds of jobs, there is an expectation that AI will work alongside humans, not replace them.”

Developing the human element further, Murphy says: “In their infancy, the most successful companies have, understandably, been driven by logic and efficiency. Yet as artificial intelligence reshapes business models and workforces, the very skills that made these technological leaders successful are being put to the test.”

As an example, Murphy cites: “The ability to code, analyse, and optimise must never be taken for granted, but it is also no longer enough. Technical skills are the catalyst for innovation, but understanding how solutions solve problems for both team members and customers allows you to improve them further. Today’s leaders must also listen, empathise and inspire.”

And as to the key benefits for firms: “When employees feel empowered rather than replaced by technology, you can create an environment in which innovation thrives. This requires emotionally intelligent leadership: leaders who communicate change with clarity, recognise resistance as a natural response, and create psychological safety for experimentation and learning.”

The future of work is deeply human

A few years ago, a Gartner survey found that 80% of executives think automation can be applied to any business decision. According to Murphy: “As we have learned more about the appropriate use cases for automation, and evolving technologies, this belief is swiftly changing as more business leaders revitalise the human element of their organisations.”

She also opines: “In a world of intelligent machines, emotional intelligence, the ability to understand and manage emotions in oneself and others, has become the ultimate competitive advantage. It is what separates leaders who can drive transformation from those who are, at times, consumed by it.”

Looking to the future, Murphy predicts: “As AI and automation continue to evolve, the emotional dimension of leadership will only grow more critical. Machines now have imperative uses in modern business. They analyse data and optimise operations, allowing workforces to concentrate on the human side of their output. Yet they are not able to build belief, nurture talent or foster belonging.”

Moreover, she expresses: “Perhaps it is worth considering that success in technology will hinge less on who has the most advanced systems, and more on who can bring people together around a shared sense of purpose. Emotional intelligence is not the opposite of innovation. It gives innovators a strong foundation.”

Murphy’s concluding comment is: “The leaders who are sensitive to how change impacts their workforce, through emotional intelligence, will not only build better businesses, but also offer better futures for the people who make the goals and outcomes of those businesses possible.”

Opinion: The Endless Epstein saga – Lots of name dropping, no action

By Paul Wallis
EDITOR AT LARGE
DIGITAL JOURNAL
February 18, 2026


The French diplomat started corresponding with convicted US sex offender Jeffrey Epstein in 2010 - Copyright AFP Pedro MATTEY

To expect competence from the Trump administration on any subject, however trivial, is much like expecting a rubber duckie to build the pyramids all by itself. You know it’s not going to happen, but that’s the storyline.

When the subject is law of any kind, it’s even more farcical. Trump typically uses up more court time than anyone. His personal court cases could go on well beyond his natural life span.

The Epstein cases could go on much longer. There’s not even the hint of a prosecution, despite the tonnage of information. Everyone but the victims is apparently innocent until proven guilty, and nobody seems to be in any hurry to do that.

The current hoopla about celebrity names alone could generate any amount of additional litigation to muddy the waters. That’s quite literally all that’s happening with the list of names, and nobody on that list is being accused of anything.

Meanwhile, there’s the Congressional investigation into Epstein. Sort of. Maybe-ish. Maybe not-ish. It’s not really doing very much. Right now it looks more ornamental than effective.

Mainstream media has mindlessly bought in to the Epstein story, but there’s no narrative. There’s not even the beginning of a narrative with an ending. It’s more like a “who didn’t dun it”. So far, it’s more soap opera than storyline.

Even this rather picky, ponderous, pointless process doesn’t have any clear end in sight in terms of law enforcement or even possible legal outcomes.

Right now, when matters related to the Epstein case are being called possible crimes against humanity, that’s almost the whole story.

The rest is innuendo.

You know –

The really useless type of innuendo where the medieval peasants get to wonder what happens in the castles of the rich and futile.

Scandals, however geriatric, rattle around in the big empty spaces in the story.

Famous names drip from the tabloid taps.

The king rants and babbles as the kingdom disintegrates.

Someone puts on a truly inexcusable costume drama as a series of daintily unspecified wars is threatened.

Nothing is actually done about the scandals. It’s like “Bleak House” written by an illiterate.

This wouldn’t even make a pathetic B movie. So what is supposed to be happening, you may well wonder? That’s comparatively simple.

These allegations all relate to major criminal offenses.

Prosecution MUST follow if any evidence is deemed fit to charge anyone with a crime.

That’s not “optional”. No ifs, no buts, no maybes.

Except in this case?

17 years after Epstein was convicted for procuring a child for prostitution, absolutely nothing that could be called “enforcement” has happened. It seems to have taken at least 10 years for anyone to admit anything untoward was happening.

How many victims are there? Anyone’s guess.

How many suicides? Keep guessing.

Who’s being silenced by third parties? Seen Waldo lately?

If you were so inclined, you could add to these elegant enquiries some frivolous merry quips, like:

How obviously corrupt is the law?

Who clearly and directly benefits from such total inaction?

How do so many famous people become so incredibly stupid?

Why has no action been taken?

Enough babble. Start prosecuting.


France opens twin Epstein inquiries and calls on victims to testify

France has launched two formal investigations into the Jeffrey Epstein affair, covering alleged sexual crimes and possible financial wrongdoing, as prosecutors call on potential French victims to come forward following the release of millions of case documents in the United States.

Issued on: 18/02/2026 -RFI

Undated photographs provided by the US Department of Justice on 30 January, 2026 as part of the Jeffrey Epstein files. © AFP

Paris prosecutor Laure Beccuau announced on Wednesday that her office was opening two “framework investigations” after the United States government released nearly 3 million documents linked to American financier and convicted sex offender Jeffrey Epstein on 30 January.

“We want to stand alongside these victims. We will receive all the statements they wish to make,” Beccuau told FranceInfo radio.

On Saturday, the Paris prosecutor’s office said it was taking up the documents published by US authorities as part of the case.

Victims encouraged to testify

Beccuau said the newly released material could prompt victims previously unknown to investigators to come forward.

“These publications will inevitably reactivate the trauma of certain victims, some of whom we believe are not necessarily known,” she said. “Perhaps these new publications will lead them to come forward.”

The two investigations will run in parallel. One concerns alleged sexual offences, while the other examines possible economic and financial matters connected to the case.

Five magistrates will oversee the inquiries, including three assigned to alleged sexual offences and two to financial matters.

“Decisions to conduct interviews will be taken once we have gathered evidence,” Beccuau said.

Investigators will analyse the documents using support from France’s anti-cybercrime office and artificial intelligence tools, while also relying on press reporting, open sources and possible complaints from organisations working to protect minors.

Beccuau said the prosecutor’s office could move quickly if clear evidence emerges.

“If we have fully established facts, nothing will prevent us from initiating initial proceedings,” she said, adding that the two investigations could last “several months, or even several years”.

Individuals named

Anyone named in the Epstein files could become the subject of an investigation if French law applies, the Paris prosecutor’s office said on Saturday.

Among those cited in France are former French culture minister Jack Lang and diplomat Fabrice Aidan. Daniel Said, a model recruiter described in the case as a possible associate of Epstein's in Paris, could also be questioned.

“He is among the people who could be interviewed,” Beccuau said, noting that some alleged incidents could fall under the description of organised human trafficking offences.

Prosecutors are already analysing two complaints linked to the case. One was filed last Wednesday by former model Ebba Karlson, who accuses Said of raping her in France in 1990.

The second case was transferred from prosecutors in Thonon-les-Bains, eastern France, and concerns alleged sexual harassment in 2016 involving conductor Frédéric Chaslin. Prosecutors said the complaint is currently being examined.

Epstein owned an 800-square-metre apartment on Avenue Foch in Paris, where he spent several weeks each year over two decades.

(with newswires)

French prosecutors announce special team to analyse Epstein files

The Paris prosecutor's office on Saturday announced it was setting up a special team to analyse files relating to convicted sex offender Jeffrey Epstein and investigate suspected crimes involving French nationals. As part of that initiative, they will be reopening their files on the late Jean-Luc Brunel, a former French modelling agency executive.


 15/02/2026 
By: FRANCE 24

A photo of Epstein on an inmate report that was included in the US Department of Justice release of the Jeffrey Epstein files, photographed Tuesday, February 10, 2026. © Jon Elswick, AP


The Paris prosecutor's office on Saturday announced it was setting up a special team of magistrates to analyse evidence that could implicate French nationals in the crimes of the convicted US sex offender Jeffrey Epstein.

With Epstein's known circle now extending to prominent French figures after the release of documents by the US authorities, the prosecutor's office said it would also thoroughly re-examine the case of Jean-Luc Brunel, a former French modelling agency executive and close associate of the American financier, who died in custody in 2022.

The new team will work closely with prosecutors from the national financial crimes unit and police with a view to opening investigations into any suspected crimes involving French nationals, the Paris prosecutor's office told AFP.

The aim is "to be able to extract any piece that could be usefully reused in a new investigative framework", it said.

Brunel was found dead in his cell in a Paris prison in 2022 after having been charged with raping minors. The case against him was dropped in 2023 in the wake of his death, with no other person charged.

Prosecutors said an investigation had showed Brunel was "a close friend of Jeffrey Epstein" who had offered modelling jobs to young girls from poor backgrounds.

He had engaged in sexual acts with underage girls in the United States, the US Virgin Islands, Paris and the south of France, they said.

Ten women had made accusations against Brunel, several describing how they had been led to drink alcohol and had been subjected to forced sexual penetration, according to the prosecutor's office.

New cases

Several French public figures feature in the latest US Department of Justice release of material from the Epstein files, though being mentioned there does not in itself mean any offence has been committed.

The prosecutor's office said it had been asked to look into three new specific cases involving a French diplomat, a modelling agent and a musician.


At the request of the French foreign ministry it was looking into the reported appearance of senior diplomat Fabrice Aidan in the cache of Epstein-related documents published by the US authorities.

"An investigation is underway to gather various pieces of evidence that could substantiate this report," the prosecutor's office stated.


The prosecutor's office has also received a complaint filed by a Swedish woman against Daniel Siad, a model recruiter with close ties to Epstein. She accused him of "sexual acts that she describes as rape and that may have been committed in France in 1990".

The office has also received a complaint filed against French conductor Frédéric Chaslin alleging acts of sexual harassment allegedly committed in 2016, it said.

The latest release of Epstein files has led to French former minister Jack Lang resigning from his position as the head of a top cultural body, the Arab World Institute.

Lang has however denied any wrongdoing, saying he was "shocked" that his name appeared in the statutes of an offshore company Epstein founded in 2016.

The office of the national financial prosecutor said it had opened a preliminary investigation for "aggravated tax fraud and money laundering" against Lang and his daughter Caroline Lang.

Following this announcement, Lang resigned from the presidency of the Arab World Institute.

Epstein died in prison in 2019 while awaiting trial for trafficking children, in what the US authorities ruled was a suicide.

(FRANCE 24 with AFP)


Greek taxis kick off two-day strike against private operators


By AFP
February 18, 2026


The Athens taxi drivers union opposes new rules they say favour private vehicles, warning of a "battle for survival" - Copyright AFP Ludovic MARIN

Taxi drivers in Greece on Wednesday kicked off a two-day nationwide strike over new rules which they say excessively favour private vehicles for hire.

“This is a battle for survival,” the Athens taxi drivers union SATA, which began the strike a day earlier, said in a statement.

“We apologise to passengers… for any inconvenience they may experience in the coming days, but our struggle is also a struggle on their behalf,” SATA said.

“It is a fight to protect their right to have access to a public-service mode of transport (taxis), and not to private cartels,” the union said.

Until now, Greece had protected the sector by allowing platforms such as Uber to operate only with licensed taxis.

Cabbies also want the government to postpone a requirement that all new taxis that enter the fleet from January 1 be electric.

They are demanding a deadline of 2035 for the transition to electromobility, arguing that the measure is currently unworkable due to a significant lack of charging stations for electric vehicles.

Taxi drivers in the greater Athens area have threatened to launch an indefinite strike in the near future if their demands are not met.

Since the Covid-19 pandemic, tourism has been breaking records in Greece: more than 40 million visitors in 2024, a performance that is expected to be surpassed again in 2025.

GREEN REVANCHISM

US energy chief says IEA must ‘drop’ focus on climate change


By AFP
February 18, 2026


US Energy Secretary Chris Wright said the IEA had been 'infected' by a 'climate cult' - Copyright AFP/File Juan BARRETO



Laurent THOMET

US Energy Secretary Chris Wright urged the International Energy Agency on Wednesday to abandon its work on climate change and focus instead on its founding mission.

Wright threatened last year to pull the United States out of the IEA — which was founded to coordinate responses to major disruptions of supplies after the 1973 oil crisis — unless it reformed the way it operates.

The IEA was created “to focus on energy security”, Wright said on Wednesday at a ministerial meeting of the agency in Paris.

“That mission is beyond critical and I’m here to plead to all the members (of the IEA) that we need to keep the focus of the IEA on this absolutely life-changing, world-changing mission of energy security,” the former fracking magnate said.

He said he wanted to get support from “all the nations in this noble organisation to work with us, to push the IEA to drop the climate. That’s political stuff”.

Speaking earlier, IEA Executive Director Fatih Birol insisted that the Paris-based agency was “data-driven”.

“We are a nonpolitical organisation,” he added.

The IEA produces monthly reports on oil demand and supply as well as annual world energy outlooks that include data on the growth of solar and wind energy, among other analyses.

Wright praised Birol for reinserting a scenario that looked at the growth of oil and gas demand — which had been dropped from the reports in 2020 — in last November’s annual outlook.

In an interview with AFP on Tuesday, Wright said the IEA has “made some first steps” to reform but still has “a long way to go”.

But the US energy chief also pressed on with his criticism, telling reporters before the start of Wednesday’s meetings: “The IEA has been infected with sort of a climate cult that’s about energy subtraction.”



– ‘Age of electricity unstoppable’ –



President Donald Trump, who has called human-driven global warming a hoax, has pulled the United States out of the United Nations’ bedrock climate treaty and, last week, dismantled the legal basis for US climate rules.

Wright has used his time in Paris to challenge the consensus on climate science.

“This belief that climate change is urgent, it’s causing catastrophic damage today, and we have to drop everything and focus everything on that: I can tell you nothing, nothing in the climate data supports that,” he said.

The European Union’s climate monitor, however, says the last three years have been the hottest globally on record, driven by rising greenhouse gas emissions that are causing global warming.

Experts warn that rising global temperatures are bringing hotter summers, more frequent flooding, stronger storms and increasingly devastating wildfires and droughts.

In a sign that not all nations agree with Wright, British energy secretary Ed Miliband announced that the UK would contribute a further 12 million pounds ($16 million) to the IEA’s Clean Energy Transitions Programme.

“The age of electricity is unstoppable,” Miliband said.

For many countries, he added, “clean energy is the most secure and affordable way to meet this rising demand over the long term.”

He praised the IEA and Birol, saying: “You treat all members equally and fairly.”


U.S. Threatens to Quit IEA Over Green Energy Advocacy


The United States has threatened, once again, to quit the International Energy Agency (IEA) if the organization, created in the aftermath of the 1970s Arab oil embargo, doesn’t return to forecasting energy demand without strongly promoting green energy.

“If it goes back to what it was — it was a fabulous international data recording agency, it was getting into critical minerals, was focused on big energy issues — we’re all in on that,” U.S. Energy Secretary Chris Wright said ahead of an IEA ministerial meeting this week.

“But if they insist that it’s so dominated and infused with climate stuff — yes, then we’re out,” Secretary Wright said ahead of the meeting, as carried by Bloomberg.

Last November, the IEA dropped its prediction that oil demand growth would peak within a few years, the first major shift since it started promoting net-zero and green energy early this decade.

The tension between the Trump Administration and the IEA has escalated in recent months. 

A House committee last summer approved a bill to withdraw U.S. funding from the IEA, as Republican lawmakers consider that the agency has strayed from its mission to safeguard energy security and has been pushing green energy policies instead.

In July 2025, Secretary Wright said that the United States could abandon the IEA if the organization continues with its strong advocacy for renewables and doesn’t return to rational analysis of energy demand and promoting energy security.   

“We will do one of two things: we will reform the way the IEA operates or we will withdraw,” Wright told Bloomberg in an interview in the middle of July. 

“My strong preference is to reform it,” Secretary Wright added.  

The official echoed voices in the U.S. Republican party that the agency has become an advocate of the energy transition and is not objective in forecasting energy demand trends.

By Tsvetana Paraskova for Oilprice.com


‘Climate cult’ hurts Europe’s economy, US energy secretary tells AFP


By AFP
February 17, 2026


US Energy Secretary Chris Wright told AFP the US remains a "stout ally" of Europe - Copyright GETTY IMAGES NORTH AMERICA/AFP/File ALEX WONG
Laurent Thomet, Kate Gillam and Ali Bekhtaoui

A “climate cult” has weighed on Europe’s economy, US Energy Secretary Chris Wright told AFP on Tuesday, adding that the United States has shown its allies “tough love” because it wants them to become stronger.

Wright is attending ministerial meetings at the Paris-based International Energy Agency (IEA) this week, months after US-European ties were rattled over President Donald Trump’s bid to acquire Greenland.

In an interview with AFP, Wright said Europe can count on the United States as a reliable partner despite the tensions over the Danish autonomous territory.

He also defended Trump’s decision last week to repeal the legal basis for US climate rules, downplaying concerns about rising carbon emissions.

“That’s been sort of a side effect of the modern world,” said Wright, a former fracking magnate.

“The real impact is the world’s a little bit warmer, a little bit greener, a little bit wetter … And all the policies, noise in Europe, in the US, and all that, don’t even move the needle on that.”

The EU’s climate monitor, however, says the last three years have been the hottest globally on record, driven by rising greenhouse gas emissions that are causing global warming.


– ‘Tough love’ –

Asked what message he had for Europe, Wright said: “We just need to be serious and sober about energy. Energy makes people’s lives better.”

He said the “climate cult” has driven up energy prices in Europe while the continent produces less of it.

“It has reduced economic opportunities for Europeans,” he said. “We want a strong, powerful, industrial, wealthy, prosperous Europe.”

EU energy commissioner Dan Jorgensen said last month that there were increasing worries over Europe becoming too dependent on the United States for liquefied natural gas (LNG) following the Greenland spat.

Europe vowed to buy huge amounts of fossil fuels from the United States as part of a trade deal to end a tariffs row last year.

“Geopolitical turmoil in the wake of the crisis in Greenland has been a wake-up call,” Jorgensen told reporters.

Speaking to AFP after a conference at the French Institute of International Relations think tank, Wright said Europe should not worry as the United States remained a “stout ally”.

Trump has a “very aggressive” style but “there was never a possibility the US was going to invade Greenland”, he said.

“In fact, all of the United States’ tough love is to try to get Europe to have a stronger military, stronger energy system, stronger economy, to be better, stronger allies with us.”

He said the United States would not use LNG as political leverage.

“We will be a rock solid, reliable supplier of LNG to Europe,” Wright said.

– ‘Crazy policy’ –

Wright, who is attending IEA meetings in the French capital on Wednesday and Thursday, has been critical of the organisation’s focus on renewable energy and threatened to withdraw the United States if it did not reform.

The 31-member IEA was founded in 1974 to help coordinate collective responses to major disruptions of supplies in the wake of the 1973 oil crisis.

Wright told AFP that IEA has “made some first steps” to reform but still has “a long way to go”.

“A lot of the IEA work is focused on climate change and the Paris net zero thing,” he said.

Scientists say that the world must reach net zero emissions by 2050 if it is to meet the Paris Agreement’s goal of limiting warming to 1.5C above preindustrial levels.

“That’s a crazy policy,” Wright said. “Climate advocacy groups can do what they want, but you can’t have climate advocacy within an honest group that’s about energy security.”

– Trump’s ‘revolutionary’ Venezuela idea –

Wright’s trip to Paris comes a week after he became the highest-ranking US official to visit Venezuela since US special forces captured and overthrew socialist leader Nicolas Maduro on January 3.

Trump, he said, had “a revolutionary geopolitical idea. And so far it’s working swimmingly”.

The goal, he said, is to “dramatically grow” Venezuelan oil production, improve the lives of Venezuelans, and reduce the “criminal and migration and kidnapping” threats on the United States.

Since Maduro’s capture, around $1 billion in oil revenue has flowed through US-controlled accounts, Wright said, adding: “All the money is going back to Caracas.”