May 17, 2026
Observer Research Foundation
By Prateek Tripathi
The current Artificial Intelligence (AI) revolution was largely driven by the development of the transformer architecture in 2017 and the subsequent creation of Large Language Models (LLMs). The majority of ensuing progress in AI has hinged on LLMs, including generative AI (GenAI), diffusion models, and Agentic AI. The seemingly remarkable progress made by these models has led to a multitude of claims by AI developers and experts, ranging from potential mass layoffs to the supposedly near-term prospect of Artificial General Intelligence (AGI).
On closer inspection, however, most of these arguments seem to fall apart, with AI adoption and automation witnessing widespread failure across multiple domains and use cases. Moreover, the current hyperscaling model of AI development is gradually becoming unsustainable due to ever-increasing energy and resource requirements, further compounded by the massive debts being incurred by AI companies and hyperscalers pursuing it. This should serve as a wake-up call for the Global South, which is in the process of honing and deploying its own sovereign AI capabilities. In the aftermath of the IndiaAI Impact Summit 2026, these issues further necessitate a reassessment of the Global South’s current development model and underscore the need to retain its human-centric roots rather than relying on its increasingly AI-centric propensities.
Identifying Failures in AI Use Cases and Deployment
Since the inception of GenAI, multiple company executives have repeatedly claimed that AI would imminently automate tasks hitherto performed by humans, particularly in areas such as coding and remote labour. However, these claims have been undermined on multiple occasions. According to a randomised controlled trial conducted by Model Evaluation and Threat Research (METR) in 2025, open-source coders utilising AI took 19 percent longer to perform tasks than those operating without AI. The Remote Labour Index, developed by the Centre for AI Safety and Scale AI, found that virtually all frontier AI models remain woefully inadequate at automating remote labour tasks, with the best-performing model (Opus 4.6) achieving an automation rate of just 4.17 percent.
According to MIT’s State of AI in Business 2025 report, 95 percent of GenAI pilot projects have reportedly failed. Examples of failed AI adoption include multiple corporations such as McDonald’s, DPD, Air Canada, Klarna, and Salesforce, some of which fired employees in favour of AI agents only to subsequently re-hire them. Different sectors, such as fintech, healthcare, education, manufacturing, and government, each face their own misgivings regarding AI adoption. For example, multiple studies by the University of Oxford and Stanford University have pointed out the dangers of employing AI chatbots in healthcare. A recent study by the Emergency Care Research Institute (ECRI) identified the misuse of AI chatbots as the top health technology hazard in 2026.
This is further compounded by a deliberate obfuscation of the term “AI” to circumvent scrutiny, using it to describe tasks that do not require AI whatsoever. For instance, Norwegian tech company 1X announced NEO, the world’s first consumer-ready humanoid robot, in 2025. While NEO initially claimed to utilise AI, it was later found to rely on remote employees to perform certain tasks, potentially violating user privacy while claiming to be AI-automated.
Consequently, while AI automation remains in vogue amongst AI developers and enthusiasts, in several cases, it appears to function as a guise for austerity measures. Despite multiple claims to the contrary, the body of peer-reviewed and rigorous research on successful AI use cases is quite limited, with LLMs serving as inadequate replacements for humans in the vast majority of cases, and in several others actively hindering the human workers they were meant to assist.
The Unsustainable Nature of Current AI Models
In addition to the aforementioned adoption failures, the massive energy requirements of data centres are steadily making the current hyperscaling model of AI development unsustainable, with multiple instances of widespread blackouts, water shortages, and air pollution prompting numerous community protests around the globe. For instance, data centres already account for over 4.4 percent of annual US electricity consumption as of 2023, a figure that has nearly doubled since 2018. Furthermore, AI power bottlenecks have led to widespread delays in multiple data centre projects, with about 11 GW of planned 2026 global capacity remaining “in the announced stage with no signs of construction.”
Figure 1: Global Data Centre Capacity Additions by Operation Date (in Gigawatts)

On the financial front, most pure-play AI companies and hyperscalers have amassed massive debts due to limited return on investment, leading to increasing claims of circular investments and an imminent burst of the so-called “AI bubble”. For instance, despite over US$1.4 trillion in financial commitments, OpenAI registered an annual revenue of only about US$20 billion in 2025. The situation is similar for hyperscalers such as CoreWeave, which plans to spend US$30–35 billion in 2026 despite an annual revenue of just over US$5 billion in 2025.
While capital misallocation has been a common feature of tech booms such as the “Dot Com Bubble” in the past, the chief difference was that most of the built infrastructure was eventually salvageable even after the bubble burst. In the case of the AI bubble, the massive data centre infrastructure currently being built will have very limited utility once LLMs plateau. However, with Big Tech companies now firmly locked into the lengthy and cost-intensive hyperscaling paradigm, they do not possess the option to course-correct any longer.
Why AI Adoption Fails: The Fundamental Problem with LLMs
One of the primary reasons for the current interest and historic investments in large pre-trained models and the hyperscaling paradigm is the “emergent abilities” of LLMs, particularly when it comes to plausible reasoning. This has resulted in widespread speculation that LLMs will inevitably evolve into increasingly capable models, eventually paving the way to the holy grail of AGI. However, there is evidence suggesting that emergent abilities in LLMs are most likely an artefact of inadequate metrics and benchmarks. Furthermore, the rise in benchmark performance as LLMs scale may reflect enhanced pattern memorisation rather than genuine reasoning or linguistic ability, and is poised to plateau in the future, especially under more sophisticated benchmarks.
According to a survey conducted by the Association for the Advancement of Artificial Intelligence (AAAI) involving 475 experts, 76 percent of respondents stated that current machine learning paradigms are unlikely to yield AGI. Factuality remains a fundamental limitation in current LLMs and GenAI systems, contributing to issues including hallucinations and biases and undermining AI trustworthiness.
While approaches to improve factuality include reinforcement learning, Retrieval-Augmented Generation, and Chain-of-Thought reasoning, future AI advancement may rely on the development of new or hybrid neural network architectures, such as neuro-symbolic reasoning systems, as well as non-neural architectures such as Information Lattice Learning. However, these alternative paradigms remain at an early stage of development.
This suggests that the current AI paradigm, largely hinging on LLMs, suffers from systemic and structural inadequacies, rendering it unsuitable for mass applicability. Therefore, AI deployment requires enhanced scrutiny, particularly in use cases affecting critical human sectors.
Conclusion: The Case for a Human-Centric Global South Agenda
AI adoption and cooperation in the Global South, particularly in the realm of human and societal development, served as a major theme for the IndiaAI Impact Summit 2026. However, the dangers posed by rushing AI adoption cast this approach into serious doubt. For those claiming to maximise societal benefit, the current risks posed by LLMs far outweigh the benefits accruing from their mass adoption. Far from simply being a matter of hallucinations or fabricated outputs, societal AI applications affect real people and risk having a detrimental impact on their livelihoods.
This is not to say that AI is of no societal benefit. There have been multiple instances of successful AI utility. For instance, India’s deployment of chatbots for language translation through platforms such as Bhashini has been demonstrably successful. Research tools such as AlphaFold have been highly effective in accelerating scientific innovation, to the extent that the Google DeepMind team received a Nobel Prize in Chemistry in 2024. However, it must be emphasised that while AI can serve as a tool for supplementing human capabilities, it is far from replacing them and continues to require substantial human intervention and oversight. Furthermore, the primary reason behind such successful AI use cases is that errant outputs do not carry significant real-world consequences in these contexts. For instance, a hallucinated language translation or ChatGPT response does not pose a serious threat to any individual’s livelihood. On the other hand, even a small proportion of such outputs could have severe ramifications in the case of a healthcare chatbot or a farming assistant.
The global AI adoption narrative has had the unfortunate effect of gradually reducing humans to mere data points, an antithesis to the Global South’s decades-long pursuit of its human development and inclusion agenda. Fast-tracking AI adoption under global peer pressure, or succumbing to the “Fear of Missing Out”, can have catastrophic consequences for the Global South, which risks falling victim to clever marketing strategies engineered by a handful of corporations. Consequently, the Global South needs to realign its increasingly AI-focused development agenda, centring it on labour rights and human development rather than AI adoption for its own sake. It must identify low-risk AI use cases and target sectors where AI can have maximum utility, while resisting the mass-adoption narrative, at the very least in critical sectors where even a small error rate can cause real human harm.
Malta offers free ChatGPT Plus access to its citizens through a national AI program

Citizens and residents registered with Malta’s online identity system can apply to get access to ChatGPT Plus after completing a free online course.
OpenAI has signed its first partnership with a national government, bringing the paid version of ChatGPT to residents of Malta for free.
OpenAI and the Government of Malta on Saturday announced a deal that will give every citizen free access to the artificial intelligence (AI) chatbot for one year through a government-led AI literacy programme.
Citizens and residents registered with Malta’s online identity system can apply after completing a free online course called AI for All, developed by the University of Malta.
According to the Malta Digital Innovation Authority, the course is designed to help people understand what AI is, what it can and cannot do, and how to use it responsibly at home and at work.
The first phase of the programme will launch in May, according to the announcement.
The Malta Digital Innovation Authority will manage access to the free subscriptions, and it said the programme will grow as more people complete the course.
“By pairing this education with free access to the most advanced digital tools available today, we are turning an unfamiliar concept into practical assistance for our families, students, and workers,” said Silvio Schembri, the country’s minister for economy, enterprise and strategic projects, in an announcement.
The partnership is the first of its kind, according to the announcement.
“Malta is leading the way by showing how countries can empower their citizens to benefit from the transformative potential of AI,” said George Osborne, head of OpenAI for Countries, an initiative by OpenAI “built around local priorities”.
The partnership is part of a growing trend among governments to find practical ways to help people build confidence using AI and apply it to everyday tasks.
Last year, Anthropic announced a project that gives all teachers in Iceland access to Claude, its AI assistant, to help with lesson planning, classroom materials and administrative tasks.
In September 2025, OpenAI announced a partnership with the Greek government to bring its technology to secondary schools and start-ups across the country.
Meanwhile, in February 2025, the UK government signed a memorandum of understanding with Anthropic to improve how people access and interact with government information and services online.
