Monday, February 23, 2026

Off-Balance Sheet AI Financing Stirs Tech Bubble Fears

  • Big Tech firms like Meta and Oracle are using special purpose vehicle (SPV) financing to keep billions of dollars of borrowing for AI infrastructure, such as data centres, off their main balance sheets.

  • The structure uses an external entity to raise debt and lease the infrastructure back to the tech group. While legal, the practice is raising concerns among market watchers about complexity and hidden leverage, particularly if returns fail to match the massive spending.

  • Despite these financial structures and forecasts of huge corporate bond issuance to fund AI expansion, the largest US tech firms still hold substantial cash reserves, and the scale of off-balance sheet arrangements is considered modest relative to their enormous projected cash flows.

Meta is paying roughly $6.5bn (£4.82bn) in extra financing costs to keep $27bn of AI infrastructure borrowing off its balance sheet, a costly accounting choice that captures the mood in Big Tech’s race to build the pipes of AI without spooking investors.

The arrangement, known as special purpose vehicle (SPV) financing, allows an external entity to raise debt, construct the data centre, and lease it back to the tech group.

On paper, Meta books lease payments rather than traditional borrowing, but in practice it has committed to decades of payments tied to huge computing facilities.

The structure was used for Meta’s $30bn data centre project in Louisiana, which was financed largely through private credit heavyweights like Blue Owl Capital, Pimco, BlackRock and Apollo.

Meta owns around 20 per cent of the vehicle and has offered a residual value guarantee, meaning it could be required to compensate investors if the project’s value falls below agreed levels at the end of the lease.

Oracle, too, has financed tens of billions of dollars of AI data centre investment in similar ways, including a $38bn package tied to its partnership with OpenAI.

Elon Musk’s xAI has raised $20bn via a comparable structure, with debt secured against Nvidia chips.

And in some cases, Nvidia has even invested equity in customers that then use the capital to buy its hardware, a circular flow of money that keeps revenue ticking while the liabilities sit outside the chip giant’s accounts.

The accounting is legal and disclosed, but it is unfolding against a backdrop of eye-watering AI forecasts and a surge in borrowing across the sector.

Morgan Stanley estimates hyperscalers could raise $400bn in corporate bonds in 2026 alone to fund AI expansion.

JPMorgan has calculated that AI and data centre firms now account for 14.5 per cent of its $10tn investment-grade bond index, which is about $1.5tn in debt exposure.

UBS says roughly $450bn has flowed from private capital into tech infrastructure as of early 2025.

For market watchers, that scale and complexity are stirring memories of past tech bubbles.

AJ Bell’s Russ Mould points to Richard Bookstaber’s study of past market crises. “He argued that leverage, complexity and opacity help to fuel bubbles,” he told City AM.

“The use of special purpose vehicles and off-balance sheet structures to fund enormous AI capital investment will bring back bad memories for experienced investors.”

He also adds that while these structures comply with accounting rules, “more debt and more complexity mean more risk”, particularly if returns do not match the spending.

Strong balance sheets

But this doesn’t yet look like a rerun of the late-1990s telecom crash, and the biggest US tech firms are still sitting on huge piles of cash.

Among the hyperscalers, only Oracle and Apple currently carry more long-term debt than cash and short-term investments.

Nvidia’s debt-to-capital ratio stands at 8.3 per cent; Alphabet’s at 10.3 per cent; Meta’s at 27.9 per cent.

Oracle’s is far higher at 83.9 per cent, though still investment grade, albeit on negative watch.

Matt Britzman, senior equity analyst at Hargreaves Lansdown, told City AM: “Among the four largest public market AI investors – Amazon, Alphabet, Meta and Microsoft – total calendar-year 2026 capex is forecast at over $600bn, so it’s not like these companies are trying to hide their ambitions”.

He adds that the combined operating cash flow across the group is expected to approach $700bn in 2026.

“Off balance sheet arrangements also look modest in scale relative to the enormous cash flows that big tech are pulling in, which reduces concerns about hidden leverage.”

“Demand for compute remains extremely strong, and cloud giants are still seeing rental demand for six-year-old A100 chips”, Britzman added.

The question, then, is less about solvency today and more about future durability.

Gartner forecasts global AI spending will hit $2.52tn in 2026, up 44 per cent year on year.

By 2030, it expects AI to completely dominate IT budgets. But credit agencies have flagged that some of the sector’s biggest customers, including OpenAI, are not expected to turn profitable until later in the decade.

These data centres are financed on the basis of 20-year demand assumptions.

And if that demand translates into projected revenue growth, these structures will look prescient. On the other hand, if it does not, the risk will sit in private credit vehicles and long-term leases.

By City AM 


Global summit calls for ‘secure, trustworthy and robust AI’


By AFP
February 21, 2026


The summit was attended by tens of thousands of people including top tech CEOs. — © AFP Arun SANKAR


Katie Forster

Dozens of nations including the United States and China called for “secure, trustworthy and robust” artificial intelligence, in a declaration issued Saturday after a major summit on the technology in New Delhi.

The statement signed by 86 countries did not include concrete commitments to regulate the fast-developing technology, instead highlighting several voluntary, non-binding initiatives.

“AI’s promise is best realised only when its benefits are shared by humanity,” said the statement, released at the close of the five-day AI Impact Summit.

It called the advent of generative AI “an inflection point in the trajectory of technological evolution.”

“Advancing secure, trustworthy and robust AI is foundational to building trust and maximising societal and economic benefits,” it said.

The summit — attended by tens of thousands including top tech CEOs — was the fourth annual global meeting to discuss the promises and pitfalls of AI, and the first hosted by a developing country.

Hot topics discussed included AI’s potential societal benefits, such as drug discovery and translation tools, but also the threat of job losses, online abuse and the heavy power consumption of data centres.

Analysts had said earlier that the summit’s broad focus, and vague promises made at the previous meetings in France, South Korea and Britain, would make strong pledges or immediate action unlikely.

– US signs on –

The United States, home to industry-leading companies such as Google and ChatGPT maker OpenAI, did not sign last year’s summit statement, warning that regulation could be a drag on innovation.

“We totally reject global governance of AI,” US delegation head Michael Kratsios had said at the Delhi summit on Friday.

The United States signed a bilateral declaration on AI with India on Friday, pledging to “pursue a global approach to AI that is unapologetically friendly to entrepreneurship and innovation”.

But it also put its name to the main summit statement, the release of which was originally expected Friday but was delayed by one day to maximise the number of signatories, India’s government said.

On AI safety risks — from misinformation and surveillance to fears of the creation of devastating new pathogens — Saturday’s summit declaration struck a cautious tone.

“Deepening our understanding of the potential security aspects remains important,” it said.

“We recognize the importance of security in AI systems, industry-led voluntary measures, and the adoption of technical solutions, and appropriate policy frameworks that enable innovation.”

On jobs, it emphasised reskilling initiatives to “support participants in preparation for a future AI driven economy”.

And “we underscore the importance of developing energy-efficient AI systems” given the technology’s growing demands on natural resources, it said.

– ‘Unacceptable risk’ –

Computing expert and AI safety campaigner Stuart Russell told AFP that Saturday’s commitments were “not completely inconsequential”.

“The most important thing is that there are any commitments at all,” he said.

Countries should “build on these voluntary agreements to develop binding legal commitments to protect their peoples so that AI development and deployment can proceed without imposing unacceptable risks”, Russell said.

Some visitors had complained of poor organisation, including chaotic entry and exit points, at the vast summit and expo site in Delhi.

The event was also the source of several viral moments, including the awkward refusal of rival US tech CEOs — OpenAI’s Sam Altman and Dario Amodei of Anthropic — to hold hands on stage.

The next AI summit will take place in Geneva in 2027. In the meantime, a UN panel on AI will start work towards “science-led governance”, the global body’s chief Antonio Guterres said Friday.

The UN General Assembly has confirmed 40 members for a group called the Independent International Scientific Panel on Artificial Intelligence.

It was created in August, aiming to be to AI what the UN’s Intergovernmental Panel on Climate Change (IPCC) is to global environmental policy.

India has used the summit to push its ambition to catch up with the United States and China in the AI field, including through large-scale data centre construction powered by new nuclear plants.

Delhi expects more than $200 billion in investments over the next two years, and US tech giants unveiled a raft of new deals and infrastructure projects in the country during the summit.

TECH BROS

‘Alpha male’ AI world shuts out women: computing professor Wendy Hall



By AFP
February 20, 2026


The AI sector is 'totally male-dominated', warns top computer scientist Wendy Hall - Copyright AFP Ludovic MARIN


Katie Forster


Artificial intelligence could change the world but the dearth of women in the booming sector will undermine pledges for inclusive technology, top computer scientist Wendy Hall told AFP on Friday.

Hall, a professor at Britain’s University of Southampton known for her pioneering research into web systems, said that the gender imbalance had long been stark.

“All the CEOs are men,” the 73-year-old said, describing the situation at a major AI summit held in New Delhi this week as “amazingly awful”.

“It’s totally male-dominated, and they just don’t get the fact that this means that 50 percent of the population is effectively not included in the conversations.”

Gender bias “creeps through everything, because they don’t think about it when they build their products”, Hall said.

She was speaking in an interview at the AI Impact Summit, where dozens of governments are expected to lay out a shared vision on how to handle the promises and pitfalls of generative AI.

Prime Minister Narendra Modi, who is pushing for India to become a global AI power, said Thursday that advanced computing systems “must become a medium for inclusion and empowerment”.

But when he posed on stage for a photo with leading tech business figures, 13 men were present and only one woman — Joelle Pineau, a former Meta researcher who is now chief AI officer at Cohere.

It was a similar story at another photo opportunity with world leaders including French President Emmanuel Macron and Brazil’s Luiz Inacio Lula da Silva.

– ‘Biased world’ –

Many studies have shown how generative AI tools like ChatGPT and Google’s Gemini reflect stereotypes contained in the vast reams of text and images they are trained on.

“We’re a biased world, so the training is done on biased data,” Hall said.

A 2024 UNESCO study found that large language models described women in domestic roles more often than men, who were more likely to be linked to words like “salary” and “career”.

While tech companies work to counter these built-in machine biases, women have found themselves targeted by AI tools in other ways.

Several countries moved to ban Elon Musk’s Grok AI tool this year after it sparked global outrage over its ability to create sexualised deepfakes depicting real people — mostly women — in skimpy clothing.

Hall, a longtime advocate for women in technology, said that things had “not really improved that much” since she had her start decades ago.

“In AI, it’s getting worse.”

Few women choose to study computer science in the first place, then “once you get more senior, women fall away”, Hall said.

Women-led startups “don’t get the investment that the men get”, and many simply “get fed up”, she added.

Women also “drop out because they just don’t want to be part of that alpha male world”.

– ‘Felt like giving up’ –

Hall, who wrote her first paper about the lack of women in computing in the late 1970s, said she had faced “all sorts of barriers” during her career.

“I’ve had to push through, be strong, have good mentors. And yeah, I felt like giving up many times.”

She was made a dame in 2009, and has also acted as a senior adviser to the British government and the United Nations on artificial intelligence.

But at her first job interview at a university nearly five decades ago, “I was told I couldn’t have the job because I was a woman” by an all-male panel, she recalled.

“I was supposed to be teaching maths to engineers, and they said as a young woman I wouldn’t be able to control a class of male engineers.”

Although she has noticed no uptick in women entering the field overall, Hall said she had been inspired in New Delhi.

“The wonderful thing about this conference are the young people here,” she said.

“There are a lot of young women here from India and they’re all abuzz with the opportunities.”

India chases ‘DeepSeek moment’ with homegrown AI models

By AFP
February 19, 2026


India's Prime Minister Narendra Modi (C) takes a group photo with AI company leaders at the AI Impact Summit in New Delhi on February 19, 2026 - Copyright AFP Ludovic MARIN


Katie Forster and Uzmi Athar

Fledgling Indian artificial intelligence companies showcased homegrown technologies this week at a major summit in New Delhi, underlining big dreams of becoming a global AI power.

But analysts said the country was unlikely to have a “DeepSeek moment” — the sort of boom China had last year with a high-performance, low-cost chatbot — any time soon.

Still, building custom AI tools could bring benefits to the world’s most populous nation.

At the AI Impact Summit, Prime Minister Narendra Modi lauded three new models released by Indian companies, along with other examples of the country’s rising profile in the field.

“All the solutions that have been presented here demonstrate the power of ‘Made in India’ and India’s innovative qualities,” Modi said Thursday.

One of the startups making a buzz at the five-day summit attended by world leaders and top technology CEOs was Sarvam AI, which this week released two large language models it says were trained from scratch in India.

Its models are optimised to work across 22 Indian languages, says the company, which received government-subsidised access to advanced computer processors.

The five-day summit, which wraps up Friday, is the fourth annual international meeting to discuss the risks and rewards of the fast-growing AI sector.

It is the largest yet and the first in a developing country, with Indian businesses striking deals with US tech giants to build large-scale data centre infrastructure to help train and run AI systems.

Another Indian company that drew attention with a product debut this week was Bengaluru-based Gnani.ai, which introduced its Vachana speech models at the summit.

Trained on more than a million hours of audio, Vachana models generate natural-sounding voices in Indian languages that can process customer interactions and allow people to interact with digital services out loud.

Job disruption and redundancies, including in India’s huge call centre industry, have been one key focus of discussions at the Delhi summit.

– ‘Biggest market’ –

The government-supported BharatGen initiative, led by a group based at a university in Mumbai, also released a new multilingual AI model this week.

So-called sovereign AI has become a priority for many countries hoping to reduce dependence on US and Chinese platforms while ensuring that systems respect local regulations including on data privacy.

AI models that succeed in India “can be deployed all over the world”, Modi said on Thursday.

But experts said the sheer computational might of the United States would be hard to match.

“Despite the headline pledges, we don’t expect India to emerge as a frontier AI innovation hub in the near term,” said Reema Bhattacharya, head of Asia research at risk intelligence company Verisk Maplecroft.

“Its more realistic trajectory is to become the world’s largest AI adoption market, embedding AI at scale through digital public infrastructure and cost-efficient applications,” she said.

Prihesh Ratnayake, head of AI initiatives at think-tank Factum, told AFP that the new Indian AI models were “not really meant to be global”.

“They’re India-specific models, and hopefully we’ll see their impact over the coming year,” he said.

“Why does India need to build for the global scale? India itself is the biggest market.”

And Nanubala Gnana Sai, a MARS fellow at the Cambridge AI Safety Institute, said that homegrown models could bring other benefits.

Existing models, even those developed in China, “have intrinsic bias towards Western values, culture and ethos — as a product of being trained heavily on that consensus”, Sai told AFP.

India already has some major strengths including “technology diffusion, eager talent pool and cheap labour”, and dedicated efforts can help startups pivot to artificial intelligence, he said.

“The end-product may not ‘rival’ ChatGPT or DeepSeek on benchmarks, but will provide leverage for the Global South to have its own stand in an increasingly polarised world.”


German broadcaster recalls correspondent over AI-generated images


By AFP
February 20, 2026


ZDF said the damage done to its editorial reputation was 'considerable' - Copyright AFP/File Tobias SCHWARZ

German public broadcaster ZDF on Friday recalled a New York correspondent after AI-generated images were screened during a news report on ICE immigration raids in the United States.

ZDF said its journalist Nicola Albrecht, 50, used video taken from the internet in a report on children terrified by US Immigration and Customs Enforcement operations.

One clip was AI-generated and not labelled as such, and another in fact showed a Florida arrest from 2022.

“The damage caused by disregarding journalistic rules is considerable,” ZDF editor-in-chief Bettina Schausten said in a statement. “At its core, this is about the credibility of our reporting.”

Albrecht’s original report broadcast on February 13 was accurate, ZDF said, but an updated version broadcast on the February 15 edition of the flagship nightly news programme contained the two misleading clips.

Presenter Dunja Hayali had introduced the segment saying the Trump administration’s immigration raids had created “a climate of fear that doesn’t even stop at children”.

One clip could be seen to feature the watermark of Sora, OpenAI’s platform that generates short video clips based on prompts.

“The AI-generated material should not have been used without journalistic justification and without being categorised according to ZDF’s internal rules for the use of AI-generated material,” the broadcaster said.

Journalists have been caught out before by synthetic content.

Publications including Wired and Business Insider in August withdrew features purportedly written by a freelance journalist following concerns they were in fact written using generative artificial intelligence.

In January, AFP factcheckers found that an image carried by ZDF purporting to show former Venezuelan president Nicolas Maduro after his capture by US soldiers was AI-generated.
