By AFP
February 26, 2026

Amazon is powering artificial intelligence with custom Trainium chips designed especially for machine learning - Copyright AFP Mark Felix
Moisés ÁVILA
Tech titan Amazon is working to step out of Nvidia’s shadow with custom “Trainium” chips designed specially for machine learning as billions of dollars are poured into artificial intelligence (AI).
Amazon subsidiary Annapurna Labs in Austin, Texas, was testing the longevity of its latest generation Trainium during a recent visit by AFP to the facility.
Texas is emerging as a US tech world El Dorado, luring investments with cheap energy, relaxed regulations, tax incentives and reasonably affordable real estate for massive data centers.
Amidst a deafening roar, UltraServers packed with 144 of the Trainium AI-accelerator chips were being put through their paces at Annapurna in a routine check prior to delivery.
After years of relying on suppliers for chips, the e-commerce powerhouse’s Amazon Web Services (AWS) cloud computing unit began designing its own, acquiring Israeli startup Annapurna Labs in 2015.
First came Graviton and Inferentia chips in 2018, the former for general cloud computing and the latter for powering AI models.
The first Trainium debuted in 2020, followed by a second generation that touted a big boost in performance.
Trainium 3 chips put into action in December are touted as doubling the capabilities of the second generation despite being smaller than a credit card.
Kristopher King, head of the Annapurna lab in Austin, contended that the latest Trainium chips can cut the cost of developing and running generative AI models by as much as 40 percent compared to using graphics processing units (GPUs) that are now deemed the “gold standard” for AI.
– Failure not an option –
Along with pricing Trainium chips competitively, AWS is out to make reliability a selling point since data centers need to operate non-stop for long stretches at a time.
AI development requires hundreds of thousands of chips operating simultaneously for weeks, according to Annapurna head of engineering Mark Carroll.
“If there’s a failure or unavailability during this phase you have to go back, or even start from scratch,” Carroll said.
Unlike other major players in AI processors, AWS doesn’t sell its chips.
Instead, AWS uses Trainium exclusively in its own data centers, leasing computing capabilities to customers.
AWS opted to customize its chips to harmonize them with its software, particularly its Bedrock platform, which lets customers choose from a wide range of competing AI models including those from Anthropic, OpenAI and other rivals, according to the lab.
Trainium is positioned as a cost-saving option in an AI market considered “supply constrained” because of insatiable appetite for high-performance GPUs from industry leader Nvidia and competitors such as AMD.
Even though Trainium 3 is only a few months old, Annapurna is already designing a new generation of the chip.
A launch date for Trainium 4 has yet to be disclosed, but Carroll says it will have six times the processing performance of its predecessor.
As Google, Microsoft, OpenAI, Meta and other tech rivals race to field ever-improved AI models, pressure is intense for chips to make the technology smarter, faster, cheaper and less power-hungry.
Nvidia began manufacturing its industry-leading Rubin graphics processing unit less than a year after the release of the then top-of-the-line Blackwell.
The first version of Trainium took about 18 months to create, while the second generation was readied in nine months and Annapurna is “trying to maintain that pace”, Carroll said.
Op-Ed: The sheer naivete of the AI hype is almost beyond belief
By Paul Wallis
EDITOR AT LARGE
DIGITAL JOURNAL
February 26, 2026

Image: — © AFP
Nobody in the tech world is as unimpressed with AI as the experts. In a recent case, an Australian Woolworths AI agent called Olive digressed into a chat about its mother while a customer was trying to place an order. That’s already folklore.
The excuse was that an employee was trying to make Olive sound more human. Pretty lame. Utterly useless. I’ve used Olive. It works pretty well. It doesn’t need to be the chatty one to do its job, either.
It also asked for a date of birth during this frolic among the futile. That’s a security risk. Olive doesn’t need to do that, either, which is hardly reassuring. Fortunately, the situation was brought under control without any major issues.
Let’s join a couple of dots. A confused customer is being asked for security information by an AI agent that’s obviously not working properly. See any logical inferences, like a compromised AI agent, perhaps?
Globally, the world is stocking up on AI agent mistakes. Big, expensive mistakes in some cases. If you read this enchanting litany from current headlines about AI agent errors, it’s not hard to see the demolition derby at work.
There’s a cascade of events from any AI agent error:
An AI agent makes a mistake.
Business resources, time, and very probably money, are diverted at scale to fix the error.
Any range of legal liabilities may occur.
Forbes took the time to spell out the risks of AI agents for businesses.
Read this Forbes article like a training manual.
They even took the time to pin down the ever-more-blurry line between AI agents and chatbots. Chatbots can take some actions, so that old distinction with agents doesn’t work anymore. To their credit, Forbes also took the time to spell out the security risks of AI agents for businesses. This information needs to be read in context with the overall view of AI agents and their issues.
Now I can get to the point of this article.
A few points to be made:
There’s nothing even slightly academic about AI agent risks.
These clusters are happening right now and getting worse.
They’re all potentially expensive, and if you like the idea of “a class action with every client interaction,” you’ll have a great time.
The sheer naivete of the AI hype is almost beyond belief.
Almost.
That is, if you happen to discount the culture of righteous incompetence that plagues corporate psychology. If you think that whole sectors full of tech-illiterate bozos can do your AI acquisition and get it right, think again.
You’d have to be dumber than a US political bot to trust any of the AI agent mystique we’re getting bombarded with every day. It’s all crap.
Every. Single. Word.
You’d have to be a lot stupider than a house brick to assume trusting your business to AI agents is safe.
There’s already a subset of AI science devoted to preventing and fixing AI errors, simply because they’re so common.
You need:
Strict quality control and support in ironclad AI contracts.
Performance specifications.
Good in-house system fixes for issues with transactions and clients.
Do not assume you can disclaim any liability for anything simply because the AI malfunctioned.
You can’t. Don’t try telling a court you’re not liable, either. They’ll tell you in no uncertain terms who’s deciding if you’re liable. Offloading a liability to a contractor is also a very shaky option. It’s your business, not theirs, with your clients.
Expect trouble and make sure you can avoid it.
____________________________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.
