Sunday, March 22, 2026

AI skills now boost pay across most sectors of the economy


By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
March 16, 2026


Image: © AFP

AI skills provide a wage boost, at least for now, according to a new survey. This could change as the technology becomes commonplace and easier to use, but currently, in most sectors of the economy, there is a premium for those most familiar with working with artificial intelligence.

With an estimated 5% of workers currently AI-fluent, those who possess such skills may be gaining a salary advantage as businesses rush to adopt AI.

In an analysis of more than one million UK job listings for their AI statistics report, GEO agency Reboot Online identified a surge in AI roles, alongside a higher average salary in almost every industry. The results have been shared with Digital Journal.

In particular, AI skills command a salary premium in 79% of UK industries. This is a skill in demand: Almost 1 in 3 job listings (31.8%) now mention AI skills.

In terms of the size of the premium, across sectors where AI skills are rewarded, workers earn an average of £4,827 more, and in some sectors the premium reaches £20,000. A technical understanding of AI is not necessary to command these premiums: the most common terms relate to AI assistants rather than machine learning or LLMs. This is also a key reason why the premium may not last long, since the level of skill actually required is arguably overstated.

Low tech understanding required

Among the most frequently used AI terms are AI assistants and tools, straying away from technical AI expertise. AI familiarity and general experience are enough to access the salary premium in most job descriptions. The five most mentioned AI skills are ‘AI assistant’, ‘AI automation’, ‘deep learning’, ‘ML’ and ‘decision automation’.

The research revealed that, on average, AI-skilled roles across the UK pay £2,930 more (+5.8%), and 79% of UK industries (30 out of 38) pay more for AI skills, including non-profits, fitness and manufacturing. Among the industries where AI commands a premium, the average uplift is £4,827 (10.54%), almost double the overall figure.

Which sectors value AI skills most?

Industry | AI jobs average salary | Non-AI jobs average salary | Salary difference | Percentage difference
1. Non-Profit & NGO | £76,864.17 | £55,934.00 | £20,930.17 | 37.42%
2. Quality Assurance | £57,302.54 | £44,583.43 | £12,719.11 | 28.53%
3. IT & Networking | £54,597.05 | £45,158.68 | £9,438.37 | 20.90%
4. Manufacturing | £42,223.94 | £36,709.12 | £5,514.83 | 15.02%
5. Logistics & Supply Chain | £49,393.12 | £42,969.38 | £6,423.74 | 14.95%
6. Data Science | £55,840.45 | £48,779.30 | £7,061.16 | 14.48%
7. Fitness & Wellness | £43,688.93 | £38,205.81 | £5,483.12 | 14.35%
8. Product Management | £71,642.54 | £63,196.88 | £8,445.66 | 13.36%
9. Software Engineering | £64,949.31 | £57,756.30 | £7,193.02 | 12.45%
10. Cybersecurity | £64,277.53 | £57,569.81 | £6,707.72 | 11.65%

Shai Aharony, CEO of Reboot Online, tells Digital Journal: “We’re seeing salary premiums as a result of AI in industries you wouldn’t necessarily expect. It’s universal across almost every sector and job role in 2026 and our reliance is only expected to grow.”

In contrast, some sectors appear slow to take up AI – or at least slow to recruit externally for this skill set.



Which sectors value AI skills least?

Industry | AI jobs average salary | Non-AI jobs average salary | Salary difference | Percentage difference
1. Healthcare | £30,623.61 | £39,744.05 | -£9,120.43 | -22.95%
2. Education & Teaching | £34,156.22 | £42,043.69 | -£7,887.46 | -18.76%
3. Legal | £46,950.36 | £50,952.93 | -£4,002.57 | -7.86%
4. Retail | £27,942.85 | £31,635.01 | -£3,692.16 | -11.67%
5. Sales | £34,083.95 | £37,069.65 | -£2,985.71 | -8.05%
6. Translation & Localisation | £54,200.22 | £57,136.95 | -£2,936.73 | -5.14%
7. Food & Hospitality | £28,774.54 | £30,400.20 | -£1,625.66 | -5.35%
8. Construction | £44,563.20 | £45,809.09 | -£1,245.88 | -2.72%
9. Arts & Entertainment | £39,741.02 | £40,257.91 | -£516.89 | -1.28%
10. Customer Service | £31,879.13 | £31,519.52 | £359.61 | 1.14%

The data was gathered by scraping various UK job listing websites. This resulted in more than a million job advertisements whose posting dates range from January 2021 to January 2026.
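The premium arithmetic behind the tables above is straightforward to reproduce from listing-level data. Here is a minimal sketch in Python, using hypothetical listings rather than Reboot Online's actual dataset (the field layout and salary figures are illustrative assumptions, not their pipeline):

```python
from statistics import mean

# Hypothetical scraped listings: (industry, advertised_salary, mentions_ai_skills)
listings = [
    ("Non-Profit & NGO", 76000, True),
    ("Non-Profit & NGO", 56000, False),
    ("Healthcare", 30500, True),
    ("Healthcare", 39800, False),
]

def ai_premium_by_industry(rows):
    """Per industry: (absolute premium, percentage premium) of AI vs non-AI roles."""
    report = {}
    for ind in {industry for industry, _, _ in rows}:
        ai = [s for i, s, flag in rows if i == ind and flag]
        non_ai = [s for i, s, flag in rows if i == ind and not flag]
        if ai and non_ai:  # need both groups to compute a premium
            diff = mean(ai) - mean(non_ai)
            report[ind] = (diff, 100 * diff / mean(non_ai))
    return report

for industry, (diff, pct) in sorted(ai_premium_by_industry(listings).items()):
    print(f"{industry}: £{diff:+,.2f} ({pct:+.2f}%)")
```

The percentage column in the tables follows the same convention: the salary difference divided by the non-AI average, which is why Healthcare's roughly £9,120 gap reads as about -23%.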
Q&A: Trusted data and consumer-first AI creates conversational financial guidance


By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
March 19, 2026


Climate finance will be at the top of the agenda at the upcoming COP29 in November - Copyright AFP Marvin RECINOS

Experian recently announced the next evolution of its Experian Virtual Assistant, EVA, a significant advancement in its Consumer First AI strategy that expands personalized, conversational financial guidance to millions of consumers.

To understand more about a trusted-data and consumer-first approach to AI, Digital Journal spoke with Experian’s Debbie Hsu, who serves as Executive Vice President of Product for Experian Consumer Services. Hsu leads product strategy, innovation, and the development of consumer-focused financial tools.

Digital Journal: There are lots of buzzwords spinning around AI these days. What do you mean by “Consumer First AI”?

Debbie Hsu: You are right that AI can mean many different things depending on who is using the term. For us, Consumer First AI has a very practical focus. It is about using data and artificial intelligence to bring decision-ready clarity to financial information so people can quickly understand what it means and what to do next, while also making it more personalized for each individual.

Every day, people turn to our Experian Virtual Assistant, EVA, with real world questions such as:


• How do I freeze or unfreeze my credit report?
• What is Experian Boost and what bills qualify?
• Why did my credit score change?
• How can I improve my score?
• What credit cards do I qualify for?
• How does my spending look?

These everyday conversations help us continuously improve how we deliver guidance. Consumer First AI means meeting people where they are in their financial journey, whether they are building credit for the first time, recovering from a setback, or exploring new financial opportunities.

Our approach is built on four key drivers:

• Experian’s trusted, proprietary data foundation
• Advanced artificial intelligence capabilities built on more than 15 years of AI product innovation
• Adaptive insights from daily engagements with members, with EVA available to more than 85 million members
• Experian Marketplace, which delivers personalized credit card, loan, insurance, and other financial offers that match an individual’s financial profile with the partner’s approval criteria

Ultimately, Consumer First AI reflects our commitment to applying data and advanced technology responsibly and at scale to translate complex financial information into meaningful understanding. Our goal is to consistently deliver decision-ready clarity that empowers people to move forward with confidence, make informed financial choices, and take greater control of their financial future.

DJ: How is this different from financial guidance that someone may receive from popular Large Language Model consumer apps?


Hsu: Some general purpose AI applications are beginning to connect to consumer permissioned financial account data. However, they are not designed as regulated financial services platforms.


EVA is differentiated by how it integrates trusted, structured credit data with consumer permissioned transaction data, Experian’s credit expertise and Marketplace integration. This enables EVA to deliver insights and guidance grounded in a member’s complete financial context, including credit behaviour, score drivers, and lender relevant eligibility signals.

Because EVA operates within Experian’s regulated and privacy-controlled data environment, it can explain why a credit score changed based on specific factors, connect those changes to real world financial behaviours, and surface insights designed to help improve credit health over time.

Importantly, EVA also bridges insight to action. Experian works directly with credit card issuers and lenders to understand exactly how their products work and what consumers are the best match for a given product. By aligning consumer credit data with partner-provided underwriting and eligibility criteria, it can present personalized, pre-approved credit card and lending offers within the Experian Marketplace. These are not broad suggestions but personalized offers based on a consumer’s financial profile and lender requirements.

While general purpose AI tools generate helpful conversational responses, EVA is purpose built to provide explainable, actionable financial guidance powered by trusted data, credit expertise, and Marketplace integration.

DJ: How does EVA arrive at the guidance that it provides?

Hsu: EVA’s insights and decision support are built on Experian’s credit expertise and trusted data foundation. It draws from credit profile data, consumer permissioned financial information, and curated, high quality financial education designed to reflect real world financial behaviours and outcomes.

The system does not operate as a black box. The insights are rooted in established credit principles, and the same types of factors lenders consider. This approach ensures transparency and credibility, giving members clarity into why they are seeing specific insights and what influences their financial profile.

Member interactions shape the experience in real time. As individuals engage, EVA surfaces relevant explanations and decision support aligned with their goals, whether focused on building credit, strengthening a score, managing debt, or preparing for a major financial milestone.

If a member’s credit score declines, for example, EVA can pinpoint contributing factors within the credit file, such as higher utilization or a recent delinquency. It then translates those factors into clear explanations and practical habits that can help improve financial momentum over time.

By combining advanced AI with consumer first design and trusted data, EVA expands access to personalized financial insights. The result is more informed decision making and a stronger foundation of trust, reinforcing Experian’s role as a reliable financial partner when it matters most.

DJ: Please explain the concept of a Big Financial Friend (BFF). This is represented in an ad campaign, correct?

Hsu: Yes, we launched the Big Financial Friend campaign in May 2025 to highlight how Experian provides tools and resources to help consumers with their overall financial health. We wanted to personalize what it looks like for Experian to come alongside a consumer to truly support them in achieving their financial goals, always being there for them.

We arrived at a larger-than-life, transcendent Big Financial Friend played by actor Sam Richardson. He portrays a literal giant who acts as a personal Big Financial Friend, a BFF, to the main character of the ad as she makes financial decisions throughout her life.

Our BFF campaign communicates to consumers that Experian can be a close and strong financial helper that has their back throughout their financial life. It marks the beginning of a brand transformation that highlights the variety of tools and support we offer in a consumer-accessible way.
What does Agentic AI mean for today’s businesses?


By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
March 19, 2026


Many insurers remain reluctant to cover business mistakes made by ‘agentic’ AI programmes housed in data centres - Copyright AFP YASUYOSHI CHIBA

The “Agentic AI era” has arrived: This represents a shift in AI framed around autonomy, with the technology promising to execute tasks like lead research, intent scoring, and meeting preparation with minimal human intervention.

Whilst this is evidently exciting, experts warn that unsupervised AI can generate spammy outreach, compliance risks, and brand reputation issues.

Agentic AI refers to autonomous systems that go beyond generating content to actively achieving goals, making decisions, and managing multi-step workflows with limited human oversight. Unlike passive, prompt-driven AI, these systems, often powered by Large Language Models (LLMs), act as “digital employees” that plan, use tools, and adapt to feedback in real time to solve complex tasks.

The Head of Growth/AI Product at LeadsNavi – Raphael Yu – has explained to Digital Journal how B2B marketers can leverage agentic AI for measurable pipeline lift without harming their brand, emphasizing human-in-the-loop governance, clear KPIs, and responsible integration.

Agentic AI in Sales: Where Automation Helps, and Where It Hurts

The marketing world is entering what some analysts call the “Agentic AI era.” A major AI model launch has explicitly positioned autonomy (AI acting on behalf of humans) as the next competitive battleground for sales and marketing.

What “Agentic AI” Means for Marketing

Agentic AI refers to models capable of autonomously executing tasks traditionally done by humans, such as:

• Researching prospects and accounts
• Scoring leads based on intent and engagement
• Drafting personalized outreach
• Scheduling meetings or follow-ups

“Autonomy is exciting,” says Yu. “It allows teams to accelerate pipeline development, but unchecked use can backfire: sending irrelevant messages, violating compliance, or damaging brand trust.”

Where AI Adds Real Pipeline Value

Examples, from Yu, include:

• Research & Insights: AI can quickly scan websites, news, and social signals to identify buying intent.
• Intent Scoring: Machine learning helps prioritize leads most likely to convert.
• Meeting Prep & Briefings: AI drafts concise summaries, saving sales teams hours per week.

These uses translate directly to measurable pipeline lift without compromising brand integrity.

Where Agentic AI Can Hurt

Autonomy without oversight carries risks:

• Spammy Outreach: Automated messaging can create negative customer experiences.
• Compliance & Privacy: Unsupervised AI could violate GDPR, CCPA, or internal policies.
• Brand Reputation: Poorly crafted messaging can erode trust in high-value accounts.

“Agentic AI must operate with human-in-the-loop checkpoints,” Yu emphasises. “Every automated action should have governance, clear oversight, and measurable KPIs to ensure value without harm.”

Best Practices: Agentic Outbound Without Brand Damage

• Implement human review for all high-impact communications
• Track pipeline lift separately from raw activity to measure real impact
• Use AI for research, scoring, and prep, not unsupervised outbound
• Enforce compliance rules and privacy safeguards in every workflow
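Yu's human-in-the-loop checkpoint can be made concrete in code. Here is a minimal sketch of an approval gate in which AI-drafted outreach is queued for human review and nothing is sent automatically (the class and method names are illustrative, not any vendor's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    account: str
    message: str
    approved: bool = False

@dataclass
class OutreachQueue:
    """AI drafts go in; only human-approved drafts ever leave the queue."""
    pending: list = field(default_factory=list)
    sent: list = field(default_factory=list)

    def submit(self, draft: Draft):
        self.pending.append(draft)   # the agent may draft, never send directly

    def approve(self, draft: Draft):
        draft.approved = True        # the human checkpoint

    def dispatch(self):
        for d in list(self.pending):
            if d.approved:           # unapproved drafts stay pending
                self.pending.remove(d)
                self.sent.append(d)

queue = OutreachQueue()
draft = Draft("Acme Corp", "Hi, noticed your team is scaling...")
queue.submit(draft)
queue.dispatch()          # sends nothing: not yet approved
queue.approve(draft)
queue.dispatch()          # sends after human sign-off
print(len(queue.sent))    # prints 1
```

The design choice mirrors the governance point: the agent's autonomy ends at drafting, and the dispatch step only moves messages a human has explicitly signed off, which is where KPIs and audit logging would attach in a real workflow.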

By adopting these practices, sales and marketing teams can maximize productivity while protecting brand credibility.

The “Agentic AI era” promises faster, smarter lead generation, but the line between efficiency and risk is thin. Yu recommends a measured, governance-driven approach, ensuring AI accelerates pipeline while preserving compliance and customer trust. The real opportunity lies in autonomous assistance, not reckless automation.

Autonomy can accelerate pipeline development by handling tasks like research, intent scoring, and meeting preparation. These are measurable, high-value contributions that save teams time and increase efficiency. However, if agentic AI is used for unsupervised outbound messaging, it can create spam, compliance violations, and ultimately damage the brand.

There is great importance in human-in-the-loop checkpoints. Every automated action should have oversight, clear governance, and KPIs tied to actual pipeline impact. By separating where AI adds real value from where it can harm, marketers can safely adopt agentic AI without risking their reputation.

Yu recommends using agentic AI to assist, not replace, human judgment. For example: AI can analyse accounts, prioritise high-intent leads, and summarise research for sales teams, while humans approve messaging and maintain customer relationships.

Snowflake takes aim at AI’s follow-through problem



By Jennifer Kervin
DIGITAL JOURNAL
March 18, 2026


Photo by Visual Tag Mx

Most business leaders have a getting-stuff-done problem.

Everywhere they look, there’s data. There’s no shortage of dashboards, reports, or AI-generated insights. But someone still has to figure out what matters, pull the right data, build the output, and chase people for input along the way.

That work slows everything down, which is a no-no when you’re looking to scale.

Snowflake’s new Project SnowWork, now in research preview, is built with this battle in mind as enterprise AI that’s starting to move beyond analysis and into execution.

“We are entering the era of the agentic enterprise, ushering in a fundamentally new way to work,” says Sridhar Ramaswamy, CEO of Snowflake. “Project SnowWork looks to put secure, data-grounded AI agents on every desktop, so business leaders and operators can move from question to action instantly.”

For the past few years, across enterprise AI, the trend was for tools to focus on helping companies analyze data faster. The result has been more insight, but not always more progress.

SnowWork is designed to sit in that last mile, acting as a “proactive AI collaborator.”

“It’s about unlocking new levels of productivity and efficiency by embedding intelligence directly into the operating fabric of the enterprise,” Ramaswamy adds.
Why the ‘last mile’ matters more than the model

Many companies have already invested heavily in AI tools and data platforms.

But those investments often stall when it comes to everyday use. Employees still file requests with data teams, reports still take days, and decisions still move slower than they should.

There’s a gap between the optimism of news headlines proclaiming AI as a productivity saviour, and full buy-in for core operations.

“Enterprises have invested heavily in data platforms and AI, yet the last mile of translating governed data into everyday business outcomes remains largely manual,” says Sanjeev Mohan, principal at SanjMo.

In Project SnowWork, Snowflake is moving from “a system of insight to a system of action, which is where measurable business value is ultimately realized,” Mohan added.

In a new blog post, Ramaswamy argues that the real opportunity for enterprise AI is expanding access beyond technical teams, putting data and decision-making directly into the hands of business users across the organization.

Sridhar Ramaswamy, CEO of Snowflake (Photo courtesy of Snowflake)

“As adoption grows, a problem is emerging,” he writes. “These agents operate without shared context, governance, or coordination, making them fragmented and difficult to trust.”

SnowWork tries to address that by combining planning, analysis, and execution into a single system. It can query data, run analysis, generate outputs, and suggest next steps in one interaction.

For a sales operations team, that could mean building reports and presentations in minutes instead of days. For a marketing leader, it could mean identifying campaign gaps and generating recommended actions without waiting on another team.

It’s long past time we admit that the constant paper shuffle between teams, all in the name of completing what would otherwise be a simple project, is deeply silly.

The value in such an AI collaboration tool is reducing the coordination required to get work done.
What this changes for business leaders

In most organizations, getting from question to action still involves multiple handoffs.

Tools like SnowWork take out the ‘now what.’

First, they change who can act on data.

Tools like SnowWork are designed for non-technical users, reducing dependency on centralized data teams for everyday questions, a bottleneck that many organizations still struggle with.

That has implications for team structure. If more employees can generate and act on insights directly, the role of centralized data teams may shift toward governance, oversight, and complex problem-solving.

Second, it compresses decision cycles.

When reporting, analysis, and execution happen in one flow, timelines shrink. Decisions that once took days can happen in hours, or minutes in some cases.

That sounds incremental, but at scale it changes how organizations operate. Faster feedback loops often lead to more experimentation, quicker adjustments, and less reliance on static planning.

Third, it raises new governance questions.

SnowWork is built on “governed enterprise data,” with role-based access controls and auditability built in.

“In each case, intelligence is not just producing recommendations, it is driving action, within enterprise-defined boundaries,” Ramaswamy explains.

If AI systems are going to execute tasks, not just suggest them, companies need to be clear about who can do what, with which data, and under what conditions.

That means AI strategy is now about control.
Where work starts to move

Enterprise AI is moving into a new phase.

The first phase was about access to data. The second was about generating insights. The next phase is about execution.

SnowWork positions AI as a system that helps complete work on users’ behalf, like a “control plane” for enterprise AI. Such systems coordinate actions across data, models, and applications instead of simply returning answers.

That idea is often described as the “agentic enterprise,” where systems can plan and carry out tasks with minimal human intervention.

“Enterprises need more than models and applications,” he writes. “They need a coordinating layer, a central control plane that aligns intelligence, enterprise data, policy, and execution across the organization to drive agentic cohesion.”

In other words, companies need AI to “show, don’t tell.”

Final shots

The real bottleneck in AI adoption is execution. Tools that close that gap will shape the next phase of productivity.

Leaders should focus more on how work flows across teams, not just adding the latest tools. That is where most of the friction still lives.

Governance will move to a frontline priority as AI systems begin to take action instead of simply offering recommendations.




Written by Jennifer Kervin
Jennifer Kervin is a Digital Journal staff writer and editor based in Toronto.
Nvidia rides ‘claw’ craze with AI agent platform

By AFP
March 16, 2026


Nvidia's chief Jensen Huang calls OpenClaw 'the operating system for personal AI' - Copyright AFP Patrick T. Fallon

Nvidia announced Monday that it was joining the OpenClaw craze, unveiling tools to bring AI agents — which can manage your email, files and calendar while you sleep — into the corporate world.

OpenClaw has taken Silicon Valley and tech-savvy users across the globe by storm, sparking “lobster fever” in reference to its red crustacean mascot, with many of the biggest names in tech convinced the AI agent is redefining computing.

But security concerns have dogged its rise, prompting the Chinese government to block state enterprises from using the tool. Nvidia is betting it can address those fears.

“Mac and Windows are the operating systems for the personal computer. OpenClaw is the operating system for personal AI,” Nvidia’s chief executive Jensen Huang said in a statement.

“This is the moment the industry has been waiting for — the beginning of a new renaissance in software,” he added.

The chipmaker unveiled tools designed to add security and privacy controls to these AI agents, called “claws,” that run directly on a person’s computer and execute complex tasks without constant human oversight.



– Stunning success –



Unlike ChatGPT or other chatbots that simply answer questions, claws act independently and around the clock and can even be asked to create apps or programs from scratch.

The craze traces back to a weekend coding project by Austrian developer Peter Steinberger, who has since been hired by OpenAI.

In late 2025 he released a self-hosted AI assistant called Clawdbot — a nod to Anthropic’s Claude chatbot — that could be messaged through WhatsApp or Telegram and would quietly get to work on tasks in the background.

The response was immediate and overwhelming, with developers reporting they had stayed up all night finding new ways to exploit the tool, which can also be asked to write standalone software programs from simple text prompts.

After Anthropic filed a trademark infringement complaint, Steinberger renamed the project twice in quick succession, landing on OpenClaw.

The rebranding chaos generated only more headlines, and within months it had become the fastest-adopted open-source project in history.

But the technology’s explosive spread has alarmed security researchers and corporate IT departments wary of employees inadvertently exposing company systems to hackers or causing disruption.

Several technology heavyweights have barred staff from running claw agents on work machines, and China’s government has restricted state enterprises from using the platform over data security fears.

Nvidia, the world’s most highly valued company on Wall Street, is seeking to turn those concerns to its advantage.

The company launched the Nvidia Agent Toolkit — a suite of open-source models and software for building enterprise AI agents — anchored by a new security layer called OpenShell that enforces network and privacy guardrails.

Adobe, Salesforce, SAP and Siemens are among the major software companies that said they are building on Nvidia’s new platform.
Music popstar will.i.am meshes AI and ‘micromobility’


By AFP
March 18, 2026


Will.i.am says his Trinity 'micromobility' vehicles turn commutes into collaborations with AI agents - Copyright AFP JOSH EDELSON

Black Eyed Peas star will.i.am is putting artificial intelligence agents to work in three-wheel vehicles tailored for modern urban life.

The musician turned tech entrepreneur demonstrated a so-called autocycle called Trinity at Nvidia’s annual developers conference that ends Thursday in the heart of Silicon Valley.

“I’m an artistic creator because of tech,” will.i.am told AFP.

“Creating with musical teams is great, but hopping into a different realm and being hyper creative with full-stack developers, electrical engineers, mechanical engineers, world builders — that is the ultimate level of creativity.”

His Trinity startup is named for an alignment of human, vehicle and agentic AI.

The single-passenger electric vehicle, which shares its name with the startup, lets a human do the driving but is infused with an AI agent that acts as a virtual assistant for conversation-based collaboration on the move, will.i.am said.

“When a human has an agent of their own, a company has a super employee,” he said of brainstorming and delegating tasks to Trinity AI agents conversationally while commuting.

“Their vehicle that got them to work is a part of their tool set; and it’s working in the parking lot while they work,” he added, referring to Trinity as “brains on wheels.”

The vehicle, designed to accelerate quickly from zero to 60 mph (96 kmh), uses an Nvidia graphics processor to power built-in AI that can interpret and reason about the world around it, according to the startup.

The vehicles are to be made in a Los Angeles facility that will also serve as a school for robotics and agentic AI systems.

“I was ambitious, audacious and a little bit of naive,” will.i.am said of pursuing the project.

“That’s a good combination, because if you don’t have that little bit of naive and everything is skeptical, you probably wouldn’t take crazy risks.”

An initial production run of 500 units is planned, with an aim to begin deliveries in August of next year and to keep the vehicle’s price under $30,000.

Jury signals tech titans on hook for social media addiction


By AFP
March 20, 2026


Parents adamant that social media has harmed their children are among those awaiting a verdict in the social media addiction trial taking place in Los Angeles - Copyright AFP Patrick T. Fallon

Romain FONSEGRIVES

A question from jurors in a landmark social media addiction trial on Friday signaled that Meta or YouTube may have to pay for letting a girl get hooked on their platforms.

The jury’s first full week of deliberations ended with the panel sending the judge a query related to calculating damages in the case, which is expected to set a precedent for thousands of similar suits in the nation.

“We don’t start dancing in the streets over what seems to be a good question,” said plaintiff’s attorney Mark Lanier.

“But we’re appreciative of the fact that they’re on the issues of damages.”

To turn their attention to damages, enough jurors had to essentially agree that one or both of the accused tech platforms were negligently or harmfully designed and that users should have been warned, according to verdict forms.

Jurors will return to the Los Angeles courthouse on Monday to resume deliberations.

Since jury deliberations began on March 13, the jury has sent questions to the judge related to the plaintiff’s family troubles as well as how much she actually used Meta-owned Instagram as a child.

– Negligent in design? –

The verdict could turn on the question of whether familial strife and other real-world trauma, or YouTube and Meta apps such as Instagram, were to blame for mental woes of the woman who filed the suit.

A 20-year-old California woman identified as Kaley G.M. testified at trial that YouTube and Instagram fueled her depression and suicidal thoughts as a child, telling jurors that she became obsessed with social media, starting with YouTube videos, when she was six.

Under cross examination, however, Kaley also talked about feeling neglected, berated and picked on by family members.

A jury form given to jurors asks the panel to decide whether Meta or YouTube should have known their services posed a danger to children or if they were negligent in design.

If so, jurors are to decide if Meta or YouTube were “substantial factors” in causing Kaley’s woes and how much they should pay in damages.

Whatever the verdict, the trial highlights “an important tension” between social media platforms and vulnerable young internet users, reasoned University of Pittsburgh marketing professor Vanitha Swaminathan.

“The platforms have to address the concerns of this important segment,” Swaminathan told AFP.

The lawsuit is one of hundreds accusing social media firms of luring young users into becoming addicted to their content and potentially suffering from depression, eating disorders, psychiatric hospitalization and even suicide.

Internet titans have long shielded themselves with Section 230 of the US Communications Decency Act, which frees them of responsibility for what social media users post.

However, this case argues that the firms are responsible for defective products, with business models designed to hold people’s attention and to promote content that can harm their mental health.

The outcome of the trial is expected to establish a precedent for resolving other lawsuits that blame social media for fueling an epidemic of mental and emotional trauma.

EU lawmakers back ban on sexualised AI deepfakes


By AFP
March 18, 2026


The EU move comes after an outcry over sexualised deepfake images of women and minors created by AI chatbot Grok - Copyright AFP/File Pablo VERA

EU lawmakers on Wednesday approved a ban on AI systems generating sexualised deepfakes, after a global outcry over non-consensual nudes created by Elon Musk’s chatbot Grok.

Backed by EU member states last week, the ban targeting so-called “nudification” apps is being introduced as part of proposals to amend the bloc’s rules on AI.

“This is a huge win, especially for women and children in Europe,” said Kim van Sparrentak, a lawmaker with the Greens group.

“Too many people have already woken up one day in despair after finding deepnudes of themselves, feeling violated, intimidated and hunted.”

Lawmakers in the EU parliament’s civil liberties committee gave it their green light Wednesday, paving the way for approval by the full assembly on March 26.

Michael McNamara, the Irish EU lawmaker leading work on the AI file, said the ban aimed to stamp out “nudification apps without consent, which have caused much pain for the profit of some.”

X, the platform on which Grok is available, in January said it had “zero tolerance” for sexualised deepfakes of children and women, and implemented measures it said would stop the practice after the global outrage.

The European Commission, the bloc’s digital watchdog, in January kickstarted an investigation into Grok under the EU’s online content rules.

The ban will become law after negotiations on a final text including the changes to the AI rulebook between the EU parliament and member states.

Fraudsters are using AI to create ‘pixel-perfect’ fake shopping sites

 
By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
March 17, 2026


Photo by freestocks on Unsplash

Fraudsters are stealing money from shoppers using increasingly sophisticated tactics powered by AI.

Online shopping and auction fraud now accounts for 20% of all online fraud incidents reported to Action Fraud, making it one of the most prevalent threats facing UK consumers. With broader fraud losses hitting £1.17 billion in 2024, driven partly by a 14% spike in unauthorised card fraud, the scale of the problem is increasing.

“The problem is now we have so many AI options, it is easier than ever for scammers to create fake sites, fake images, or fake offers,” warns Lior Pozin, founder of Build Your Store, in a message sent to Digital Journal.

Pozin adds: “What used to take technical expertise can now be generated in minutes using artificial intelligence. This means that more criminals than ever have access to the tools to make their jobs easier, and your life harder.”

Five scams you need to watch out for

According to Pozin:

Fake Website Scams

You search for a product, click what looks like a legitimate retailer’s website, and everything appears normal. The logo is correct, the layout is professional, even the customer reviews seem genuine. But look closer at the URL: is it really the right address?

Fraudsters are creating pixel-perfect copies of legitimate retailer websites, often paying for advertising to ensure their fake sites appear at the top of search results. These cloned sites collect your payment information and personal details, but the goods you’ve ordered never arrive. By the time you realise something’s wrong, the website has vanished.

If there is any doubt about the legitimacy of a website, call the company directly to check; just be sure to get the phone number from an official source.

The Delivery Text That Isn’t

Your phone buzzes. “Your parcel is being held, additional customs fees required.” The message looks official, includes tracking numbers, and creates urgency. But it’s a trap.

Fake delivery notifications are one of the most common scams to hit UK shoppers. These messages contain links to fraudulent payment portals designed to harvest your banking details. Some even mimic the exact formatting and sender names used by legitimate courier companies.

Social Media Storefronts That Disappear

Scrolling through Facebook Marketplace or Instagram Shopping, you spot the perfect gift at an unbeatable price. The seller has hundreds of positive reviews, professional product photos, and responds quickly to messages. What could go wrong?

Scammers are increasingly sophisticated in their use of social media platforms, creating temporary storefronts that look entirely legitimate. They use stolen images, fabricated reviews, and professional communication to build trust, then vanish the moment you’ve transferred payment.

Many scammers are part of “fakebook” networks, where thousands of fake or hacked accounts help to convince you a page, item or opinion is legitimate.

Charity Scams Exploiting Goodwill

The time following the festive period often brings out the best in people, and scammers know it. Fake charity shops and fundraising campaigns have multiplied, particularly around popular causes, collecting donations that never reach the intended beneficiaries.

These scams are particularly cunning because they exploit our desire to help others during the season of giving.

Hijacked Seller Accounts

Sometimes the seller account is real; it’s just not being controlled by its legitimate owner anymore. Criminals are hijacking established seller accounts on major marketplaces, leveraging their positive reputation and transaction history to process fraudulent sales before the real owner even realises their account has been compromised.

AI to drive growth despite geopolitics, Taiwan’s Foxconn says


By AFP
March 16, 2026


Strong demand for AI hardware brought a 24 percent annual net profit jump last year for Foxconn - Copyright AFP I-Hwa Cheng

Taiwanese tech giant Foxconn on Monday said it expected the booming market for artificial intelligence servers to drive growth this year, despite volatility caused by global conflict.

Strong demand for AI hardware fuelled a 24 percent annual net profit jump last year for Foxconn, the world’s largest contract electronics manufacturer.

Energy markets have been roiled by the war in the Middle East, raising concerns for big tech manufacturers, but company chairman Young Liu struck an upbeat tone at an earnings call with analysts.

“Over the past few months, there have been significant changes in tariffs, geopolitics, and global monetary policy,” he said.

“However, driven by the strong growth of AI servers, I believe 2026 will still be a very good year, and we expect to see robust growth.”

Foxconn — also known by its official name Hon Hai Precision Industry — has gone beyond assembling low-margin Apple iPhones to making AI servers for Nvidia along with electric vehicles and robotics.

It’s a move that is paying off as tech firms worldwide race to spend big on training and deploying rapidly evolving AI systems.

In 2025, Foxconn’s net profit came to NT$189.4 billion ($5.9 billion), up from NT$152.7 billion in 2024.

Revenue jumped 18 percent on-year to NT$8.1 trillion, the firm said, just beating the estimates of a Bloomberg survey of economists.

– AI ambitions –

Sky-high tech share results and valuations worldwide have led to concerns of an AI market bubble that could eventually burst.

But Foxconn on Monday forecast “strong AI server demand” with “high double-digit quarter-on-quarter growth” expected for AI rack shipments in the first quarter of 2026.

Liu said the company wanted to become “the most trusted industrial platform of the AI era”.

Cloud and networking services accounted for 40 percent of Foxconn’s business portfolio in 2025, up from 30 percent in 2024.

Meanwhile, smart consumer electronics declined from 46 percent to 38 percent.

Huge global demand for memory chips to use in AI data centres has caused a shortage that is threatening higher prices for everyday gadgets.

“Everyone is concerned about memory shortages and related price hikes” for smart consumer products, Liu said Monday.

But “since our product portfolio is mainly composed of higher-priced models, the impact we’ve observed so far has been relatively limited” while demand has not changed, he added.

Ahead of Monday’s earnings release, Bloomberg Intelligence analyst Steven Tseng told AFP that for Foxconn, “so far the impact from the Middle East conflict appears largely manageable”.

“As the region is not a major market for either AI hardware or smartphones, the main risk is more on costs than demand, driven by higher oil prices and some logistic disruptions,” he said.

Alibaba pins hopes on AI as quarterly net profit drops



By AFP
March 19, 2026


China's tech titans, including Alibaba, are racing to develop AI agents - Copyright AFP HECTOR RETAMAL

China’s Alibaba said Thursday that revenue from AI-related products showed strong momentum, even as the tech giant reported a 66 percent year-on-year drop in quarterly net profit.

Alibaba, which runs some of China’s biggest online shopping platforms, has seen its core e-commerce business squeezed by price wars and sluggish consumption in the world’s second largest economy.

It is ploughing tens of billions of dollars into artificial intelligence — with its shareholders keen to see how the company will approach the tricky task of monetising these huge investments.

“AI is and will continue to be one of our primary growth engines,” CEO Eddie Wu said Thursday, noting that revenue from Alibaba’s Cloud Intelligence Group was up 36 percent on-year in October-December.

Net profit plunged 66 percent to 15.6 billion yuan ($2.2 billion) primarily due to a “decrease in income from operations”, the firm said.

Total revenue for the period stood at 284.8 billion yuan, missing the estimates of a Bloomberg survey of economists.

China’s tech titans, including Alibaba, are racing to develop AI agents — tools that execute real-life tasks such as sending emails or booking flights, touted as the technology’s next frontier after text and image generators.

This week, Alibaba announced an AI agent for businesses called Wukong, currently in beta testing.

It follows the unexpected boom in popularity in China of OpenClaw, an agent tool created by an Austrian researcher that has fascinated programmers worldwide despite cybersecurity concerns.

Alibaba’s open-source “Qwen” AI models are popular with programmers worldwide, and CEO Wu said Thursday that Qwen’s consumer interface had surpassed 300 million monthly active users.

The company is bringing its AI development and services teams together under the so-called “Alibaba Token Hub”, with the restructuring seen as a bid to focus on profitability.

Max Liu, an AI entrepreneur who has worked with several local AI startup teams, told AFP that Alibaba’s “previous structure was too dispersed, making it hard for all departments to work together”.

The OpenClaw phenomenon in China has led big tech companies, including Alibaba, to recognise that “token” — a unit of AI computing power — is becoming a new type of utility, much like water and electricity, Liu said.


China tech giant Tencent bets on AI agents


By AFP
March 18, 2026


So far Tencent has been seen as a cautious artificial intelligence player - Copyright AFP Adek BERRY


Luna Lin, with Katie Forster in Tokyo

Tencent wants to bring artificial intelligence agents into its WeChat social media app, the Chinese tech firm’s president said on Wednesday, a move that could change how hundreds of millions of users interact with the platform in the Asian nation and beyond.

Agents — programmes that execute real-life tasks such as sending emails or booking flights — are being touted as AI’s next frontier after chatbots such as ChatGPT.

Their incorporation into WeChat may alter how people in the world’s second-largest economy use the so-called “super-app” that already boasts social messaging, digital payments and a long list of other features.

Tencent, also the world’s largest video game publisher, reported a 16 percent jump in full-year net profit on Wednesday, with gaming still the main business driver even as it extends its AI push.

The company has sought in recent years to integrate AI into WeChat, known as Weixin in China.

“We hope to create AI agents in Weixin, which could leverage Weixin’s close connection with users,” company president Martin Lau told reporters.

“It will be a highly diverse ecosystem, encompassing mini-programs, content, commerce, social networking and payments,” Lau added, without giving details such as when the service would become available.

On Wednesday, Tencent said net profit for 2025 came to 224.8 billion yuan ($32.6 billion), beating estimates of 221.9 billion yuan in a Bloomberg survey of economists.

The company, which owns the developer of popular eSports including “League of Legends”, has sizeable operations in other sectors from cloud computing to entertainment.

Despite being China’s most valuable tech company by market capitalisation, so far Tencent has been seen as a cautious AI player, although founder Pony Ma has vowed to increase investment in the sector.

“Our highly resilient and cash generative core businesses provides us with the resources to fund our increasing investments in AI,” Ma said in a statement Wednesday.



– Agent fever –



Like its rivals Alibaba, Baidu and ByteDance, Tencent has recently branched out into the world of AI agents with its WorkBuddy app.

The Shenzhen-based company has also been among the Chinese tech giants racing to take advantage of a surge in interest in the country in OpenClaw — an AI agent platform created by an Austrian programmer that has fascinated the tech world.

Tencent and others are offering simplified installation and affordable coding plans to help users host OpenClaw agents on cloud servers.

Earlier this month the company’s cloud computing arm organised an OpenClaw setup event at its headquarters, which drew more than 1,000 attendees, with similar events planned across China.

The increasing capabilities of Tencent’s main large-language AI model, and AI agent tools such as WorkBuddy and new offering QClaw, “are encouraging early signs that these investments will unlock new opportunities”, the company said.

The Financial Times reported this month that the White House was debating whether Tencent’s investments in US and Finnish gaming groups pose a national security risk.

Discussions over its stakes in “Fortnite” creator Epic Games, Riot Games and Supercell revolve around the implications for US user data privacy, the British newspaper said, citing people familiar with the matter.

“We have been engaged in constructive discussions with the relevant US regulators for quite some time now,” said Tencent president Lau.

“Things are moving in a positive direction” with the overall risk “manageable”, he said.

“While there are due processes to be followed in the US, other regions are actually very keen for us to invest in gaming companies.”

Mistral chief calls for European AI levy to pay creatives


By AFP
March 20, 2026


Valued at 11.7 billion euros ($13.5 billion), Mistral has staked a place as Europe's challenger to the AI behemoths that have emerged in the US with valuations in the hundreds of billions. - Copyright AFP/File Lionel BONAVENTURE

Companies selling artificial intelligence models in Europe should pay a “levy” to support cultural industries, Arthur Mensch, the head of French developer Mistral, said Friday.

AI models are trained on vast swathes of human-generated data including text, audio and video, which has prompted complaints and legal challenges to their developers from both creators and copyright-owning companies in America and Europe.

Operators of AI models in Europe should pay “a revenue-based levy… reflecting their use of content publicly available online,” Mensch wrote in an op-ed for the Financial Times (FT) shared with AFP.

“Proceeds would flow into a central European fund dedicated to investing in new content creation, and supporting Europe’s cultural sectors,” he added.

Mistral’s external affairs chief Audrey Herblin-Stoop told AFP that the company was suggesting a levy of between 1.0 and 1.5 percent of revenues.

With most major AI developers based in the US, Mensch insisted that “this levy would apply equally to providers based abroad, creating a level playing field within the European market and ensuring that foreign AI companies also contribute when they operate here”.

Brussels’ AI regulation, adopted in 2024, requires systems to respect the EU’s copyright rules.

But the question of how to apply the law to generative AI systems remains undecided.

In exchange for paying the levy, AI developers “would gain what they urgently need: legal certainty,” Mensch wrote.

“The mechanism would shield AI providers from liability for training on materials accessible on the web,” he added — without replacing direct agreements between data owners and AI firms.

Valued at 11.7 billion euros ($13.5 billion), Mistral has staked a place as Europe’s challenger to the AI behemoths that have emerged in the US with valuations in the hundreds of billions.

Those dominant players enjoy “extremely permissive regulatory contexts on copyright,” Herblin-Stoop said.

American AI giant Anthropic nevertheless agreed in September to pay $1.5 billion to settle a class-action lawsuit by authors.

Mistral was itself accused last month of using copyrighted works including “Harry Potter” and “The Little Prince” to train its AI model, in an investigation by French media Mediapart.

The company told AFP at the time that it “respects the opt-out mechanisms and deploys safeguards” against including copyrighted material.

Nevertheless, some of the works involved are “especially popular and duplicated many times online”, making it difficult to exclude them completely from training data.


Global music market grows, calls for AI compensation: industry body

By AFP
March 18, 2026


Taylor Swift was the biggest-selling global artist of 2025 - Copyright GETTY IMAGES NORTH AMERICA/AFP/File Frazer Harrison

The global music industry generated $31.7 billion last year, driven by online streaming, industry body IFPI said Wednesday, as it called on the sector to ensure AI-generated content compensates musicians.

Music revenues rose 6.4 percent, marking the eleventh consecutive year of expansion, according to the International Federation of the Phonographic Industry, which represents more than 8,000 global record labels.

Streaming accounted for nearly 70 percent of annual revenue, with paid subscriptions reaching 837 million worldwide.

But the IFPI warned against the increasing threat of AI-generated streams of fake content.

“Streaming fraud is theft, plain and simple,” the group said in its annual report, calling instead for technology to “support and enhance creativity, not replace it.”

AI-generated tracks regularly go viral, such as the runaway success of an AI cover of Belgian musician Stromae’s “Papaoutai” at the end of January.

According to the report, Deezer revealed that it receives more than 60,000 AI-generated tracks every day.

AI music generation platforms — such as US-based Suno and Udio — argue their work is covered by the American copyright doctrine of “fair use,” which does not require rights holders’ consent.

The IFPI urged policymakers to uphold copyright protections.

“Music is embracing the future, demonstrated by record company partnerships with generative AI developers who respect the rights of creators,” the group said.

Suno reached an agreement with record label Warner Music Group in November to compensate artists whose work is used to create AI-generated tracks.


Image: — © AFP

Revenues from physical formats were up, including from vinyl which grew 13.7 percent.

Asia drove the rise in vinyl and CDs, while these formats were almost non-existent in the North Africa and Middle East market, where streaming accounts for 97.5 percent of revenue.

Taylor Swift was the biggest-selling global artist of 2025, followed by Korean group Stray Kids and Canadian rapper Drake.



In Hollywood, AI’s no match for creativity, say top executives


By AFP
March 16, 2026


US filmmaker Steven Spielberg says he has never used AI in his award-winning films, and he doesn't support AI if it takes work from creatives - Copyright AFP Jean Baptiste Lacroix


Alex PIGMAN

Artificial intelligence is transforming Hollywood at a pace that has sent shockwaves through creative industries, but human creativity will always prevail, a leading executive at the cutting edge of that change told AFP.

The disruption was a dominant theme at this week’s South by Southwest conference in Austin, Texas where veteran director Steven Spielberg made clear he was drawing a line in the sand.

“I’ve never used AI on any of my films yet. We have a writer’s room. All the seats are occupied,” Spielberg said. “I am not for AI if it replaces a creative individual.”

Joshua Davies, chief innovation officer of Artlist — a Tel Aviv-based AI video platform that has most recently been positioning itself as a supplier of creative tools to filmmakers — told AFP the technology would never eclipse the human creative.

If given the choice between something made by a techie using an AI tool and something made by a creative, “I know which one I would rather watch at the end,” said Davies, who founded video editing software company FXhome before it was acquired by Artlist in 2021.

Davies acknowledged the industry’s anxiety was not unfounded, with new video models having “struck fear in the hearts of everybody” — not just over copyright and personality infringement, but over the fundamental question of how film and television production will look in a matter of years.

“If I was bringing out an Iron Man movie in 2027, 2028 — would I be going to multiple visual effects houses, would I expect them to be utilizing AI? We’re all kind of working out our way through that,” he said.

Davies described the platform’s AI video tools as a way to “fill in the bits that you can’t shoot, or didn’t shoot, or you don’t have the budget to shoot,” rather than a wholesale substitution for going out on location.

– ‘Holy grail’ –

Yet the timing is charged. Editors, visual effects artists and other Hollywood professions have watched the rapid advance of generative AI with alarm, fearing that tools capable of producing broadcast-quality footage at a fraction of traditional costs could hollow out entire job categories.

Major studios are actively evaluating how AI can be integrated into production pipelines, foreshadowing significant workforce changes across an industry that has already endured a bruising period following the Covid pandemic and the writers’ and actors’ strikes of 2023.

Artlist made headlines in February when it produced a Super Bowl LX spot in under five days using its own products, at a fraction of the multi-million-dollar cost typical of Big Game advertising.

Davies was keen to push back on the narrative that the ad represented the future of production without human involvement.

That wasn’t what it was, he said. It was creatives “using the tool to get the very best out of it.”

A self-described “techie guy,” Davies said the platform’s current obsession is giving creators nuanced control over creating or editing footage — something he described as the company’s “holy grail.”

Existing models, he said, handle simple static shots reasonably well but struggle with complex camera movements and consistent performance across multiple takes.

You can prompt an elaborate shot, but for now “you’ll get something random” that you can’t work with.

On cost, Davies cautioned against unrealistic expectations, suggesting AI would reduce production expenses significantly but not eliminate them.

Davies said his long-term hope was that AI would serve as a leveling force for independent filmmakers and content creators who currently lack the budgets to realize their ambitions.

“There are definitely YouTubers who make some of the best action work out there on no budget,” he said.

“AI will level that playing field completely — the story will be what matters.”

He struck a cautiously optimistic note on the creative industry’s direction, dismissing the most dystopian predictions.

“The idea that no one works at the end of it is the bit that doesn’t hold any water with me,” he said.

“There’s been more and more of everything, not less and less — and the cream rises to the top anyway, because the human element is what we crave.”