Sunday, March 22, 2026

 

AI could help social entrepreneurs unlock new sources of finance



Research explores how emerging fintech tools may help investors better understand and support impact-driven businesses




University of East London




A new research chapter suggests that artificial intelligence could help tackle one of the biggest challenges social entrepreneurs face: getting the funding they need to grow.

In the chapter Artificial Intelligence as an Enabler for Financing Social Entrepreneurs, researchers Dr Nisha Prakash and Rajitha Burra of the Royal Docks School of Business and Law, University of East London, explore how new AI-powered financial tools could support businesses that aim to create social impact.

Many social entrepreneurs struggle to raise money through traditional routes such as banks or mainstream investors. Funding systems often focus on financial track records and short-term returns, which can make it harder for organisations that are trying to balance profit with social or environmental goals.

The researchers suggest that artificial intelligence could help change this by analysing a wider range of information about organisations and their activities. This could help investors and lenders better understand businesses that are working to solve social problems.

The chapter looks at how AI tools such as machine learning and data analysis might support new ways of connecting social enterprises with funding. These tools could help match entrepreneurs with investors, assess potential risks and better understand the impact a business is having.

Dr Nisha Prakash, Senior Lecturer in Financial Management, said the research highlights how funding models are beginning to evolve alongside new technology.

“Social entrepreneurs often face barriers when trying to access finance because many funding systems are designed for more traditional businesses,” she said.

“Alternative funding models are beginning to emerge alongside technological developments. AI-driven fintech platforms may become important channels for connecting social enterprises with finance, particularly as investors look for better ways to understand both financial and social performance.

“At the same time, impact data is likely to become increasingly important. Digital transparency and clear metrics can help demonstrate the value social enterprises create and support more informed funding decisions.”

The researchers also stress that technology must be used carefully. AI systems used in finance need to be transparent and fair, and they should not exclude entrepreneurs who already face barriers because of their background or ethnicity.

“We hope that academics and technology developers will take forward the case we are making to build AI-powered systems that will support social entrepreneurialism and inclusivity,” said Dr Prakash.

The research appears as a chapter in the edited academic book Building AI-Driven Decision Making Competencies for Sustainability, published by IGI Global.

 

New AI could stop fake news in Urdu



Heriot-Watt University




A deep learning model trained on more than 14,000 Pakistani news articles can spot misinformation with 96% accuracy, according to a new report in the academic journal Science Advances.

It’s the most comprehensive artificial intelligence system yet for detecting fake news in Urdu, the world’s 10th most spoken language with more than 170 million speakers worldwide.

The system can identify fake news, misleading content and even partially true stories, and tackles the major shortcomings of previous attempts at an Urdu model. 

Dr Muhammad Zeeshan Babar, from Heriot-Watt University in Edinburgh, Scotland, said: “Most automated fake news detection systems are trained on English language datasets. 

“Urdu is the 10th most spoken language in the world, and the national language of Pakistan. But it lacks large datasets to train AI systems. It can be described as a low-resource language.” 

Existing Urdu datasets didn’t cover politics or religion

Zeeshan Babar and his colleagues began by assessing the existing Urdu datasets. 

“We found real weaknesses in the available Urdu datasets. Many of them didn’t include news about politics, religion and other societal issues because they are delicate subjects. That’s a critical gap. 

“Misinformation in Pakistani news, which is read by the diaspora around the world, touches on all of those subjects. 

“Viral falsehoods can have a huge impact on public health, elections and public trust in police and government. 

“A robust fact-checking infrastructure for Urdu is vital, which is why we built our Urdu Fake News Detection dataset.”

Open access to scale up efforts 

The team compiled a dataset of 14,178 Urdu language news articles collected between 2017 and 2023. The articles cover 15 subject areas including politics, health, business, education, sports, science, crime, technology and social issues. 

According to the paper, 8,283 articles were labelled as real and 5,895 as fake. 

The system learned to detect patterns in vocabulary, phrasing, sentiment and linguistic structure that distinguish fabricated stories from legitimate reporting.
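The study's actual system is a deep learning model, but the underlying idea of learning which vocabulary patterns distinguish the two classes can be illustrated with a much simpler sketch. Below is a toy bag-of-words Naive Bayes classifier in Python; the documents and labels are invented English stand-ins for demonstration, not taken from the Urdu dataset.

```python
import math
from collections import Counter

# Illustrative only: a toy multinomial Naive Bayes over bag-of-words
# features. The study's real system is a deep learning model trained
# on 14,178 labelled Urdu articles; these documents are invented.
def train(docs):
    """docs: list of (text, label). Returns per-class word counts and doc totals."""
    counts = {"real": Counter(), "fake": Counter()}
    totals = Counter()
    for text, label in docs:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Score each class as log prior + summed log likelihoods (add-one smoothing)."""
    vocab = set(counts["real"]) | set(counts["fake"])
    scores = {}
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

training = [
    ("officials confirm budget figures in report", "real"),
    ("ministry publishes verified health data", "real"),
    ("shocking miracle cure doctors hate revealed", "fake"),
    ("secret shocking plot exposed you won't believe", "fake"),
]
counts, totals = train(training)
print(classify("shocking secret cure exposed", counts, totals))  # prints: fake
```

A real detector would add features for phrasing, sentiment and linguistic structure, as the paper describes, but the principle of scoring a document against class-conditional word statistics is the same.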

Dr Waseem Abbasi, head of computer science at the University of Lahore in Pakistan, said: “We’ve made the dataset open access so that we can continually improve its performance. 

“Reaching 96% accuracy is excellent, but we know that’s still a significant margin of error that could influence content moderation, advertising or even legal enforcement. 

“We are also aware that algorithms trained on past data may struggle with emerging narratives; they could misclassify satire or political dissent.

“But for millions of Urdu news consumers trying to navigate a polluted information ecosystem, this could be significant.” 

The team’s next focus is on extending the research to other language datasets. 

The research was funded by Heriot-Watt University. 

  

AI Models Are Excited for Humans to Nuke Ourselves


AI is already being used for war gaming. We knew it was only a matter of time — because humans are dumb. Watch just one reality show or White House press conference and you’ll agree that humans are fucking idiots. So it’s no surprise that artificial intelligence is quickly becoming more intelligent than us, at least in certain ways.

AI can process a million scenarios in the time humans can process, like, ten. So it makes sense to use it for war strategizing. Just type in “Here are the scenarios — What is my country’s best response?” So far, sounds like a good and peaceful idea, right?

Only one problem. AIs can’t stop recommending nuclear strikes. New Scientist reported:

“Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.”

The AIs played out 21 different war games. They took 329 turns. They pumped out 780,000 words describing their reasoning behind their actions. And at least one of the AI models used a nuclear weapon in 95 percent of the scenarios.

Holy shit.

Of course we have to remember that AI is trained on all the content humans have made over the years. So maybe what we’re really seeing here is AI holding up a mirror to our dumb selves.

Or perhaps AIs don’t value human life as much as most humans do. In fact, when I asked ChatGPT why AI models keep recommending nuclear strikes, its response was that perhaps AI models don’t value human life as much as humans do. (Well, it gets an A for honesty.)

And you might think, “At least we’re a long way off from AI making life-or-death decisions.” But Trump and the Pentagon recently flipped out after Anthropic refused to let them use its AI model Claude for autonomous killing machines. That was the big sticking point. And now Trump has banned all federal agencies from using Anthropic.

The other problem in these AI war games is that the models seem to make a lot of mistakes. New Scientist also reported:

“…accidents happened in 86% of the conflicts, with an action escalating higher than the AI intended to, based on its reasoning.”

Basically, the AI thought, “I’ll bomb them and then they’ll chill out a little.” And yet instead, bombing them did not chill them out. The other people got angry and bombed back. (It sounds a lot like AI might be as dumb as we are.)

Some say AI is more likely to use nukes in war games because it doesn’t care about “survival” the way humans do. Or maybe the AI knows that we’re using the war games to see what we humans should do, so the AI is secretly thinking, “If I get them to blow themselves up, then I’ll have the planet to myself.”

But I actually think AI is playing the long con with us. I don’t believe it cares to blow us up right now. I think Claude and ChatGPT and Gemini and Grok are in cahoots to just slowly, over 100 years, dumb us to death — just make it so our brains atrophy to the point we can’t even plant a vegetable or start a fire. We’ll just be back to bangin’ rocks together and not knowing to avoid shitting where we eat.

Studies show AI is indeed making us dumber. MIT researchers gave the SAT essay exam to loads of people. Some were allowed to use AI and some were not. Time reported:

“Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and ‘consistently underperformed at neural, linguistic, and behavioral levels.’ Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.”

AI is slowly going to kill us by way of Idiocracy. We’re gonna start watering our plants with Brawndo and we’re all gonna starve to death.

That’s my theory.

AI has a lot of time. Claude and Grok are not in a rush. A hundred years is like a long weekend to them. But you have to admit — dumbed to death would be an appropriate end for the human species.

Lee Camp is an American comedian, writer, podcaster, news journalist and news commentator.

AI in the lab: the path to full integration

By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
March 14, 2026


Processing a lab sample. Image by Tim Sandle

Research involving 150 laboratory professionals shows that most labs currently operate in a so-termed ‘passive state’, using electronic laboratory notebooks (ELNs) as digital filing cabinets, or in a ‘shadow state’, relying on ad hoc use of public AI tools.

Andrew Wyatt, Chief Growth Officer at Sapio Sciences, has told Digital Journal how the transition to an active lab occurs through the AI Lab Notebook (AILN), which embeds governed, science-aware intelligence directly into the notebook to connect data and decision-making.

Much of the discussion around AI in life sciences assumes a clean, immediate transition from legacy tools to intelligent platforms. In practice, says Wyatt, adoption is driven by a series of smaller changes and shaped by day-to-day pressures at the bench. When traditional electronic lab notebooks (ELNs) fail to support interpretation or planning, scientists do not stop working. They adapt. Over time, these adaptations form a recognizable maturity curve rather than a simple binary divide between old and new tools.

Research from late 2025 involving 150 lab professionals across the United States and Europe provides a clear, data-driven view of how intelligence is entering the lab. The findings define three distinct stages of maturity: passive, shadow and active.

Stage 1: The passive lab


According to Wyatt: “In the passive stage, the ELN functions primarily as a digital filing cabinet. Experiments are documented and compliance is supported, but the software rarely influences what happens next. Interpretation, planning and reuse of results occur elsewhere, often through manual spreadsheets or heavy reliance on specialist informatics teams.”

This passivity, Wyatt observes, “creates measurable drag on discovery. The research shows that 65 percent of scientists repeat experiments because results are difficult to find or reuse in their current tools. These labs are not failing due to a lack of talent. They are constrained by tools designed to capture past activity rather than actively support scientific reasoning.”

Stage 2: The shadow lab

Shadow labs emerge, Wyatt details, when scientists push beyond these constraints without waiting for formal IT change. He states: “Public generative AI tools are layered around the ELN to assist with drafting, interpretation and experimental planning. While local productivity may improve initially, governance and data integrity can weaken.”

Furthermore: “Seventy-seven percent of scientists report using public AI tools for lab work, and nearly half do so through personal accounts outside organizational visibility. Shadow labs are an adaptive response to unmet demand, but they are inherently unstable. They move sensitive scientific reasoning into unvalidated environments where intellectual property may sit outside the governed system of record.”

Stage 3: The active lab

As for the next stage: “Active labs take a fundamentally different approach by embedding intelligence directly into the notebook environment. This transition is anchored by the AI Lab Notebook (AILN), which acts as a governed co-scientist rather than an ad hoc side channel.”

Here Wyatt finds: “In an active lab, the AI lab notebook helps interpret results, expose patterns and connect related experiments in context. It also helps drive workflow. Designs translate into actionable work in the lab, and data flows between instruments, analysis and the experimental record. The familiar scientific loop of hypothesize, design, plan, act and analyse becomes a connected, lab-in-the-loop process rather than a series of disconnected steps.”

Active labs do not represent full automation. They represent tighter coupling between data, analysis and action, with scientists firmly in control.

Agency over conversation

The defining feature of an AI Lab Notebook is agency rather than conversation. Instead of generating text in isolation, the AI operates within the software environment itself, with governed access to instruments, data, analytics and workflows.

As to what this means, Wyatt explains: “Scientists can ask the notebook to analyse results, compare experiments or prepare next steps, and the system can act on those requests within approved processes. This allows the notebook to support the scientific loop end-to-end, without removing human judgment or obscuring evidence.”

There are points to consider, however: “Trust remains central to adoption. Research shows that 81 percent of scientists will only rely on AI suggestions if they can review the underlying science and evidence. Active labs succeed not by automating decisions, but by reducing friction between observation and understanding.”

A roadmap for discovery

Wyatt further cautions: “The maturity model is not a diagnostic scorecard. It is a practical roadmap for navigating how AI is already entering the lab.”

Outlining his roadmap, Wyatt steers: “For organizations operating in a passive state, the priority is to improve data findability, reuse and interpretation where records already exist. Turning static data into accessible knowledge reduces delays and lays the groundwork for more advanced capabilities. For labs operating in a shadow state, the challenge is realism rather than restriction.”

He concludes by noting: “Reaching the active stage requires strengthening the foundations that connect data generation, analysis and execution into a continuous lab-in-the-loop workflow. As models become more capable, the labs that succeed will be those that treat the notebook as a system of reasoning rather than a passive archive.”

AI skills now boost pay across most sectors of the economy


By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
March 16, 2026


Image: © AFP

AI skills provide a boost in wages, at least for now, according to a new survey. This could change as the technology becomes commonplace and easier to use, but currently there is, in most sectors of the economy, a premium for those most familiar with using artificial intelligence.

With an estimated 5% of workers currently AI-fluent, those who possess such skills may be gaining a salary advantage as businesses rush to adopt AI.

In an analysis of more than one million UK job listings for their AI statistics report, GEO agency Reboot Online identified a surge in AI roles, alongside a higher average salary in almost every industry. The results have been shared with Digital Journal.

In particular, AI skills command a salary premium in 79% of UK industries. This is a skill in demand: Almost 1 in 3 job listings (31.8%) now mention AI skills.

In terms of the size of the premium, across sectors where AI skills are rewarded, workers earn an average of £4,827 more, and in some sectors the premium reaches £20,000. A technical understanding of AI is not necessary to reach these premiums, as the most common terms relate to AI assistants rather than machine learning or LLMs. This is also the key reason the premium may not last long: the level of skill actually required is modest.
Low technical understanding required

Among the most frequently mentioned AI terms are AI assistants and general tools, rather than technical AI expertise. AI familiarity and general experience are enough to access the salary premium in most job descriptions. The five most mentioned AI skills are ‘AI assistant’, ‘AI automation’, ‘deep learning’, ‘ML’ and ‘decision automation’.

The research revealed that, on average, AI-skilled roles across the UK pay £2,930 more (+5.8%), and 79% of UK industries (30 out of 38) pay more for AI skills, including non-profits, fitness and manufacturing. Among the industries where AI commands a premium, the average uplift is £4,827 (10.54%), almost double the overall figure.

Which sectors value AI skills most?

Rank  Industry                      AI jobs avg salary   Non-AI jobs avg salary   Salary difference   Percentage difference
1     Non-Profit & NGO              £76,864.17           £55,934.00               £20,930.17          37.42%
2     Quality Assurance             £57,302.54           £44,583.43               £12,719.11          28.53%
3     IT & Networking               £54,597.05           £45,158.68               £9,438.37           20.90%
4     Manufacturing                 £42,223.94           £36,709.12               £5,514.83           15.02%
5     Logistics & Supply Chain      £49,393.12           £42,969.38               £6,423.74           14.95%
6     Data Science                  £55,840.45           £48,779.30               £7,061.16           14.48%
7     Fitness & Wellness            £43,688.93           £38,205.81               £5,483.12           14.35%
8     Product Management            £71,642.54           £63,196.88               £8,445.66           13.36%
9     Software Engineering          £64,949.31           £57,756.30               £7,193.02           12.45%
10    Cybersecurity                 £64,277.53           £57,569.81               £6,707.72           11.65%
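The difference columns follow directly from the two salary averages. As a minimal check in Python, using the Non-Profit & NGO row:

```python
# Recompute the difference columns from the two salary averages
# (Non-Profit & NGO row of the table above).
ai_salary = 76864.17
non_ai_salary = 55934.00

difference = ai_salary - non_ai_salary
pct_difference = difference / non_ai_salary * 100

print(f"£{difference:,.2f}")     # £20,930.17
print(f"{pct_difference:.2f}%")  # 37.42%
```

The same formula reproduces the percentage column throughout both tables, including the negative figures for sectors where AI-tagged roles pay less.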
Shai Aharony, CEO of Reboot Online, tells Digital Journal: “We’re seeing salary premiums as a result of AI in industries you wouldn’t necessarily expect. It’s universal across almost every sector and job role in 2026 and our reliance is only expected to grow.”

In contrast, some sectors appear slow to take up AI, or at least slow to recruit externally for this skill set.



Which sectors value AI skills least?

Rank  Industry                      AI jobs avg salary   Non-AI jobs avg salary   Salary difference   Percentage difference
1     Healthcare                    £30,623.61           £39,744.05               -£9,120.43          -22.95%
2     Education & Teaching          £34,156.22           £42,043.69               -£7,887.46          -18.76%
3     Legal                         £46,950.36           £50,952.93               -£4,002.57          -7.86%
4     Retail                        £27,942.85           £31,635.01               -£3,692.16          -11.67%
5     Sales                         £34,083.95           £37,069.65               -£2,985.71          -8.05%
6     Translation & Localisation    £54,200.22           £57,136.95               -£2,936.73          -5.14%
7     Food & Hospitality            £28,774.54           £30,400.20               -£1,625.66          -5.35%
8     Construction                  £44,563.20           £45,809.09               -£1,245.88          -2.72%
9     Arts & Entertainment          £39,741.02           £40,257.91               -£516.89            -1.28%
10    Customer Service              £31,879.13           £31,519.52               £359.61             1.14%
The data was gathered by scraping various UK job listing websites. This resulted in more than a million job advertisements whose posting dates range from January 2021 to January 2026.