IMF warns of ‘inevitable’ AI-powered threats to global financial system
By AFP
May 7, 2026

Last month, AI company Anthropic warned that its latest model -- not yet available to the public -- was incredibly efficient at finding and exploiting software vulnerabilities - Copyright VATICAN MEDIA/AFP Handout
The International Monetary Fund (IMF) warned on Thursday of the risks to global financial stability posed by cyberattacks powered by advanced artificial intelligence tools, calling for greater international cooperation on the issue.
“IMF analysis suggests that extreme cyber-incident losses could trigger funding strains, raise solvency concerns, and disrupt broader markets,” the lender warned in a new report.
The study’s authors highlighted the risks posed by the highly interconnected nature of the global financial system, with advanced AI models able to “dramatically reduce” the time and cost of exploiting vulnerabilities.
The warning comes weeks after AI company Anthropic cautioned that its yet-to-be-released “Mythos” model was incredibly adept at finding and exploiting such weaknesses.
The model was particularly efficient at identifying vulnerabilities that developers and users had previously been unaware of.
In the hands of hackers, such so-called “zero-day” vulnerabilities are considered particularly dangerous.
On Wednesday, White House economic adviser Kevin Hassett told Fox News that an “all-government” and private sector effort was being made to test the model and ensure it does not cause harm to US businesses or government.
A day earlier, the US government announced a policy shift in which it would have access to tech giants’ new AI models to evaluate them before they are released.
The IMF warned that emerging and developing countries, “which often have more severe resource constraints, may be disproportionately exposed to attackers targeting regions with weaker defenses.”
The risks, the authors said, were systemic, cut across sectors and came with the threat of contagion, with the reliance on a small number of platforms and cloud providers likely to increase “the impact of any single exploited weakness.”
“Defenses will inevitably be breached, so resilience must also be a priority, specifically to limit how far incidents spread and ensure rapid recovery,” the report said.
IMF chief Kristalina Georgieva warned last month that the global financial system was not ready for the cybersecurity threats posed by AI.
“We are very keen to see more attention to the guardrails that are necessary to protect financial stability in a world of AI,” she told CBS News, seeking global collaboration on the issue.
AI adoption gap between wealthy and developing nations widens, Microsoft report finds
By AFP
May 7, 2026

The AI adoption gap between wealthy and developing nations continues to widen - Copyright AFP Kirill KUDRYAVTSEV
Generative artificial intelligence is being used by 17.8 percent of the world’s working-age population, but the gap between wealthy and developing nations continues to widen, according to a report published Tuesday by Microsoft.
In the first quarter of 2026, 27.5 percent of people aged 15-64 in developed countries used a generative AI tool, compared with 15.4 percent in the developing world — a gap that widened by 1.5 percentage points from the second half of 2025, according to the report’s estimates.
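The gap figures above are simple percentage-point arithmetic; as a quick sketch, the 27.5 and 15.4 percent rates come from the report, while the implied second-half-2025 gap is a back-calculation rather than a published number:

```python
# Adoption rates from the Microsoft report (percent of people aged 15-64
# using a generative AI tool in Q1 2026)
developed = 27.5
developing = 15.4

# Current gap, in percentage points
gap_q1_2026 = round(developed - developing, 1)  # 12.1 points

# The report says the gap widened by 1.5 points since H2 2025,
# implying an earlier gap of roughly:
gap_h2_2025 = round(gap_q1_2026 - 1.5, 1)  # 10.6 points

print(gap_q1_2026, gap_h2_2025)
```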
The divide stems from significant inequality in access to internet connectivity, basic digital skills and electricity, according to the Microsoft AI Economy Institute.
Because most major AI companies are based in the US, their models have historically performed best in English, which is also slowing the spread of such tools in non-English-speaking countries.
But progress in processing non-European languages is fueling a catch-up in adoption in some countries, particularly in Asia, the US tech giant noted.
The United Arab Emirates tops the ranking of AI usage at 70.1 percent, followed by Singapore, Norway, Ireland and France.
The estimates were based primarily on measurements from computers running Windows and Microsoft products such as Bing and Copilot.
They only partially captured usage on Apple devices, and consolidated data was lacking for Russia, Iran and China.
The United States — home to dominant large AI models like ChatGPT, Claude and Gemini — ranked only 21st, at 31.3 percent.
AI usage in China — the world’s second-largest economy which is jostling with the US for an edge in the AI race — was 16.4 percent, the report said.
Pushing back against fears of job losses driven by automation, Microsoft argued in the report that AI coding tools “could increase demand for developer jobs.”
The company cautioned, however, that “it is still too early to know the full impact” of AI on the labor market.
For the first time in its history, the company itself offered voluntary departures to nearly 9,000 of its US-based employees in April.
According to Layoffs.fyi, a private aggregator, nearly 99,000 people have been laid off in the tech sector since January 1, primarily in the United States.
AI disinfo tests South Korean laws ahead of local elections
By AFP
May 6, 2026

South Korea has hired hundreds of staff to track and counter manipulated content ahead of local elections - Copyright AFP Jung Yeon-je
Hawon Jung
In an airy office in South Korea, workers comb through social media, uncovering AI-generated content whose growing sophistication is testing toughened election laws ahead of local polls.
Experts warn that cheaper, more advanced artificial intelligence models are driving the global spread of online disinformation — a major concern in South Korea, which has adopted AI particularly rapidly.
The government strengthened the law in 2023 to counter the misuse of AI around elections, and has hired hundreds of staff to track and counter manipulated content ahead of local ballots on June 3.
But some say they feel like they are fighting an uphill battle.
“We can literally see how fast this technology evolves — like how each new version of AI makes videos and audio look and sound even more convincing,” disinformation monitor Choi Ji-hee said.
“Our job keeps getting harder and harder,” she told AFP at the National Election Commission (NEC) headquarters in Gwacheon, just south of Seoul.
On a recent workday, Choi and 18 colleagues clicked through Instagram, YouTube and other platforms, as well as online chatrooms and “fan clubs” for local politicians, in search of content concocted by AI.
Recent finds include a fake TV news report claiming a mayoral candidate had made Time magazine’s list of rising political leaders, and a slick, AI-produced K-pop song praising a politician while mocking his rivals.
Once authorities confirm the content is the work of AI, they can demand its removal and issue harsh punishments, including jail time in extreme cases.
In one corner, workers discussed how to dissect a suspicious video, mulling whether to separately extract its audio, key frames, facial images and background footage.
Nearby, data analyst Kim Ma-ru mapped where, when, and by whom fake materials had been distributed, helping Choi’s team detect dubious content more quickly.
– ‘Whack-a-mole’ –
The local polls are the third major ballot in South Korea since an amended law to combat AI-fuelled election falsehoods was passed in 2023.
More than 45 percent of South Koreans use generative AI, according to government figures. ChatGPT maker OpenAI says the country has the most paid subscribers outside the United States.
At the same time, South Koreans consume more low-quality generative content — “AI slop” — than any other country, and reports of false AI-created content rose 27-fold between the general election in 2024 and the presidential campaign the following year.
“It’s an exhausting job that can feel like a (game of) whack-a-mole,” Kim told AFP.
“But it’s important work — there’s a sense of civic duty in it.”
AFP has debunked AI-generated election disinformation in South Korea, including a video of the 2025 presidential frontrunner Lee Jae Myung — now the country’s leader — purportedly faking a hunger strike.
Beyond fake content about candidates, conspiracy theories about vote-rigging in recent years have also dented public trust in elections.
Jailed ex-president Yoon Suk Yeol sent hundreds of armed troops to the NEC during his short-lived bid to impose martial law in late 2024, repeating widely disproven far-right claims of vote hacking.
On the street outside the office, pro-Yoon protesters have hung a banner reading: “Investigate the rigged elections immediately!”
Both Choi and Kim declined to be photographed or filmed, citing growing threats and online bullying targeting election workers.
– Strict laws –
“In such a short time, it has become so difficult for voters to tell what is real and what is not,” said Jung Hui-hun, a digital forensic specialist at the NEC’s cyber investigations unit, as he ran videos through state-developed software tools to detect AI imagery.
Officials say the programmes are about 92 percent accurate, with human experts reviewing the most sophisticated material.
Once confirmed, authorities demand that either the poster or the platform remove the content for violating the 2023 law, which bans AI material that involves candidates and looks realistic enough to confuse voters in the three months before a poll.
Repeat offenders, or those who create content deemed particularly harmful, can face up to seven years in jail or a maximum fine of 50 million won ($34,000).
“The rules may seem excessive to those outside South Korea, especially in places like the US that highly prioritise freedom of expression,” Kim Myuhng-joo, director of the Korea AI Safety Institute, told AFP.
But as swiftly as South Koreans embraced AI, many grew aware of its dangers, Kim said, citing the election conspiracy theories and a public scandal around deepfake pornography targeting women and girls.
“Public consensus has formed that we need tough regulations over the use of AI when it comes to election transparency,” Kim said.
A survey last year showed 75 percent of South Koreans believed AI-generated content could sway election results, and nearly 80 percent supported stronger efforts to detect and punish its use.
Jung, the digital forensic specialist, acknowledged the country’s response had “many limits” but voiced hope it would spur debate on how to tackle AI-fuelled disinformation.
“We’re still trying to figure out what is the best solution… but I think we are moving forward — slowly but surely,” he said.
Canada’s Cohere embraces ‘low drama’ amid AI giant tumult
By AFP
May 15, 2026

Montreal-based Joelle Pineau joined Cohere last year after nearly eight years leading Meta's Fundamental AI Research lab - Copyright AFP ALAIN JOCARD
Alex PIGMAN
In an industry that runs on hype and grand gestures, Canadian AI firm Cohere is charting a different course from Silicon Valley. No talk of superintelligent machines, no public feuding, just one question: can it make money?
“Cohere is a very low drama company,” chief AI officer Joelle Pineau told AFP in a recent interview, noting that she counts many friends at OpenAI and Anthropic — and that Cohere is quite different.
The company was co-founded in Toronto in 2019 by Aidan Gomez, an AI researcher who co-authored a seminal paper that laid the foundations for modern AI systems, underscoring the central role of the Canadian AI research ecosystem in the field’s development.
Pineau, who joined Cohere last year after nearly eight years leading Meta’s Fundamental AI Research lab, said the company’s understated approach extends to one of the hottest buzzwords in the industry: artificial general intelligence, or AGI, the hypothetical point at which AI surpasses human intelligence.
“We don’t spend a lot of time talking about AGI,” Pineau said, dismissing the theorizing as a distraction.
Instead, she said, the company rallies around a decidedly less glamorous slogan — “ROI over AGI” — a reference to the return on investment that has yet to materialize across much of the cash-burning AI industry.
Pineau said the company’s focus on business clients shapes how the firm thinks about AI risk, cutting through what she described as fear-mongering around hypothetical scenarios.
“We’ve had a number of people who’ve gone around and essentially made people scared of AI as opposed to really understanding the real risks,” she said, arguing that time spent catastrophizing could be better spent addressing tangible safety challenges.
Those real risks, she said, include workforce disruption, data privacy and infrastructure security — concerns that Cohere’s enterprise customers in financial services, healthcare and government are actively grappling with.
“People are worried whether that’s going to impact their jobs, their ability to have a livelihood,” Pineau said. “These are completely legitimate questions.”
On the competitive threat from Chinese AI models, she pushed back against alarmist framing while acknowledging security considerations. The risk of malicious code injection through AI-generated software, she noted, is not unique to any one country.
“It’s not only the Chinese who can do this — any developer who decides that they want to do this” has mechanisms to do so, she said, adding that robust safety practices were good hygiene regardless of a model’s origin.
– ‘Spicy takes’ –
Pineau said Cohere was well-positioned to capitalize on demand from European and Asian markets wary of dependence on US technology platforms.
The company last month announced a deal to acquire German AI firm Aleph Alpha, creating a combined entity valued at around $20 billion with dual headquarters in Toronto and Berlin.
The deal, backed by both the Canadian and German governments, is designed to position Cohere as a sovereign alternative for businesses to American AI giants in the European market, as well as in Asia.
“Given the geopolitical context, some of them are afraid of just getting locked out of US tech solutions,” she said. “We are more than happy to offer an alternative.”
While Cohere will continue to call Toronto its global home, Pineau said the company’s ambitions stretch well beyond its borders. With offices in San Francisco, New York, London and Paris — and now a deepening presence in Germany — the goal is unambiguously international.
Still, she suggested the founders’ origins might leave a lasting imprint on the firm’s character.
“There may be some particular Canadian folklore that comes with it — some of the values of the co-founders that are going to permeate,” she said.
Asked whether leaning into splashier narratives — like rivals’ warnings of AI doom — might attract more investor attention and generate more publicity, Pineau suggested wryly that “maybe we’d get a lot more air time” by playing along.
“Maybe we’ll try some spicy takes once in a while,” she added.