Wednesday, February 12, 2025

UN says former Bangladesh govt behind possible ‘crimes against humanity’


By AFP
February 12, 2025


A protester who partially lost his sight after being shot during the student-led uprising - Copyright AFP LUIS TATO

Nina LARSON

Bangladesh’s former government was behind systematic attacks and killings of protesters as it tried to hold onto power last year, the UN said Wednesday, warning the abuses could amount to “crimes against humanity”.

Before prime minister Sheikh Hasina was toppled in a student-led revolution last August, her government cracked down on protesters and others, including “hundreds of extrajudicial killings”, the United Nations said.

The UN rights office said it had “reasonable grounds to believe that the crimes against humanity of murder, torture, imprisonment and infliction of other inhumane acts have taken place.”

These alleged crimes committed by the government, along with violent elements of her Awami League party and the Bangladeshi security and intelligence services, were part of “a widespread and systematic attack against protesters and other civilians,” a UN report into the violence said.

Hasina, 77, who fled into exile in neighbouring India, has already defied an arrest warrant ordering her to face trial in Bangladesh for crimes against humanity.

– Up to 1,400 killed –


The rights office launched a fact-finding mission at the request of Bangladesh’s interim leader Muhammad Yunus, sending a team including human rights investigators, a forensics physician and a weapons expert to the country.

Yunus welcomed the report, saying he wanted to transform “Bangladesh into a country in which all its people can live in security and dignity”.

Wednesday’s report is mainly based on more than 230 interviews with victims, witnesses, protest leaders, rights defenders and others, reviews of medical case files, and of photos, videos and other documents.

The team determined that security forces had supported Hasina’s government throughout the unrest, which began as protests against civil service job quotas and then escalated into wider calls for her to stand down.

The rights office said the former government had tried to suppress the protests with increasingly violent means.

It estimated that “as many as 1,400 people may have been killed” over a 45-day time period, while thousands were injured.

The vast majority of those killed “were shot by Bangladesh’s security forces”, the rights office said, adding that children made up 12 to 13 percent of those killed.

The overall death toll given is far higher than the most recent estimate by Bangladesh’s interim government of 834 people killed.

– ‘Rampant state violence’ –

“The brutal response was a calculated and well-coordinated strategy by the former government to hold onto power in the face of mass opposition,” UN rights chief Volker Turk said.

“There are reasonable grounds to believe hundreds of extrajudicial killings, extensive arbitrary arrests and detentions, and torture, were carried out with the knowledge, coordination and direction of the political leadership and senior security officials as part of a strategy to suppress the protests.”

Turk said the testimonies and evidence gathered by his office “paint a disturbing picture of rampant state violence and targeted killings”.

The report also documented gender-based violence, including threats of rape aimed at deterring women from taking part in protests.

And the rights office said its team had determined that “police and other security forces killed and maimed children, and subjected them to arbitrary arrest, detention in inhumane conditions and torture.”

The report also highlighted “lynchings and other serious retaliatory violence” against police and Awami League officials or supporters.

“Accountability and justice are essential for national healing and for the future of Bangladesh,” Turk said.

He stressed that “the best way forward for Bangladesh is to face the horrific wrongs committed” during the period in question.

What was needed, he said, was “a comprehensive process of truth-telling, healing and accountability, and to redress the legacy of serious human rights violations and ensure they can never happen again.”

‘Check the Label’ app lets Canadians scan to check if products are made in Canada

JUST IN TIME FOR U$ TRADE WAR

By Chris Hogg
DIGITAL JOURNAL
February 11, 2025


Check the Label is a community-driven app that helps empower Canadian consumers to make more informed decisions about the products they buy.

A new free app is helping Canadians verify whether the products they buy are made in Canada.

Built and launched in only a week by Punchcard Systems, Check the Label allows users to scan a product’s barcode and instantly see its origin. The beta version is live, a mobile app for Android is available, and an iOS version is coming soon.


The app launches as Canada faces new tariffs from the U.S., impacting domestic industries and increasing costs for consumers.

With uncertainty around trade policies, many Canadians are looking for ways to support local businesses.

Check the Label gives consumers a way to make informed purchasing decisions by providing transparency on where products are made.

Launched as a social initiative by Punchcard, the app is dubbed a “community-driven platform” where every user is a contributor to a growing knowledge base that helps everyone make more informed choices.
How Check the Label came to be

It all started with mustard. A simple condiment, yet its journey from seed to shelf is anything but straightforward.

The idea started on a Saturday morning when Estyn Edwards, Partner and CTO of Punchcard, was talking with his partner about how he could contribute during the economic uncertainty. His partner suggested something simple — start with the grocery store. That conversation led Edwards to explore how consumers could better understand where their products come from.

By the end of the day, Edwards had developed a rough prototype. On Sunday, he continued iterating on the idea and by Monday morning, the full Punchcard team was involved, refining and expanding the concept.

Within a week, Check the Label was live.

“We’re leveraging AI to be able to provide some value-added data to the consumer,” says Sam Jenkins, Managing Partner of Punchcard.

The app uses multiple data sources and artificial intelligence to assess whether a product meets the criteria for “Made in Canada” or “Product of Canada.”

This process, Jenkins says, isn’t entirely straightforward, as a “product of Canada” means that 98% of the production or manufacturing costs were incurred in Canada, while “Made in Canada” means that 51% of the total direct costs of production or manufacturing were incurred in the country.
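
As a rough illustration of those two thresholds, the sketch below shows how such a classification rule could be expressed in code. It is a simplified assumption for illustration only, not Check the Label’s actual logic; the function name and its input are hypothetical.

    # Hypothetical sketch of the cost-share thresholds described above.
    # Not Check the Label's real implementation; names are illustrative only.
    def classify_origin(canadian_cost_share: float) -> str:
        """Classify a product from the fraction (0.0-1.0) of direct production
        or manufacturing costs incurred in Canada."""
        if canadian_cost_share >= 0.98:
            return "Product of Canada"   # 98% or more of costs incurred in Canada
        if canadian_cost_share >= 0.51:
            return "Made in Canada"      # at least 51% of total direct costs
        return "Neither claim supported"

    # Example: seeds grown in Canada but processed abroad may fall below 51%.
    print(classify_origin(0.45))  # -> "Neither claim supported"

Cost share is only part of the picture; where processing happens adds further wrinkles, as the mustard example below illustrates.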

With food, Jenkins notes that many products contain Canadian-sourced ingredients but undergo processing elsewhere before being sold back in Canada.

“The use case that came up for Estyn at the very beginning was mustard,” Jenkins says. “It’s 100% Canadian mustard seeds that are shipped over the border for secondary production, turned into mustard, and then shipped back to us as a Canadian mustard product. Our goal is to make these distinctions clearer.”

Sam Jenkins is Managing Partner of Punchcard Systems. – Photo by Digital Journal
Refining product data and expanding reach

Punchcard is developing other AI-driven tools aimed at improving business and consumer decision-making, and with Check the Label, increasing the accuracy of its data remains a key focus.

Improving the reliability of product origin information requires collaboration with retailers, manufacturers, and consumers, along with ongoing technological refinement.

With Check the Label, success depends on expanding its database through contributions from users, retailers, and manufacturers.

Jenkins emphasizes that crowdsourced data is key to improving product accuracy.

“Once a product is scanned, we try to provide as much information as possible, but we also allow users to contribute their own knowledge — maybe they see something on the label that isn’t reflected in our database yet,” he says. “This helps us make the data more reliable over time.”

In early anecdotal testing, Check the Label successfully identified about 95% of scanned products, but Jenkins says gaps remain because Canadian product databases are not as comprehensive as some international counterparts.

“The challenge being Canadian data sets aren’t nearly as robust,” says Jenkins. “That’s why partnerships and user contributions are so important. We’re actively working to close those gaps.”

To improve accuracy, Punchcard is seeking partnerships with retailers, manufacturers, and producers to expand its database.
Beta testers help refine accuracy

Early adopters play a critical role in refining Check the Label by scanning products and providing feedback.

“User engagement is key,” says Jenkins. “The more people scan and vote, the better this tool becomes. Even if a product isn’t in our database yet, every scan contributes to a better system. Our goal is to make sure no Canadian picks up a product and wonders where it really comes from.”

Retailers can strengthen consumer trust


Retailers can also play a crucial role by sharing verified product origin data and ensuring their Canadian-made goods are properly represented.

“We want to talk to retailers about how we can make sure we’re getting better data into the hands of Canadians,” Jenkins explains. “Many retailers already highlight Canadian-made products, but this app allows them to go a step further by providing clear, real-time verification. This strengthens trust with their customers and reinforces the value of buying local.”

The app, Jenkins adds, extends that effort beyond any single store by putting data in the hands of every Canadian who uses it, anywhere in the country, at any retailer.

Manufacturers and data providers can enhance the platform


Manufacturers and suppliers with product origin databases can also significantly enhance Check the Label by contributing verified data.

“We’re integrating third-party databases and actively looking for manufacturers who want to ensure their Canadian-made products are recognized,” Jenkins says. “If you’re a company that produces goods in Canada, now’s the time to help make this platform as robust as possible. The more accurate the data, the better the experience for consumers — and the stronger the case for buying Canadian.”

As more users engage with Check the Label, its potential to influence consumer habits and retail transparency will only grow. With continued collaboration from businesses and individuals alike, the app aims to create a more informed and empowered marketplace.

For Canadians looking to support domestic products and businesses, Check the Label provides a practical tool to make that process easier and more reliable. As the database expands and participation increases, the impact of the platform will become even stronger.

Learn more about Check the Label here.




This article was created with the assistance of AI. Learn more about our AI ethics policy here.


Written By Chris Hogg

Chris is an award-winning entrepreneur who has worked in publishing, digital media, broadcasting, advertising, social media & marketing, data and analytics. Chris is a partner in the media company Digital Journal, the content marketing and brand storytelling firm Digital Journal Group, and Canada's leading digital transformation and innovation event, the mesh conference. He covers the impact of innovation where technology intersects with business, media and marketing. Chris is a member of Digital Journal's Insight Forum.


Musk aide given payment system access by mistake

NO MISTAKE ABOUT IT


By AFP
February 12, 2025


Musk is aiming to cut over a trillion dollars in federal spending and pledged to push the legal limits of executive power to do so - Copyright AFP/File Jim WATSON

An Elon Musk aide was mistakenly given clearance to make changes to the US Treasury Department’s highly sensitive payments system containing millions of Americans’ personal information, a department official said Tuesday.

The admission came in a sworn statement to a federal judge amid heated criticism that the 25-year-old employee of billionaire Musk had editing rights to a system that handles trillions of dollars in government payments.

The employee, Marko Elez — who had no federal government status — resigned Friday after being linked to a racist social media account, only for Musk to announce that he was being reinstated.

President Donald Trump has tasked Musk with taking an axe to government spending as the leader of a new agency called the Department of Government Efficiency, or DOGE.

The sworn statement, seen by AFP, says that Elez was supposed to gain read-only access to the system, under the supervision of the Bureau of the Fiscal Service, the Treasury Department section that manages payments and collections.

“On the morning of February 6, it was discovered that Mr. Elez’s database access to SPS on February 5 had mistakenly been configured with read/write permissions instead of read-only,” said the statement from Joseph Gioeli, an official from the payments section.

SPS stands for Secure Payment System.

An initial investigation showed all of Elez’s interactions with the SPS system occurred within a supervised session and that “no unauthorized actions had taken place,” the official added.

Elez gained access through a Treasury Department laptop computer, triggering an uproar among critics of the Trump administration and worries about the safety of Americans’ personal data.

DOGE has no statutory standing in the federal government — which would require authorization from Congress — and neither Musk nor his aides are civil servants or federal employees.

Elez was one of two DOGE workers who gained access to the sensitive Treasury payments system.

A confidential internal assessment reported by US media warned the Treasury Department that this access represented an “unprecedented insider threat risk.”

Before he resigned, a court order had restricted Elez to read-only access to the payments system, as Democratic lawmakers and citizen advocacy groups warned of the dangers to national security and the economy posed by the data he could access.

Another member of the DOGE team, Thomas Krause, also submitted a sworn statement to the same judge on Tuesday, stating that he was employed by the Treasury on January 23 as an unpaid “Senior Advisor for Technology and Modernization.”

He was later delegated the duties of “Fiscal Assistant Secretary,” but said “I have not yet assumed the duties.”

Krause is listed in the Treasury Department’s organizational chart under this title.

“Although I coordinate with officials at USDS/DOGE, provide them with regular updates on the team’s progress, and receive high-level policy direction from them, I am not an employee of USDS/DOGE,” he said in his statement, adding that the department’s team within the Treasury consisted of himself and Elez.


AI feud: How Musk and Altman’s partnership turned toxic


By AFP
February 11, 2025


Image: — © AFP

The feud between Elon Musk and Sam Altman has become one of the bitterest rivalries in business history, with the Tesla tycoon bidding to buy Altman’s OpenAI in an apparent attempt to derail the ChatGPT maker’s ascent to becoming one of the world’s most important companies.

– What sparked the rivalry? –

Musk and Altman were among the 11-person team that founded OpenAI in 2015. Created as a counterweight to Google’s dominance in artificial intelligence, the project got its initial funding from Musk, who invested $45 million to get it started.

Three years later, Musk departed OpenAI. The company initially cited “a potential future conflict for Elon…as Tesla continues to become more focused on AI,” noting the electric vehicle company’s ambitions in autonomous driving.

However, subsequent lawsuits revealed a more contentious story: OpenAI claimed Musk left after his attempts to become CEO or to merge the company with Tesla were rejected.

The situation remained relatively quiet until November 2022, when OpenAI’s release of ChatGPT created a global technology sensation — one that didn’t feature Musk at its center and which made Altman a star.

Musk quickly began criticizing the company, trolling it on social media for keeping its source code private and signing a widely publicized manifesto calling for a pause in AI development, even as he pursued his own AI projects.



Elon Musk and Sam Altman were among the 11-person team that founded OpenAI in 2015 – Copyright AFP/File Frederic J. BROWN, Jung Yeon-je

The conflict escalated in August 2024 when Musk refiled a lawsuit against OpenAI and its backer Microsoft, claiming the ChatGPT maker had betrayed its founding mission of benefiting the public good in favor of pursuing profits.

Musk later updated the lawsuit to prevent OpenAI’s conversion to a for-profit company — a change Altman considers crucial for the company’s development.

– Buy OpenAI? –

OpenAI’s unusual structure — a non-profit with a money-making subsidiary — reflected its idealistic origins as a counter to Google.

However, the massive costs of designing, training, and deploying AI models have forced the company to seek a new corporate structure that would give investors equity and provide more stable governance.

This need for stability became particularly evident after a 2023 boardroom coup briefly saw Altman fired, only to be reinstated days later following Microsoft’s intervention.

The transition to a traditional for-profit company requires approval from California and Delaware authorities, who will scrutinize how the non-profit arm of OpenAI is valued when it becomes a shareholder in the new company.

Current investors prefer a lower valuation to maximize their share of the new company.

Musk’s bid, valuing the OpenAI non-profit at $97.4 billion — approximately $30 billion above current negotiations according to The Information — appears designed to disrupt the company’s fundraising efforts.

“Overall this is Musk’s attempt to hurt OpenAI’s conversion into a non-profit to slow them down. I doubt Musk’s business rationale for the bid will play out in his favor,” said Lutz Finger, visiting senior lecturer at Cornell University.

– Trump attention? –

Musk’s latest move to undermine his former ally came shortly after Altman made an appearance at the White House, announcing his involvement in Stargate, a Donald Trump-sponsored AI infrastructure project partnering with Japan’s SoftBank.

Musk, who plays a central role in the Trump White House, immediately criticized the $500 billion AI project, claiming the funding wasn’t secured, in an apparent break with the president.

Facing the barrage of hostility from the Tesla billionaire, Altman has increasingly suggested that Musk’s actions stem from regret over leaving OpenAI in 2018, particularly as Musk’s competing venture, xAI, struggles to gain traction despite massive investments.

“He’s just trying to slow us down. He obviously is a competitor,” Altman told Bloomberg TV.

“I wish he would just compete by building a better product…. Probably his whole life is from a position of insecurity. I don’t think he’s a happy person. I do feel for him.”
IMPERIALIST HUBRIS

US, UK decline to sign Paris AI summit declaration

AFP, AP, Reuters
February 11, 2025

Co-hosts France and India were able to secure Beijing's signature on a joint artificial intelligence declaration, but not London or Washington's. JD Vance said "excessive regulation" could stifle the growing industry.



The US and UK declined to sign a joint declaration at the summit, with visiting Vice President JD Vance warning of 'excessive regulation' deterring innovation and risk-taking
Image: Thomas Padilla/AP Photo/picture alliance

Dozens of countries signed a declaration in Paris on Tuesday calling for AI development to be "open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all" and "making AI sustainable for people and the planet."

But the US and UK were notable absentees from the list of signatories of the "Statement on Inclusive and Sustainable Artificial Intelligence," even as China's support was secured by co-hosts France and India.

While dozens of countries signed up to the declaration, the global market leader did not.
Image: Michel Euler/AP Photo/picture alliance


Why did the US decline to sign?

Visiting US Vice President JD Vance laid out several US reservations in a speech at the summit at the Grand Palais.

"We believe that excessive regulation of the AI sector could kill a transformative industry," he told the gathering of world and industry leaders.

"We feel very strongly that AI must remain free from ideological bias and that American AI will not be co-opted into a tool for authoritarian censorship."

Vance alleged that EU regulations such as the Digital Services Act and the GDPR rules on online privacy led to unacceptable compliance costs for smaller companies.

"Of course, we want to ensure the internet is a safe place, but it is one thing to prevent a predator from preying on a child on the internet, and it is something quite different to prevent a grown man or woman from accessing an opinion that the government thinks is misinformation," he said.

Vance also held bilateral talks with both Macron and European Commission President Ursula von der Leyen on the sidelines of the event, on his first European tour as vice president
Image: Thomas Padilla/AP Photo/picture alliance

Veiled China warning from Vance, and possibly the UK


To the surprise of some observers, China did sign up to Tuesday's declaration. And while Vance did not mention the government in Beijing by name, he appeared to refer to it at times on Tuesday.

"From CCTV to 5G equipment, we're all familiar with cheap tech in the marketplace that's been heavily subsidized and exported by authoritarian regimes," Vance said.

Chinese startup DeepSeek last month made its new AI reasoning model freely available, leading to a sharp 17% decline in the price of Nvidia shares. Nvidia's stock price had risen more than tenfold over the past two years amid the emergence of AI models like ChatGPT.




Vance argued that partnering with these cheap options "means chaining your nation to an authoritarian master that seeks to infiltrate, dig in and seize your information infrastructure."

The British government was less forthcoming when explaining its reasons not to sign up. But a spokesman for Prime Minister Keir Starmer did say the UK government felt the declaration lacked "practical clarity" on issues like global governance, and ducked some "harder questions" on national security.

Macron also calls to cut red tape, but lobbies for 'trustworthy AI'

In his closing address, French President Emmanuel Macron told the summit — though not Vance, who left after giving his speech — that he also favored cutting red tape.

However, he added that regulation was needed to ensure trust in AI, and to prevent people from rejecting it as unreliable.

"We need a trustworthy AI," Macron said, after spending the previous day touting France's efforts to accelerate development in the sector.

Some of the coolest AI cats, literally in this instance, flocked to Paris for the summit
Aurelien Morissard/AP Photo/picture alliance

European Commission President Ursula von der Leyen, whose office drafted the GDPR and Digital Services Act, similarly said the EU planned to reduce bureaucratic hurdles, as Europe risks falling behind the US and China in the nascent industry.
OpenAI's Altman rebuffs supposed buyout offer from fierce critic Musk

Meanwhile, back in the US, business mogul Elon Musk leaked news of an apparent bid to buy the company behind ChatGPT, OpenAI, to the Wall Street Journal newspaper.


Musk, who has been promoting his own chatbot Grok on the X platform, has been openly feuding with OpenAI CEO Sam Altman for months, including as recently as Monday.

Altman responded to the publication by the WSJ with a curt "no thank you" online, while a company official spoke about it at more length in Paris.

"OpenAI is not for sale and any such suggestion is really disingenuous," the company's Chief Global Affairs Officer Chris Lehane said on the sidelines of the summit, dismissing the offer as coming from a competitor "who has struggled to keep up with the technology and compete with us in the marketplace."


India to host next summit of this kind, Elysée Palace says


Indian Prime Minister Narendra Modi, the guest of honor and co-host in Paris, had spoken moments before Vance in Paris on Tuesday.

Modi and Macron embraced on stage after the French president's closing address on Tuesday
Michel Euler/AP Photo/picture alliance

He had appealed for international support, calling for "collective, global efforts to establish governance and standards that uphold our shared values, address risks and build trust" in AI.

The Elysée Palace said as the summit concluded in Paris on Tuesday that India would host the next event of its kind.

Edited by: Wesley Rahn

Mark Hallam has been a news and current affairs writer and editor with DW since 2006.

World leaders seek elusive AI common ground at Paris summit


By AFP
February 11, 2025


Macron and Modi must mount a charm offensive to find consensus with other governments - Copyright AFP Thomas SAMSON


Tom BARFIELD and Daxia ROJAS

World leaders were set to hold formal talks in Paris on Tuesday on artificial intelligence (AI), seeking elusive common ground on a technology subject to a global race for promised economic benefits.

Hosted by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, the gathering comes hours after Elon Musk reportedly put in a bid for star developer OpenAI, underscoring AI’s potential to gather power into a single pair of hands.

Attempts to reach global agreement may also frustrate major powers such as the United States and China, which have their own geopolitical tech priorities.

Media reports suggest that neither Britain nor the US — two leading countries for AI development — will sign a planned joint declaration as it stands.

“Good AI governance” requires “clear rules that foster the acceptance of AI technologies”, German Chancellor Olaf Scholz was to tell counterparts, according to a draft of his speech seen by AFP.

Tech and political leaders are expected to arrive at the opulent Grand Palais from 8:45 am (0745 GMT) before the plenary session begins at 10:00 am.

Among them will be US Vice President JD Vance, Chinese Vice Premier Zhang Guoqing and European Commission chief Ursula von der Leyen.

Outside observers criticised an alleged leaked draft of the joint statement for failing to mention AI’s suspected threat to humanity’s future as a species.

The supposed draft “fails to even mention these risks” said Max Tegmark, head of the US-based Future of Life Institute, which has warned of AI’s “existential risk”.

– ‘Plug, baby, plug!’ –

In recent weeks, the United States’ $500 billion “Stargate” programme led by ChatGPT maker OpenAI, and the emergence of the high-performing, low-cost Chinese start-up DeepSeek, have brought into focus the technical challenges and price of entry for nations hoping to keep abreast of AI.

Meanwhile, the Musk-led $97.4-billion bid for OpenAI reported by the Wall Street Journal would compound the tech influence of the world’s richest man, already boss of X, Tesla, SpaceX and his own AI developer xAI as well as a confidant of US President Donald Trump.

Sam Altman, the OpenAI chief set to speak in Paris later Tuesday, responded to the reported offer with a dry “no thank you” on X.

For France, Macron vowed Monday to blast through red tape to build AI infrastructure in his bid to keep Europe competitive.

“We will adopt the Notre Dame de Paris strategy” of streamlined procedures that saw France rebuild the landmark cathedral within five years of its devastation in a 2019 fire, he said.

Macron’s push to highlight French competitiveness saw him repeatedly trumpet 109 billion euros ($113 billion) to be invested in French AI in the coming years.

He has also hailed France’s extensive fleet of nuclear plants as a key advantage providing clean, scalable energy supply for AI’s vast processing needs.

“I have a good friend in the other part of the ocean saying ‘drill, baby, drill’,” Macron said in a reference to Trump’s pro-fossil fuels policy.

“Here there is no need to drill, it’s plug, baby, plug!” he said.

EU Commission chief von der Leyen is expected to make further announcements on the bloc’s competitiveness on Tuesday.

– Gender pay gap –

Away from the political pageantry, OpenAI’s Altman was to address business leaders later Tuesday at the Station F tech campus in southeast Paris, founded by French telecoms billionaire Xavier Niel.

Altman mused in a blog post Monday that with ever more powerful AI systems on the horizon, “it does seem like the balance of power between capital and labour could easily get messed up” in the near future.

On Monday, high-profile summit attendees had warned against squandering the technology’s economic promise in the shorter term.

World Trade Organization chief Ngozi Okonjo-Iweala said “near universal adoption of AI… could increase trade by up to 14 percentage points” from its current trend.

But global “fragmentation” of regulations on the technology and data flows could see both trade and output contract, she said.

In the workplace, AI is mostly replacing humans in clerical jobs disproportionately held by women, International Labour Organization head Gilbert Houngbo said.

That risks widening the gender pay gap even though more jobs are being created than destroyed by AI on current evidence, he added.

EU vows to "cut red tape" as US slams bloc's tech rules
11/02/2025 - DW

The EU has long been seen as more of a regulator — and less of an innovator — on tech and AI. Now the bloc wants to change the narrative.


EU Commission chief von der Leyen vowed to make it easier for AI innovators to seek investment and grow in Europe
Image: Eliot Blondet/ABACAPRESS.COM/picture alliance


Something of a global trend has emerged over the last two decades as technology made by big companies gradually crept into every part of our lives, from the phones in our pockets to AI: The United States innovates on tech, China innovates some more, and Europe regulates.

In the last three years, the European Union has rolled out a raft of world-first rules to rein in the power big tech wields within the bloc; from laws forcing firms to monitor and cut down on harmful online content, to comprehensive artificial intelligence legislation designed to foster "trustworthy" AI in Europe.

But as world leaders, CEOs and researchers mingled at the "AI Action summit" in Paris this week, the EU seemed keen to give its image as big tech's police officer an industry-friendly makeover.

When European Commission President Ursula von der Leyen took to the stage in the French capital, the words "rules" and "regulations" passed her lips only once. Instead, "innovation" and "investment" were her talking points, with new plans announced to drum up €200 billion ($207 billion) in funding and build new AI gigafactories, alongside a pledge to "cut red tape."

A meeting between the EU's von der Leyen and US Vice-President JD Vance took place on the sidelines of the AI summit in Paris
Image: Thomas Padilla/AP Photo/picture alliance


US piles pressure on the EU to ease tech rules


If regulation seems to be falling out of vogue here in Europe, it's welcome news in Washington. Vice President JD Vance did not mince his words in Paris on Tuesday, warning that "America cannot and will not accept" foreign governments "tightening the screws" on US tech companies.

The vice president went on to rebuke the EU's online content rulebook — dubbed the "Digital Services Act" — before declaring that the US is the world's AI leader and "plans to keep it that way."

Despite showing up to the summit, the US refused to sign up to an international document drafted and rubber-stamped by 60 nations at the event — including Germany, France, China, India, South Africa, Kenya, the UAE and Brazil.

The text calls for "open, inclusive, transparent, ethical, safe, secure and trustworthy" AI, and includes pledges to "reduce digital divides" and make AI "sustainable for people and the planet."


Too late for the EU to get ahead in global AI race?

The US may be busy drawing its big tech battle lines, but the pressure to rethink the EU's approach to tech is not just coming from across the Atlantic. 2024 research by Digital Europe, a technology industry lobby group based in Brussels, shows that Europe lags behind both the US and China in terms of investment in AI.

The organization also warned that "complex regulations hinder European companies' growth and scalability, often forcing them to seek more favorable markets."

Janosch Delcker, a tech expert and author, said the latest EU investment plans announced on Tuesday "could be one element to change the game."

While Brussels' pledge to "mobilize" funds was thin on detail, it followed a similar national announcement from France.



"Decision-makers seem to have understood that more investment will be necessary for the EU to compete in this global race for AI," Delcker said, adding that the recent release of Chinese startup DeepSeek's large language model sparked reflection.

"DeepSeek seems to have developed an AI model that's able to compete with the big players in certain aspects. But it did that with, from everything we know, a fraction of the resources," Delcker, who formerly hosted DW's Techtopia show, said.

"A lot of people here in Europe understood it as kind of an eye-opening moment to say: Hold on, if DeepSeek can do it, then we can do it as well."


Deregulation push a danger for EU values?


But the EU's broader change of tune is worrying some.

"Something I honestly fear right now is that this anti-regulation narrative will lead to us not taking implementation seriously," Carla Hustedt, who directs the Centre for Digital Society at the Mercator Foundation think tank, told DW.

While Hustedt praised the push for more investment in Europe, she warned: "We have a lot of really good regulation in place in the EU right now, and now is the time to enforce it in a good way."

Hustedt said transparency rules were particularly important for EU companies if the bloc wants to see more uptake of AI models. "They need to know what they're buying. Is this safe? Is this robust? Is it biased?" she said.

US President Donald Trump says he'll slap tariffs on the EU, complaining the bloc buys too little from the US
Image: Al Drago/ abaca/picture alliance


Big tech: Bargaining chip in EU-US standoff?


With tech billionaires now walking the halls of power in Washington and the US ramping up tariffs on global imports, many in Brussels now wonder how big tech may figure in a future EU response.

Seeking ways to avoid a trade war is priority number one in Brussels — with some EU leaders suggesting Europe buy more energy or military equipment from the US to keep Trump on side. Privately, some acknowledge that playing nice with big tech could curry favor in the US administration.

But if trade tensions escalate, big tech could be in Brussels' eyeline too, with EU retaliation tools allowing for restrictions on trade in services.

Pressed on broader EU retaliation options in the case of a potential trade war, EU parliamentarian Bernd Lange told DW last week: "It could be tariffs, it could be exclusion from public procurement, it could be market restriction."

"We want not to have a situation where a partner country, or a country which uses coercive measures, can calculate which kind of counter measures we will take," he added.

Edited by: Jess Smee.
Why the African continent has a role to play in developing AI

Heads of state, top government officials, and scientists from around 100 countries have gathered in Paris for a two-day international summit on developing artificial intelligence (AI). Decisions are expected to be reached on AI's real-world impact and how to take it forward together. The African continent has an important role to play, a Cameroonian AI specialist tells RFI.


Issued on: 11/02/2025 - RFI

Young men using computers in an internet café in Cameroon's capital Yaoundé. 
© RFI/Amélie Tulet

According to the African Union, AI is a "strategic asset pivotal to achieving the aspirations of Agenda 2063" (The Africa We Want) and Sustainable Development Goals (SDGs).

To get a sense of where the continent is at, RFI spoke to Paulin Melatagia, head of the research team on IA and data science at Yaounde I University.

RFI: Artificial intelligence will profoundly change our societies in many fields. Do you think the African continent has already begun its transformation?

Paulin Melatagia: Yes, I believe the continent has already started its transformation. There are a lot of initiatives across the continent – lots of startups and many public organisations are beginning to invest in the development of AI applications, notably in the fields of health, transportation, and agriculture. They're being proposed almost every month as part of competitions and hackathons to address Africa-specific issues.

Paulin Melatagia, head of the Data Sciences and AI research team at the Department of Computer Science at the University of Yaoundé 1 in Cameroon © INRIA

RFI: Would you say African leaders have grasped the magnitude of what is happening?

PM: There are already a set of measures at the African Union level, with documents that outline an AI strategy for the continent. Measures are also being taken at the institutional level in various countries, such as the creation of authorities responsible for data protection. Some countries are also setting up infrastructures like computing centres that allow data to be processed and used to develop AI. Governments in most countries are aware of the stakes and opportunities of AI, even if progress is quite uneven from one country to another.

RFI: Which African countries are currently leading in this field?

PM: According to the Oxford Insights ranking, the leading countries in North Africa in terms of AI preparedness and implementation are Egypt, Tunisia, and Morocco. In sub-Saharan Africa, notable countries include Mauritius, South Africa, Rwanda, Senegal, and Benin.


RFI: Isn't internet access still a barrier to developing AI on the continent?

PM: Yes there are challenges. One major issue is connectivity because it's important, especially for startups, to get access to data. For that to happen smoothly, you need high-quality internet. Another challenge is the lack of computing infrastructure in order to develop artificial intelligence. It requires significant computing power, and unfortunately Africa currently has very few supercomputers capable of processing large datasets for AI development.

Another major obstacle is data availability. To create AI solutions that address Africa’s problems, we need African data. But when we look at the statistics, we see that very little data is collected on Africa. So when we analyse well-known AIs like ChatGPT, we notice significant biases regarding African realities. These biases stem from the limited amount of African data used to train these models.


RFI: Are there any 100 percent African AI projects?

PM: There are already some proposals for 100 percent African AI, but few for the moment. Take the example of African languages. Currently, African languages are rare in the digital and AI sectors. Yet we know that many people in rural areas speak these languages and don't speak colonial languages. About 26 percent of adults in Africa are illiterate when it comes to colonial languages.

Developing AI solutions that understand and process African languages would therefore be extremely beneficial for these populations. Unfortunately, African languages are considered "under-resourced," meaning there is not enough digitised data to create AI models tailored for Africa.


RFI: What message should Africa convey at a summit like the one in Paris?

PM: In my opinion, the fundamental message is that Africa has a role to play in the development of artificial intelligence, both in solving social problems on the continent and in contributing to new AI concepts and knowledge that can drive global AI progress forward.

This interview was adapted from the original in French and lightly edited for clarity.
Empowering youth and protecting others: Safer Internet Day 2025


By Dr. Tim Sandle
February 11, 2025
DIGITAL JOURNAL


Homework: Image by Tony Alter (CC BY 2.0)

Today is Safer Internet Day 2025. The day is marked in many countries around the world, with a focus on staying safe online. Aimed primarily at younger people, the global event focuses on creating a secure online environment for everyone, while encouraging positive and respectful interactions.

The day is also themed around a different topic each year. For 2025, the theme in the U.K. is ‘Too good to be true? Protecting yourself and others from scams online’, while the U.S. theme is “Empower Youth and Shape Online Safety Policies”.

A different theme is needed each year because technologies keep evolving, and so does the safety landscape around them. This presents new challenges, and continued industry investment in safety features remains essential to making the Internet a safer place for all.

Safer Internet Day was created as an initiative of the EU SafeBorders project in 2004 to raise awareness of emerging online issues and current concerns. It was later adopted by other countries, including the U.S.

With the U.K. theme, there are three key messages:

• If something sounds too good to be true (like an in-game trade or social media giveaway), then it probably is.
• Don’t share personal information online and remember that not everyone can be trusted in games or online.
• Watch out for phishing and don’t click on links from unexpected messages, even if they look like they come from someone you know or a company you’ve heard of.


Among the global events taking place, a significant one is being held in Sacramento (as well as online). Larry Magid, CEO of ConnectSafely, is hosting an array of sessions including an in-person gathering in Sacramento, a virtual event for parents, and local school and community activities nationwide.

As the official U.S. coordinator, ConnectSafely aligns with global celebrations in over 100 countries, focusing on enhancing digital safety and well-being.

The Sacramento forum will host discussions on topics including school phone policies, media literacy, AI in education, social media age verification and parental controls, cyber scams, and AI regulation.

The main event in Sacramento features a keynote from California State Superintendent of Public Instruction Tony Thurmond, and additional discussions with Assembly members Ash Kalra and Buffy Wicks, and executives from big tech companies – Meta, Google, TikTok, and Snap.

The Safer Internet Day event will also be live streamed online.

Speaking ahead of the events, Magid states: “Our goal for Safer Internet Day is to transform dialogue into action by integrating the voices of youth directly into the conversations that affect them. These events are a critical step toward building a safer, more responsible digital world for everyone.”
Canada unveils new National Cyber Security Strategy to enhance digital resilience


By Jennifer Kervin
February 11, 2025
DIGITAL JOURNAL 

Image generated by Gemini Advanced

Cybercriminals aren’t slowing down — and neither is Canada’s response.

The federal government has introduced a new National Cyber Security Strategy (NCSS), aiming to strengthen the country’s resilience against evolving cyber threats.

Minister David McGuinty announced the initiative, emphasizing a “whole-of-society” approach to cyber security in an increasingly interconnected world.

To support these efforts, the new NCSS is backed by an initial investment of $37.8 million over six years for cyber security initiatives. This includes funding for awareness and education programs aimed at equipping children and youth with the knowledge to navigate digital spaces safely.

Addressing a growing cyber threat landscape

Technology underpins Canada’s critical infrastructure, from hospitals and energy suppliers to transit and telecommunications networks. With that reliance comes an expanding attack surface for cybercriminals, creating risks to national security and economic stability.

The new NCSS sets out a long-term framework to enhance cooperation among governments, industry, Indigenous communities, academia, and international allies. It aims to reduce disruptions to essential services, facilitate faster information sharing, and promote stronger preventive measures.

The strategy focuses on three pillars:

• Work with partners to protect Canadians and Canadian businesses from cyber threats
• Make Canada a global cyber security industry leader
• Detect and disrupt cyber threat actors

“Canada must continue to be a leader in cyber security, especially in the face of persistent and ongoing cyber threats,” McGuinty said in a statement.

“The new National Cyber Security Strategy demonstrates the Government of Canada’s commitment to a whole-of-society and agile approach to protecting our nation’s cyber security for citizens across our great country, for Canadian businesses and for essential cross-border services and critical infrastructure.”

The strategy also underscores Canada’s commitment to aligning its cyber security efforts with the United States and other allies. The goal is to bolster cross-border infrastructure protections and ensure a unified approach to deterring cyber threats.
Building on past initiatives

The NCSS builds on the foundation laid by the 2018 strategy, which established key cyber security institutions such as the Canadian Centre for Cyber Security and the National Cybercrime Coordination Centre under the Royal Canadian Mounted Police.

Other recent efforts include the launch of a Cyber Attribution Data Centre at the University of New Brunswick, announced in December 2024, to enhance Canada’s capabilities in identifying and countering cyber threats. Additionally, the Federal Cyber Incident Response Plan, published in 2023, provides protocols for managing cyber security incidents affecting non-governmental systems.

Looking ahead

The government’s latest National Cyber Threat Assessment for 2025-2026 warns that malicious actors will continue to target Canadians through fraud, scams, and ransomware attacks. The NCSS is positioned as a proactive response to these challenges, reinforcing Canada’s commitment to maintaining a secure, stable, and accessible digital environment for all citizens.

As cyber threats evolve, so too will Canada’s approach. With a renewed emphasis on collaboration, investment, and education, the government aims to ensure the country remains resilient in an increasingly digital world.

This article was created with the assistance of AI. Learn more about our AI ethics policy here.



Written By Jennifer Kervin
Jennifer Kervin is a Digital Journal staff writer and editor based in Toronto.