Wednesday, February 25, 2026

 

Is social media addictive by design and can you beat the algorithm?

FILE - The TikTok logo is seen on a mobile phone in front of a computer screen which displays the TikTok home screen, Saturday, March 18, 2023, in Boston
Copyright AP Photo/Michael Dwyer, File


By Anna Desmarais


Social media features such as infinite scroll and personalised feeds can drive compulsive use. Experts argue that Big Tech should change its business models for meaningful change.

A recent European Commission ruling that TikTok’s “addictive design” breaches EU law has reignited the debate over whether social media is truly addictive.

Infinite scroll, autoplay, notifications, and a personalised feed were flagged by the Commission as potentially harmful to users’ mental and physical well-being.

Across the Atlantic, a California social media “addiction” trial is evaluating similar claims against Google and Meta platforms.

The plaintiff, known as KGM, and her lawyers argue that apps such as Instagram are deliberately engineered to keep young users hooked.

Are these platforms designed to be addictive, and if so, what can be done to beat them?

Is social media addictive?

Social media platforms work much like slot machines: they deliver unpredictable rewards and offer rapid feedback, such as comments and likes, said Natasha Schull, associate professor of media, culture and communication at New York University.

Design features on social media platforms, such as the “like” button, “For You” pages that recommend new content and “infinite scroll,” where the feed never ends, can also lead to compulsive use of the platforms, said Christian Montag, professor of cognitive and brain sciences at the University of Macau in China.

“Getting a like feels good,” Montag told Euronews Next. “Then they want to feel good again, so they post something again, [which] can lead to habit formation.”

TikTok adds autoplay and short-form videos into the mix, which creates an even faster reward cycle.

“The human brain responds strongly to novelty, and here something new is happening [every] 15 seconds,” Montag said. “So even if the current video snippet is not great, I’m always already in the expectation mode that the next one at least could be.”

The European Commission warned in its decision that users can slip into “autopilot mode” on platforms like TikTok, passively consuming content rather than actively engaging with it.

This type of social media consumption has been linked with “poorer mental health, including addiction, upward social comparison, fear of missing out, social isolation and loneliness,” said Daria Kuss, programme leader at Nottingham Trent University in the United Kingdom.

TikTok rejected the Commission’s characterisation of its platform as addictive, calling its findings “categorically false.” The company said it offers screen time controls and other tools for people to regulate how much time they spend online.

Change the business model, change the behaviour

Experts argue that social media companies measure success by the amount of time users spend on their platforms, which in turn drives advertising revenue. Both Montag and Schull said that the model inherently rewards maximising engagement.

“If you ask [social media companies], are you intentionally designing to addict people, they’d say absolutely not, we’re intentionally designing to optimise engagement,” Schull said, noting that the companies likely did not design their products to create addictions.

Montag and Schull suggest that platforms shift to subscription models. If users paid a small fee, platforms would no longer depend on advertising and personal data tracking for profit, which means some of those features could be removed.

Montag’s research found that people are not willing to pay for social media subscriptions because they are not used to the idea. However, once his participants learned how such a model could reduce screen time or fund fact-checkers to fight misinformation, he said they were more likely to pay.

Another possibility is directing public funding that goes to legacy media organisations to also fund alternative platforms, Montag added.

Some public bodies have already tried that. In 2022, the European Data Protection Supervisor (EDPS) launched EU Voice and EU Video, two European social media channels for EU institutions. The platforms shut down in 2024 due to a lack of funding.

The Public Spaces Incubator, a working group of public broadcasters from Belgium, Germany, Switzerland, the United States, Canada, and Australia, says it has developed over 100 prototypes to improve online conversation.

One example from the Canadian Broadcasting Corporation (CBC) shows a “public square view” embedded in a live video feed. The feature allows users to watch together and comment in real time, offering more nuanced reaction options such as “respectfully disagree,” “made me think,” or “changed my mind.” It is not immediately clear which tools, if any, have been deployed or whether they could replace social media.

Schull said that meaningful change for the Big Tech social media platforms may only come through legal action.

“If you're a designer and you're working for a company, your purpose is to increase engagement … and the only way I think that that is going to be stopped is if there are just cold and hard limits put on it, limits on time and access and age,” she said.

Are there alternatives?

The Fediverse, a decentralised social media network where independent platforms connect users without adverts, tracking or data sharing, offers alternatives to Big Tech’s platforms.

These sites include Mastodon, a replacement for X (formerly Twitter), Pixelfed, an Instagram-like picture-sharing app, and PeerTube, a video app similar to YouTube.

As of 24 February, there are 15 million accounts in the Fediverse, with 66 percent of them on the social media platform Mastodon.

Mastodon gained popularity when billionaire Elon Musk acquired Twitter, now X, in 2022. However, Montag noted the challenge facing more responsible social media companies.

“[I think it] will be a pretty hard task, to be honest, to come up with platforms which are convenient on the one hand, but not overdoing it in terms of user engagement and prolonging online times,” Montag continued.

How to limit doomscrolling

Social media users can also reduce compulsive scrolling themselves.

Schull recommends making it as hard as possible to access social media sites. One strategy is to move apps into a folder labelled “social media” on the last page of a smartphone’s home screen, so they are harder to get to. She also advised setting screen time limits on phones.

Users could also consider deleting social media apps from their smartphones altogether, Kuss and Montag recommended. Those who still want to use social media could access the sites from a desktop computer instead, Montag added, so it is less convenient.

“I'm not saying don't use social media at all, but don't have it accessible all the time, [because] that can reduce the online time,” Montag said, noting that people should disable notifications for the apps they want to keep on their phone.

Montag also suggested that users swap their phones for analogue technology when possible, such as using a manual alarm clock or a wristwatch to check the time instead.

If all else fails, hiding the phone from direct eyesight in “everyday situations” can also help, Kuss said.

Still, both Montag and Schull said the responsibility shouldn’t be on consumers to self-regulate, but on the platforms to change.

 

Are ‘microwave safe’ labels misleading? New report exposes health and environmental harms

Your microwave meal could contain a 'cocktail' of chemicals.
Copyright Canva

By Angela Symons

Microwave meals are convenient – but a new report reveals just how much they could be harming our health and the planet.

Microwave meals are a convenience that’s hard to resist on a busy day. But they could be quietly wreaking havoc on our health and environment, a new report warns.

The paper by Greenpeace International analyses 24 recent scientific studies on the hidden health risks of plastic-packaged ready meals.

It paints a grim picture: hundreds of thousands of tiny plastic particles leaching into our food along with hazardous chemicals that could have far-reaching health impacts.

“People think they’re making a harmless choice when they buy and heat a meal packaged in plastic,” says Graham Forbes, global plastics campaign lead from Greenpeace USA.

“In reality, we are being exposed to a cocktail of microplastics and hazardous chemicals that should never be in or near our food.”

And the contamination doesn't stop in our bodies. Plastic food trays and films pollute across their entire lifecycle – from fossil fuel extraction to energy-intensive manufacturing and eventual disposal.

When the time comes to throw these single-use plastics away, their multilayer materials make them tricky to recycle. As they break down into micro- and nanoplastics, these tiny fragments accumulate in soil, rivers and oceans, harming animals and re-entering our food system.

Even when they do make it into the circular economy, plastics degrade in quality and can re-release hazardous additives into new products.

Are plastic ready meals safe to heat and eat?

Convenience food items marked ‘microwave safe’ may be giving false reassurance to consumers, the report warns.

The label, the authors argue, generally refers to the structural stability of the container – not whether it releases microplastics or chemical additives into food.

One study found 326,000 to 534,000 micro- and nanoplastic particles leaching into food simulants after just five minutes of microwave heating. Nanoplastics are small enough to potentially enter organs and the bloodstream.

Plastics are also known to contain more than 4,200 hazardous chemicals. Most of these are not regulated in food packaging and some are linked to cancer, infertility, hormone disruption and metabolic disease, the report notes.

At least 1,396 food contact plastic chemicals have been detected in human bodies, with growing evidence linking exposure to neurodevelopmental disorders, cardiovascular disease, obesity and type 2 diabetes.

Higher temperatures, longer heating times, worn containers and fatty foods – which absorb more chemicals – significantly increase the amount of plastic particles and additives that leach into meals, according to the report.

Regulatory guidance on microplastics released from food packaging is insufficient globally, the report states, adding that industry denial has contributed to regulatory delays.

In the European Union, for example, food contact plastics are regulated through ‘migration limits’ for known chemical substances, based on advice from the European Food Safety Authority. There are currently no specific thresholds for microplastic particles.

Plastic pollution is growing, fast

Global plastic production is set to more than double by 2050, and plastic packaging is a huge part of the picture. It currently accounts for 36 per cent of all plastics, analysis by the International Energy Agency shows.

Already worth over €160 billion, plastic-packaged ready meals are set to grow in value to almost €300 billion in 2034 as consumers continue to chase convenience, research by global consulting firm Towards FnB found.

In 2024, 71 million tonnes of ready meals were produced globally, averaging 12.6 kg per person, according to market research published by Statista.

Greenpeace argues that food-contact plastics should fall under stricter global controls in the forthcoming UN Global Plastics Treaty, including phase-outs of hazardous additives rather than relying on downstream recycling.

“The risk is clear, the stakes are high and the time to act is now,” says Forbes.

 

Is this the future of train travel? Robot dogs and drones take over a metro station in China


By Theo Farrant & AP

China's first full-space "robot cluster" is designed to support staff, speed up important train inspections, and make metro travel safer.

During one of the busiest travel periods of the year, commuters in Hefei, a city in east China's Anhui Province, were greeted not just by trains, but by robots.

Humanoid assistants, four-legged inspection dogs and drones patrolled metro stations and tunnels, helping passengers with directions, checking infrastructure, and scanning for faults.

It’s China’s first full-space "robot cluster" for rail transit, deployed during the Spring Festival travel rush.

“The full-space robot intelligent dispatching platform mainly operates in three areas: intelligent service within stations, vehicle inspection, and tunnel inspection,” said Dai Rong, the director of the Science and Education Center at Hefei Rail Transit.

"We hope it can assist human staff, improve our work efficiency, and reduce work intensity to empower Hefei's rail transit operations through technology."

Robots on the platform and under the trains

At several stations, humanoid robots guided passengers with directions and transfer inquiries, while robot dogs patrolled platforms for safety.

Underneath the trains, autonomous inspection robots navigated 1.5-metre-deep maintenance trenches, scanning wheels, bolts and other components with high-definition cameras and ultrasonic sensors.

Any cracks or loose parts were flagged immediately, speeding up checks that would normally take hours.

A humanoid robot and a drone assisting at a metro station in Hefei, China. Credit: CNS

"In the future, we aim to build this platform using large AI model technologies to provide these robot dogs and drones with a better central 'brain' for control," said Luo Lei, a senior supervisor at the Science and Education Center. "This will enable them to identify and respond to various abnormal situations more accurately."

The technology does raise questions: just how much can these machines do, will human input still be needed in the future, and should other cities be paying attention?

While the Hefei system is designed to assist humans rather than replace them, its capabilities hint at the growing role AI and robotics could play in public transport, infrastructure monitoring, and urban safety in the years to come.

 

‘Save this country’: Robert De Niro's passionate speech prior to Trump's State of the Union address

‘Save this country’: Robert De Niro delivers passionate anti-Trump speech prior to State of the Union address
Copyright AP Photo

By David Mouriquand

The Oscar winning actor shared his prediction that Donald Trump “will never leave” office and that it is up to Americans to “get rid of him”.

As Donald Trump delivered his nearly two-hour State of the Union address, which came as his poll numbers on the economy plummet ahead of the 2026 midterms, Oscar winner Robert De Niro gave an emotional speech in which he urged people to “resist” Trump and his administration.

De Niro, a longtime and fervent Trump critic, appeared on MS NOW to speak about the current US president, sharing his prediction that Trump “will never leave” office and that it is up to Americans to “get rid of him”.

“He will never leave. We have to make him leave,” said the actor. “He jokes now about nationalizing the elections. He’s not joking. We’ve seen enough already.”

When asked whether he thinks that Trump will leave in three years, as per the Constitution, which stipulates that no individual can be elected to more than two terms as president, the 82-year-old screen legend replied: “He ain’t leaving. No way. Let’s not kid ourselves. He will not leave. It’s up to us to get rid of him.”

He continued: “The story is our country, and Trump is destroying it, and who knows what his reasons are, but it’s sick, it’s fucked up. We have to save this country.”

The actor said, his voice cracking: “All I know is people have to resist, resist, resist. There’s no easy way. It’s not going to come to you easy. You know, there’s a time when you know in your own life and your own survival, you better do this. You better jump and run through the fire because if you don’t run through the fire, you’re not getting out, and that’s what we have to do.”

De Niro has previously referred to Trump as “sadistic” and a “clown,” while the president has repeatedly blasted De Niro, stating that he “suffers from an incurable case of Trump Derangement Syndrome” - the oft-trotted-out pseudo-scientific pejorative, a piece of Orwellian Newspeak, weaponised by those who want to silence critics of Trump’s actions and policy positions.

Last October, De Niro encouraged the country to “keep fighting” during the No Kings protests, saying: “There’s no other way to face a bully. You have to face him and fight it out”.

Trump at the 2026 State of the Union address AP Photo

Prior to Donald Trump’s State of the Union address, a Reuters/Ipsos poll found that six in 10 Americans think that Trump has become erratic as he ages, with 61 per cent of respondents (89 per cent of Democrats, 30 per cent of Republicans and 64 per cent of independents) saying they would describe Trump as having "become erratic with age."

The poll also showed that most Americans think the US’ political leadership is too old, with 79 per cent of respondents agreeing with the statement that "elected officials in Washington, D.C., are too old to represent most Americans."

The average age in the US Senate is 64, and 58 in the US House of Representatives.

White House spokesman Davis Ingle said the poll results were examples of "fake and desperate narratives."

However, according to another recent poll by Washington Post/ABC News/Ipsos, only 39 per cent of Americans approve of the way Trump is handling the job of president.

Mexico travel: Your rights during civil unrest explained after cartel boss killing sparked violence

National Guards escort an ambulance to the General Prosecutor's headquarters in Mexico City on Sunday 22 February
Copyright 2026 The Associated Press. All rights reserved.


By Dianne Apen-Sadler

While the situation in Puerto Vallarta and Guadalajara appears to have returned to normal, many travellers have faced disruption over the past few days. Here’s what you need to know about your rights in cases of civil unrest.

The killing of a cartel boss sparked violence in parts of Mexico earlier this week, causing travel disruption in popular tourist destinations including Puerto Vallarta.

While the situation has now returned to normal, the cancellation of all international flights from the resort city on Sunday 22 February meant that many tourists were left stranded. Others may still be looking to shorten their stay despite travel alerts being lifted.

To understand your rights, we spoke to InsureMyTrip CEO Suzanne Morrow about the situation in Mexico.

The importance of how travel insurance classifies an event

Broadly speaking, travel insurance is meant to cover expenses incurred during unexpected events, like if you fall sick while abroad or if bad weather ruins your trip.

Your policy will list inclusions and exclusions in the fine print, and so your first port of call when something goes wrong when abroad should be your insurance provider.

According to Morrow, the situation in Mexico most likely falls under civil unrest, meaning “public disturbances, riots, rebellion against a government or civil authority involving acts of violence, damage, or injury to others”.

“It is not automatically considered terrorism unless the US government formally declares it an act of terrorism under specific definitions outlined in insurance policies,” Morrow told Euronews.

“That distinction matters because coverage can differ depending on how an event is officially classified.

“Most comprehensive travel insurance policies treat civil unrest differently than terrorism, and in many cases, civil unrest is not a covered reason to cancel a trip before you leave.”

Leaving a trip early due to civil unrest

Unfortunately, standard policies do not typically cover your choice to leave early out of concern or fear alone (but again, you will need to check your individual travel insurance agreement).

“Trip interruption benefits require a covered reason (which will be defined in a policy),” Morrow added.

“Civil unrest alone may not qualify unless it directly prevents you from reaching your destination or causes you to lose the majority of your trip. This is where coverage varies significantly by plan.”

Having said that, Morrow notes that you may be covered if you paid for a plan with an Interruption for Any Reason benefit. The optional benefit can be used to reimburse a percentage – usually up to 75% – of your unused prepaid non-refundable trip cost.

Interruption for Any Reason benefits are time sensitive, and will need to have been purchased soon after your initial trip payment or deposit. There may also be rules around how long after departure you have to wait to use this benefit.

Missing flights due to civil unrest: Are you covered?

“If flights are delayed, cancelled, or grounded unexpectedly causing you to lose a portion of your trip, certain comprehensive travel insurance plans may offer coverage,” Morrow said.

Benefits that may be in your policy include travel delay, which covers meals, hotel stays and transportation; political or security evacuation; and emergency assistance benefits.

These all depend on your specific policy, when you bought your insurance, how the event is officially classified, and whether the issue directly impacts a trip.

According to Morrow, in this situation your first call should always be the airline as you may also be entitled to a refund depending on Department of Transportation rules.

The current situation in Mexico

The US Embassy in Mexico stopped urging its citizens to shelter in place on Tuesday 24 February, while in the UK, the Foreign, Commonwealth and Development Office has said that “services appear to be resuming operations, although you should continue to follow local security advice”.

Operator Pacific Airports Group has said that Guadalajara Airport is operating 96% of its scheduled flights, while Puerto Vallarta Airport is operating at 95%.

Many airlines, including United, American Airlines and Delta, are waiving change fees, although the dates covered vary from carrier to carrier.

It should be noted that even prior to the events of 22 February, the US had Level 4 “do not travel” advisories for several states in Mexico, while the UK has advised against all but essential travel in some areas.

It is vital you check these alerts prior to booking a trip as they may render your travel insurance invalid.

“Insurance companies look at whether something is considered a ‘known peril’. If unrest or advisories were already in place before the policy was bought, that can affect eligibility for cancellation coverage,” Morrow said.

“That said, if you’re already in Mexico and the situation escalates unexpectedly, some benefits may apply, but others may specifically be excluded if there were level 4 warnings prior to your arrival into the country.”

French actors slam 'systematic plundering' of voices and images by AI tools

Ahead of the 51st César Awards on Thursday, France's biggest film awards, 4,000 French actors and filmmakers have condemned "systematic plundering" of their work by artificial intelligence tools, which reproduce their voices or images.



Issued on: 25/02/2026 - RFI

French actors and filmmakers have condemned the use of their voices and images by AI tools. © Vertigo3d / iStock / Getty Images

"We are facing a profound change in our profession since the advent of artificial intelligence [AI]. This tool, which is extraordinarily valuable for certain professions, is also a devouring hydra for artists like us," wrote the signatories in a text published by Le Parisien newspaper on Sunday.

They include actors Swann Arlaud, Gérard Jugnot, Karin Viard, Franck Dubosc, José Garcia, Léa Drucker and Élodie Bouchez.

"The cloning of actors' voices without their permission is becoming commonplace," the open letter continues, adding that "not a week goes by without an artist warning about the brutal competition that AI is putting on their work".

"Sometimes hundreds of less established artists, who often cannot afford to turn down a contract, surrender their rights to AI, despite the risks to their image and their future."

"This systematic plundering is not a fantasy, it is happening here and now. It is unbearable and it is happening right before our eyes," they warn, calling for a "legal framework" so that "AI can coexist with the work of artists and respect for copyright and related rights".

Dubbing industry under threat


There has been a surge in initiatives within the profession over the last few months in response to the threat posed by AI to the industry, and the flood of content that reproduces artists and their voices almost perfectly.

At the end of January, eight French actors specialising in dubbing sent formal notices to two American companies that had cloned their voices without their consent.

Actors recently took to the streets in Paris and launched a collective called Touche pas ma VF ("hands off my VF" – for Version française).

It's calling for "dubbing created by humans for humans", and has launched a petition that has garnered nearly 250,000 signatures.

Europe's voice actors call for tougher regulation of AI technology

In early 2025, the dubbing world was shocked by an excerpt from the Sylvester Stallone film Armor in which the voice of Alain Dorval, the actor who had long dubbed Stallone, was modelled by AI.

Not only was the result deemed poor by the industry, but the actor had died in February 2024, raising ethical questions.

"AI is taking away artists' jobs. Can we do without artists in society?" actress Brigitte Lecordier told RFI at the time. "AI does not create. It reproduces a mediocre version of what has already been done."

The debate extends beyond France. Last week, Chinese software Seedance 2.0 was accused by major Hollywood studios of "massive" copyright infringements after releasing an AI-generated video showing a fight between Tom Cruise and Brad Pitt.

(with AFP)


What Is Shaping Artificial Intelligence Governance Policies In Southeast Asia? – Analysis



February 25, 2026 
 ISEAS - Yusof Ishak Institute
By Kristina Fong


The rapid proliferation of AI and the greater awareness of its capabilities and risks over the past few years have catalysed attempts to make AI development and deployment safer. In line with this, countries in Southeast Asia, as well as ASEAN as a whole, have made significant efforts to formulate guardrails around AI. For ASEAN, the release of the ASEAN Guide on AI Ethics and Governance[1] (ASEAN AI Guide) in February 2024, followed by a supplementary guide specifically for Generative AI[2] a year later, provided a set of holistic frameworks for the responsible design, deployment and usage of AI systems.

Besides ASEAN-wide guidance, individual ASEAN Member States (AMS) have also taken steps to strengthen their own safeguards to help ensure safe and ethical AI development at a measured pace. Although the AMS have taken different approaches to AI governance in their respective jurisdictions, a common pattern is emerging: umbrella soft-law guidance supported by baseline hard-law regulations. Pertinent international governance benchmarks are also laying the foundation for implementation, such as the risk-based approach of the EU AI Act.
Getting Your Ducks in a Row

Prior to the release of the ASEAN AI Guide, we conducted an assessment of AI policies in the AMS, and also explored whether the regulatory building blocks required to better manage AI developments, such as Personal Data Protection (PDP) and Cybersecurity legislation, were in place.[3] Before the ASEAN AI Guide, six out of ten AMS had some form of AI strategy in place, while four – Brunei, Cambodia, Lao PDR and Myanmar – had not yet crafted such policies. By the end of 2025, however, Brunei had released the Artificial Intelligence (AI) Governance and Ethics Guide for Brunei Darussalam (2025), under the Authority for Info-Communications Technology Industry, and gazetted its Personal Data Protection Order 2025. Notably, Brunei cited the ASEAN AI Guide as a key piece of guidance in the formation of its own AI governance strategy.[4]

Cambodia, Lao PDR and Myanmar, meanwhile, continue to develop their own national AI strategies, guided in the interim by their respective digital economy national strategies, such as the Cambodia Digital Economy and Society Policy Framework (2021-2035) and the Laos Digital Economy Strategy (2021-2030). These documents outline AI-related initiatives, such as studying and fostering the use of AI technologies, albeit without much detail. For Timor-Leste, ASEAN’s newest member state, Timor Digital 2032 (2023-2032), under TIC[5] Timor, is the main strategic document for the digital economy. However, it does not explicitly mention AI but rather encompasses digital and ICT developments to facilitate economic, e-government, health, education and agriculture initiatives, which could include AI technologies. That said, prior to embarking on any AI-specific plans, Timor-Leste will need to bolster its overall digital development; it scores low on digital availability, access and adoption. As of 2023, UNCTAD estimated that only 34 per cent of the population used the internet, with the next lowest in ASEAN being Myanmar at 58.5 per cent.


However, the more pressing issue is that Cambodia, Myanmar and Timor-Leste have yet to implement Personal Data Protection Laws. Moreover, cybersecurity enforcement also remains weak. Lao PDR relies on the Law on Electronic Data Protection No. 25/NA,[6] which pertains to data in digital form. The absence of an overarching national strategy on AI may put these countries in less competitive positions compared to their ASEAN peers, and without robust implementation of baseline regulations on AI, they are positioned more precariously amid the rapidity of technological developments.
The Evolving Role of Data-Protection Authorities (DPAs)

The role of DPAs has become more important in the context of the evolving model of AI development management and enforcement. In particular, the proposal to appoint DPAs, as the best-placed institutions, to act as central bodies coordinating AI policy compliance and enforcement is gaining traction. Leading the way is the European Union (EU) with its EU AI Act: in seeking the most effective enforcement mechanism, the European Data Protection Board (EDPB) has encouraged the appointment of national DPAs as Market Surveillance Authorities (MSAs).[7] G7 privacy officials support the idea, citing DPAs’ familiarity with developing guidelines and policy documents, their experience in assessing the impacts of AI technologies on stakeholders, and the actions they have taken following governance breaches at the data source.[8]

In Asia, DPAs have been central to shaping key policy documents. In South Korea, the Personal Information Protection Commission (PIPC) released generative AI (genAI) guidelines in August 2025. In Singapore, the Model AI Governance Framework, the updated version that incorporates genAI, and subsequently the AI Verify tool – the world’s first AI governance testing framework and toolkit – were jointly produced by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC).[9] These are prime examples of how DPAs have shaped public policy. However, DPA capabilities across Southeast Asia are very diverse. As such, the use of DPAs as central market surveillance authorities may be viable for some countries, but not others. For the latter, existing DPA capacities will need to be bolstered to become effective regulatory bodies, which may require human capital enhancements and capital investment before a DPA can be considered an ideal option as an MSA.

For countries with established DPA capabilities that are keen to pursue this model of AI governance enforcement, capacity building will need to focus on the additional skills and knowledge required to monitor the entire AI system supply chain, not only data governance at the input level. Both the EU and the G7 have recognised this as integral to making the model work. The other integral element cited is a seamless coordination mechanism between the MSA and the other regulatory bodies tasked with supervising the AI ecosystem, such as those responsible for competition matters and consumer protection. In Southeast Asia, countries with more established AI ecosystems (Singapore, Malaysia and Thailand)[10] and regulatory coordination mechanisms already in place would be better positioned to implement this model.
Cherry Picking What’s Best for ASEAN

The AI governance risk frameworks around the region have largely been voluntary in nature – such as those in Brunei, Malaysia[11] and Singapore – but some countries are moving ahead with the establishment of more formal AI laws. In December 2025, Vietnam became the first country in Southeast Asia to make a concrete move, promulgating an AI Law that will take effect in a phased approach from March 2026 over four years. The AI Law was passed alongside updated laws on intellectual property (IP) and cybersecurity, which include revisions pertaining to AI-specific incidents.[12] Detailed enforcement mechanisms and other specifics will be formalised during the implementation phases, with the establishment of a regulatory infrastructure and the appointment of regulatory authorities set for 2026.[13]


Whilst Vietnam has referenced the risk-based approach (emphasising safeguards in the risk-return trade-off) of the EU and the innovation-led approaches (emphasising technological development in the risk-return trade-off) taken by South Korea and Japan in developing its own AI legislation,[14] it is important to note that international guidance has not been taken merely 'off the shelf' but customised to reflect domestic priorities and institutional capacities. Vietnam has adopted a risk-based approach to classifying risks, much like the EU AI Act, but with fewer risk categories. In particular, Vietnam's risk classifications apply only to AI systems deemed lawful, whereas the EU AI Act includes prohibited AI systems in its formal risk framework as Unacceptable Risks. Moreover, the EU's classification framework focuses on the use of the AI system and the risks therein,[15] whereas Vietnam's approach looks at risks from the angle of impact.[16]

Thailand is also targeting the creation of an AI Law. A drafting process initiated in 2023 went through a quiet period of two years with limited traction. The Electronic Transactions Development Agency (ETDA) later explained that the draft AI Law, which had originally used the EU AI Act as a template, needed to be refined to suit local circumstances,[17] much as in Vietnam's case.

In this draft Law, the risk-based approach helps articulate the compliance requirements for high-risk AI applications and systems. However, the draft proposes to leave it to sectoral bodies to determine which activities are high risk in their respective areas, on the grounds that sectoral bodies better understand these activities and can more accurately discern the level of risk they present to society. The stakes are real: misclassifying risk could stifle economic activity and create undue compliance costs for affected businesses. This sectoral approach has also been taken by Indonesia, which is proposing a soft AI Law or framework;[18] some of the sectors identified include finance, education and healthcare. For Thailand, the timeline for passing the law is still unclear, although ETDA is revising the consolidated Draft Principles after a public consultation process in 2025.[19] For Indonesia, the AI framework is expected to be signed off as Presidential regulations in early 2026.

The EU’s risk-based approach to AI legislation is so far the most widely adapted form of international guidance, although the innovation-centric angle preferred by South Korea and Japan has also influenced the approaches taken by the AMS. International guidance also matters for limiting the risk of digital fragmentation: international alignment supports digital integration and allows countries to leverage AI's inherently cross-border nature. Most of the principles underlying AI governance frameworks in the region draw on international frameworks such as the OECD AI Principles, the EU's Ethics Guidelines for Trustworthy AI, the US National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF), UNESCO's Recommendation on the Ethics of Artificial Intelligence, and ISO standards in this area. Thus, some regional alignment does occur on AI ethics and governance, and regional documents are fairly consistent across the board. The proposed Digital Economy Framework Agreement (DEFA), which is expected to include a chapter on emerging issues including AI, will also help streamline operational standards[20] and best practices, limiting the risk of fragmentation.
Geopolitical Pressures Remain

Although China has its own set of AI governance principles in the form of the Interim Measures for the Management of Generative Artificial Intelligence Services,[21] as well as a strategy to socialise its vision for global AI governance through the Global AI Governance Initiative (GAIGI),[22] these have not gained much traction compared to the international frameworks mentioned earlier. This could be due to differences in China's approach to governance and its defining principles,[23] or to the gravitation of countries in the region towards multi-stakeholder models, wary as they are of being caught up in geopolitical wrangling. Notably, America's AI Action Plan,[24] released in July 2025, includes a stated objective to 'Counter Chinese Influence in International Governance Bodies' – as clear a signal of the US stance on this matter as one could ask for.


That said, China takes a noticeably more aggressive position in the AI stack[25] (AI software and hardware infrastructure) in Southeast Asia.[26] Apart from physical foundations such as data centres, Chinese Large Language Models (LLMs) are open-source and more competitively priced than their closed-source market competitors, allowing more access and customisation for system deployers.[27] As of end-2025, Chinese models accounted for around 30 per cent of global usage share, a rapid rise from only 13 per cent at the start of 2025.[28] The influence of China in this space therefore cannot be overlooked, and the current trend may act as a catalyst for greater interest in China's AI governance in the future. Although the economic benefits may be the prevalent concern, any acceleration of this trend is bound to accentuate geopolitical complexities. In May 2025, Malaysia announced that the country's sovereign full-stack AI ecosystem utilised China's DeepSeek LLM.[29] Most recently, in October, Malaysia signed the Agreement on Reciprocal Trade (ART)[30] with the US. Specific terms in the ART discourage Malaysia from forging preferential economic cooperation with 'a country that jeopardises essential US interests', failing which US reciprocal tariffs on Malaysia would revert to 24 per cent (now negotiated down to 19 per cent). Thus, Southeast Asian countries with strong ties to both superpowers will once again face difficult choices.

Southeast Asian countries remain at different levels of AI policy implementation and enforcement. Though approaches vary across countries, there is by and large a shared basis and understanding of AI governance principles. As the governance ecosystem continues to evolve, international best practices aligned with ASEAN objectives will be adopted and customised for local AMS conditions. However, the regulatory catch-up dynamic will be put to the test: technological developments will largely be operationalised before safeguarding frameworks can be institutionalised, and existing safeguards run a high risk of becoming outdated amid the rapid dynamism of this space.


For endnotes, please refer to the original pdf document.


Kristina Fong is Lead Researcher (Economic Affairs) of the ASEAN Studies Centre at ISEAS – Yusof Ishak Institute.

Source: This article was published by ISEAS – Yusof Ishak Institute.

ISEAS - Yusof Ishak Institute

The Institute of Southeast Asian Studies (ISEAS), an autonomous organization established by an Act of Parliament in 1968, was renamed ISEAS - Yusof Ishak Institute in August 2015. Its aims are: To be a leading research centre and think tank dedicated to the study of socio-political, security, and economic trends and developments in Southeast Asia and its wider geostrategic and economic environment. To stimulate research and debate within scholarly circles, enhance public awareness of the region, and facilitate the search for viable solutions to the varied problems confronting the region. To serve as a centre for international, regional and local scholars and other researchers to do research on the region and publish and publicize their findings. To achieve these aims, the Institute conducts a range of research programmes; holds conferences, workshops, lectures and seminars; publishes briefs, research journals and books; and generally provides a range of research support facilities, including a large library collection.
Louvre set for fresh start as leadership change follows string of scandals

After months of scandal and scrutiny, the Louvre is preparing for new leadership as France moves to restore confidence in its most visited museum following the resignation of the museum's president.


Issued on: 25/02/2026 - RFI

France's President Emmanuel Macron talks to Director of the Louvre Museum, Laurence des Cars, at the museum in Paris on 28 January 2025. Des Cars resigned from her position on Tuesday, 24 February 2026, in the wake of a high-profile jewellery heist in late 2025. AFP - BERTRAND GUAY

France’s world-famous Louvre museum is poised for a leadership shake-up after the resignation of its president, Laurence des Cars, on Tuesday, with a new chief expected to be appointed swiftly in a bid to restore confidence following months of turbulence.

Christophe Leribault – the current head of the Palace of Versailles – is widely expected to take over the role, according to a source within the French executive.

His appointment is due to be announced by the Council of Ministers, with a mandate focused on securing and modernising the institution, as well as delivering the ambitious “Louvre – New Renaissance” overhaul.

The move comes at a delicate moment for the world’s most visited museum, which has been grappling with a string of high-profile incidents that have exposed weaknesses but also prompted renewed momentum for reform.


Months of pressure

Des Cars formally stepped down after submitting her resignation to President Emmanuel Macron, who accepted it while praising what he described as a responsible decision at a time when the museum needs “calm” and a fresh push to carry out major security projects.

Her departure follows sustained pressure linked to an audacious October robbery in which French crown jewels worth around €88 million were stolen in broad daylight. The jewels have yet to be recovered, although four suspects remain in custody and investigations are ongoing.

Initially, Macron had declined an earlier offer by des Cars to resign shortly after the break-in. But as further issues emerged – including reports of systemic security failings, a ticket fraud scandal, and even a water leak in a gallery housing the Mona Lisa – the pressure became harder to withstand.

Parliamentary inquiries and audits have painted a sobering picture. Lawmakers have pointed to “systemic failures” after dozens of hearings, while France’s Court of Auditors criticised the museum for prioritising high-profile projects over essential security investment, despite earlier warnings dating back to 2017.

Des Cars herself acknowledged shortcomings in later interviews, conceding that structural weaknesses had remained and that concerns were justified. Even so, Macron thanked her for her commitment and recognised her expertise, underlining that her tenure was not without achievement.

Rebuilding confidence

Attention is now turning to what comes next – and to the challenge awaiting Leribault, should his appointment be confirmed. His mission is expected to centre on restoring trust, strengthening security infrastructure, and successfully delivering the Louvre’s long-term renovation strategy.

The museum, housed in a former royal palace and home to masterpieces such as Leonardo da Vinci’s Mona Lisa, welcomes around nine million visitors each year. Its global stature means that any disruption resonates far beyond France – making stability a top priority.

Despite the recent setbacks, there are signs of a reset already under way. Emergency measures have been introduced since the robbery, including upgrades to security systems, while multiple investigations – from the culture ministry, parliament and the Senate – are expected to produce detailed recommendations in the coming months.

(With newswires)