
Meta CEO Mark Zuckerberg and Spotify Criticize EU AI Regulations

By Dimitra Gkatzelaki
September 1, 2024
Meta CEO Mark Zuckerberg standing against a blue background with the word “privacy” written on it in white font. Zuckerberg and Spotify CEO Daniel Ek issued a joint statement on Meta’s website criticizing EU AI regulations.
Credit: Anthony Quintano / Flickr CC BY 2.0

On Friday, August 23rd, the CEOs of Meta and Spotify, Mark Zuckerberg and Daniel Ek, issued a joint statement on Meta’s website criticizing the EU’s strict AI regulations and expressing growing concern over them.

Specifically, Zuckerberg and Ek spoke about how, in today’s world, access to the latest technologies remains largely unequal across different regions. To bridge this gap and “democratize” tech access, they argue that open-source AI is essential—referring to models where the weights are publicly shared under a permissive license.

They believe that making this technology accessible would allow startups with limited resources to compete on an equal footing with larger companies. This, in turn, would accelerate innovation and drive progress not only in technology but also in science and society.

However, Europe has imposed strict regulations intended to prevent the uncontrolled use of open-source AI. In their joint statement, Zuckerberg and Ek push back against these rules, maintaining that Europe should ease its stance or risk falling behind in tech innovation.
Zuckerberg and Ek criticize European AI regulations

In their statement, Zuckerberg and Ek openly criticized Europe’s tight AI regulations, arguing that its “fragmented regulatory structure, riddled with inconsistent implementation” stifles not only AI innovation but also economic growth.

They view these regulations on open-source AI as “pre-emptive” measures aimed at “theoretical harms” in emerging technologies, measures they argue could hold Europe back in the tech sector and beyond.

Though they speak of innovation and progress, their statement focuses heavily on economic benefits, suggesting that Europe stands to gain “big rewards” from AI. As major players in the tech industry, the two companies evidently have much to gain as well, yet Europe’s AI regulations are holding them back.

EU privacy regulators told Meta to delay user data AI training

Meta CEO Zuckerberg and Spotify CEO Ek also criticized how the EU’s General Data Protection Regulation (GDPR) is being applied, particularly in Meta’s case. Citing the GDPR, EU privacy regulators ordered Meta to pause training its AI models on publicly shared content from Facebook and Instagram users. Zuckerberg claims Meta has not violated any laws and that regulators are simply uncertain about how to move forward.

The delay mainly affects Meta’s Llama AI model and its upcoming multimodal version, which can interpret images. The CEOs warned this could leave European citizens with “AI built for someone else.”

Meta’s earlier plans to use Facebook and Instagram posts to train AI

In May 2024, Meta announced that, starting June 24th, it would update its privacy policy to permit the use of public posts and photos from Facebook and Instagram for AI training. Meta argued that the change was compatible with the GDPR, which allows data use based on a “legitimate interest,” in this case improving Meta’s AI.

While users in the EU and UK could opt out through a form, those elsewhere with public profiles had no option to prevent their data from being used in this way.

Digital rights groups quickly condemned Meta’s AI training plans. In early June, the European Center for Digital Rights (Noyb) filed eleven complaints across Europe to block the initiative. Co-founder Max Schrems argued that by broadly using any data for any AI technology, Meta has “clearly left almost the entire GDPR framework. We counted violations of at least ten Articles of the law.”

Although Meta has been told to delay training on user data, the situation is far from settled, and for now the outcome remains uncertain.


