Story by Reuters
FILE PHOTO: A TikTok logo is displayed on a smartphone in this illustration taken January 6, 2020. REUTERS/Dado Ruvic/Illustration/File Photo
KUALA LUMPUR (Reuters) -Malaysia's communications regulator will issue a warning to social media firms TikTok and Meta for allegedly blocking pro-Palestinian content on their platforms, its communications minister said on Thursday.
Meta has since said it was not deliberately suppressing voices on its Facebook platform after Communications Minister Fahmi Fadzil said the social media platforms had been accused of restricting content supporting the Palestinians.
FILE PHOTO: People are seen behind a logo of Meta Platforms, during a conference in Mumbai, India, September 20, 2023. REUTERS/Francis Mascarenhas/File Photo
"If this issue is ignored, I will not hesitate to take a very firm approach and stance," Fahmi said in a posting on the social media platform X, formerly known as Twitter.
Without elaborating, Fahmi said many parties had urged the government to take firm action against social media platforms for allegedly restricting pro-Palestinian content.
A spokesperson from Meta said there was "no truth" to the claim.
"Our policies are designed to keep people safe on our apps while giving everyone a voice," the spokesperson said in an email to Reuters on Thursday.
TikTok did not immediately respond to requests for comment.
Fahmi said Malaysians have a right to freedom of speech regarding the Palestinian cause, adding that this right would not be taken away.
He said he will meet TikTok representatives next week to discuss the matter.
The concerns come two weeks after Fahmi said TikTok had not complied fully with Malaysia's laws and had not done enough to curb defamatory or misleading content. TikTok in a response said it would take proactive measures to address the issues raised.
(Reporting by Danial Azhar, Rozanna Latiff and A. Ananthalakshmi; editing by Martin Petty and Jason Neely)
Editorial: States have a good case in suing Meta for preying on kids
2023/10/27
Meta corporate headquarters is seen in Menlo Park, California, on Nov. 9, 2022. Josh Edelson/AFP/Getty Images North America/TNS
Thirty-three state attorneys general left, right and center have filed suit in California federal court against internet giant Meta, and another nine are suing the company in their respective states. The central claim — that the company has built Instagram and Facebook features “to entice, engage, and ultimately ensnare youth and teens … and has repeatedly misled the public about the substantial dangers of its social media platforms” — must be taken seriously.
Endless, and endlessly circular, is the debate about whether social media is bad for teenagers, and if so, how bad. There’s no question that for many kids, platforms from Instagram to TikTok to X (formerly Twitter) to YouTube and beyond provide real value, exposing them to new ideas, facilitating healthy social connections and encouraging creativity.
However, there’s simultaneously little doubt that America’s young people are in the throes of a mental health crisis. According to the Centers for Disease Control and Prevention, feelings of persistent sadness and hopelessness, and suicidal thoughts and behaviors, rose sharply among young people — and that was before COVID’s disruptions kicked in. A growing body of research suggests that spending one’s formative years endlessly glued to a barrage of nastiness and unhealthy comparisons is psychologically unhealthy.
A May 2023 advisory by U.S. Surgeon General Vivek Murthy asserted that “95% of youth ages 13-17 report using a social media platform, with more than a third saying they use social media ‘almost constantly’” — and while 13 is generally the minimum legal age for using social media, “nearly 40% of children ages 8-12 use social media.”
And while acknowledging that “More research is needed to fully understand the impact of social media,” the surgeon general says that although it may have benefits for some young people, “there are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents.” Those risks include exposure to deeply disturbing content including live depictions of self-harm, low self-esteem from ceaseless body image scrutiny, and more.
“Excessive and problematic social media use, such as compulsive or uncontrollable use, has been linked to sleep problems, attention problems, and feelings of exclusion among adolescents,” he wrote.
Big research questions remain — which is to say, a moral panic that tries to cut young people off entirely would be terribly excessive — but the law is the law, and the AGs make a persuasive case that Meta has been whistling past it. Building on the Wall Street Journal’s Facebook Files investigation, the lawsuit cites a series of devious design choices specifically intended “to induce young users’ compulsive and extended use,” including “infinite scroll” and persistent alerts.
It’s against federal statute to deceive users about a product’s safety, which is precisely what the complaint alleges the company did. And it is against federal statute to collect young people’s personal data without their parents’ permission, another credible claim made in the legal papers.
As scrutiny has mounted in recent years, Meta and its many rivals have been adding safety features. The question for the courts is whether requiring a few basic but still overlooked ones, such as forcing alerts to be turned off by default and fixing data collection practices, will protect young people and bring the behemoths in line with the law.
___
© New York Daily News