San Francisco (AFP) – Members of Congress on Thursday called on Meta chief Mark Zuckerberg to give them details regarding ads for opioids and other illicit drugs on the tech titan's platforms.
Issued on: 16/08/2024 - AFP
The Tech Transparency Project says blatant ads for OxyContin and other illegal drugs were found in Meta's ad library
© Handout / US Drug Enforcement Administration/AFP
A letter signed by 19 lawmakers pressed for details about such ads given disturbing reports by the Tech Transparency Project and the Wall Street Journal.
"Meta appears to have continued to shirk its social responsibility and defy its own community guidelines," the letter read.
"What is particularly egregious about this instance is that this was not user generated content on the dark web or on private social media pages, but rather they were advertisements approved and monetized by Meta."
The Tech Transparency Project in March reported finding more than 450 ads on Instagram and Facebook selling an array of illegal drugs.
Many of the ads "made no secret of their intentions," showing photos of prescription drug bottles or bricks of cocaine, and encouraging people to place orders, according to the non-profit research group.
The investigation involved searching Meta's Ad Library for terms including "OxyContin," "Vicodin," and "pure coke," TTP reported.
The lawmakers' letter asked Zuckerberg to respond by Sept. 6.
Questions included how many illicit drug ads Meta has run on its platforms, what it has done about them, and whether viewers were targeted for such ads based on personal health information.
Meta said it planned to respond to the letter.
"Drug dealers are criminals who work across platforms and communities, which is why we work with law enforcement to help combat this activity," a Meta spokesperson said in response to an AFP inquiry.
"Our systems are designed to proactively detect and enforce against violating content, and we reject hundreds of thousands of ads for violating our drug policies."
Meta continues to invest in improving its ability to catch illicit drug ads, the spokesperson added.
© 2024 AFP
Meta fends off AI-aided deception as US election nears
San Francisco (AFP) – Russia is putting generative artificial intelligence to work in online deception campaigns, but its efforts have been unsuccessful, according to a Meta security report released Thursday.
Issued on: 16/08/2024 - AFP
Meta says its focus on how accounts act has enabled it to expose deception campaigns on its platform © Tobias SCHWARZ / AFP
The parent company of Facebook and Instagram found that so far AI-powered tactics "provide only incremental productivity and content-generation gains" for bad actors and Meta has been able to disrupt deceptive influence operations.
Meta's efforts to combat "coordinated inauthentic behavior" on its platforms come as fears mount that generative AI will be used to trick or confuse people in elections in the United States and other countries.
Facebook has been accused for years of being used as a powerful platform for election disinformation.
Russian operatives used Facebook and other US-based social media to stir political tensions in the 2016 election won by Donald Trump.
Experts fear an unprecedented deluge of disinformation from bad actors on social networks because of the ease of using generative AI tools such as ChatGPT or the Dall-E image generator to make content on demand and in seconds.
AI has been used to create images and videos, and to translate or generate text along with crafting fake news stories or summaries, according to the report.
Russia remains the top source of "coordinated inauthentic behavior" using bogus Facebook and Instagram accounts, Meta security policy director David Agranovich told reporters.
Since Russia's invasion of Ukraine in 2022, those efforts have been concentrated on undermining Ukraine and its allies, according to the report.
As the US election approaches, Meta expects Russia-backed online deception campaigns to attack political candidates who support Ukraine.
Behavior-based
When Meta scouts for deception, it looks at how accounts act rather than the content they post.
Influence campaigns tend to span an array of online platforms, and Meta has noticed posts on X, formerly Twitter, used to make fabricated content seem more credible.
Meta shares its findings with X and other internet firms and says a coordinated defense is needed to thwart misinformation.
"As far as Twitter (X) is concerned, they are still going through a transition," Agranovich said when asked whether Meta sees X acting on deception tips.
"A lot of the people we've dealt with in the past there have moved on."
X has gutted trust and safety teams and scaled back content moderation efforts once used to tame misinformation, making it what researchers call a haven for disinformation.
False or misleading US election claims posted on X by Elon Musk have amassed nearly 1.2 billion views this year, a watchdog reported last week, highlighting the billionaire's potential influence on the highly polarized White House race.
Researchers have raised alarm that X is a hotbed of political misinformation.
They have also flagged that Musk, who purchased the platform in 2022 and is a vocal backer of Donald Trump, appears to be swaying voters by spreading falsehoods on his personal account.
"Elon Musk is abusing his privileged position as owner of a... politically influential social media platform to sow disinformation that generates discord and distrust," warned Imran Ahmed, CEO of the Center for Countering Digital Hate.
Musk recently faced a firehose of criticism for sharing with his followers an AI deepfake video featuring Trump's Democratic rival, Vice President Kamala Harris.
© 2024 AFP