Arturo Béjar says Mark Zuckerberg didn't respond to an email detailing concerns about harms facing kids.
Kris Holt
Contributing Reporter
Tue, November 7, 2023
Another Meta whistleblower has testified before Congress regarding safety issues on the company's platforms. On the same day that Frances Haugen told Congress in 2021 how Meta could fix some of its safety problems, Arturo Béjar, a former director of engineering for Protect and Care at Facebook, sent CEO Mark Zuckerberg and other executives an email regarding the harms that young people may face while using the company's products.
Two years later, Béjar was the sole witness in a Senate Judiciary subcommittee hearing titled "Social Media and the Teen Mental Health Crisis." In his testimony, Béjar claimed he was subpoenaed earlier this year to testify regarding emails he had sent to Meta higher-ups. He said he realized that, in the time since he sent them, nothing had changed at the company.
"Meta continues to publicly misrepresent the level and frequency of harm that users, especially children, experience on the platform," Béjar told the Subcommittee on Privacy, Technology and the Law in prepared remarks. "And they have yet to establish a goal for actually reducing those harms and protecting children. It’s time that the public and parents understand the true level of harm posed by these 'products' and it’s time that young users have the tools to report and suppress online abuse."
Béjar was an engineering director at Meta between 2009 and 2015, during which time he was responsible for protecting Facebook users. He supported a team that worked on "bullying tools for teens, suicide prevention, child safety and other difficult moments that people go through," according to his LinkedIn profile.
He testified that he initially left Meta feeling "good that we had built numerous systems that made using our products easier and safer." However, he said that, since they were 14, his daughter and her friends "repeatedly faced unwanted sexual advances, misogyny and harassment" on Instagram. According to The Wall Street Journal, which first reported on Béjar's claims, he stated that Meta's systems typically ignored reports they made or responded to say that the harassment they faced didn't break the rules.
Those issues prompted him to return to Meta in 2019, where he worked with Instagram’s well-being team. "It was not a good experience. Almost all of the work that I and my colleagues had done during my earlier stint at Facebook through 2015 was gone," Béjar said in his testimony. "The tools we had built for teenagers to get support when they were getting bullied or harassed were no longer available to them. People at the company had little or no memory of the lessons we had learned earlier."
Béjar claimed that Instagram and internal research teams gathered data showing that younger teens dealt with "great distress and abuse." However, "senior management was externally reporting different data that grossly understated the frequency of harm experienced by users," he told senators.
In a 2021 email to Zuckerberg and other executives laying out some of his concerns, Béjar wrote that his then-16-year-old daughter uploaded a car-related post to Instagram only for a commenter to tell her to "get back to the kitchen." Béjar said his daughter found this upsetting. “At the same time the comment is far from being policy violating, and our tools of blocking or deleting mean that this person will go to other profiles and continue to spread misogyny," Béjar wrote. "I don’t think policy/reporting or having more content review are the solutions.”
Along with recounting his daughter's experiences with the app, Béjar said that 13 percent of Instagram users aged 13 to 15 indicated in user perception surveys that they had received unwanted sexual advances on the platform within the previous seven days. While former chief operating officer Sheryl Sandberg offered sympathy toward his daughter for her negative experiences and Instagram head Adam Mosseri asked to set up a meeting, according to Béjar, Zuckerberg never responded to the email.
"That was unusual," Béjar said in his testimony. "It might have happened, but I don’t recall Mark ever not responding to me previously in numerous communications, either by email or by asking for an in-person meeting."
Béjar told the Associated Press that Meta has to change its approach to moderating its platforms. This, according to Béjar, would require the company to place a greater onus on tackling harassment, unwanted sexual advances and other issues that don't necessarily break the company's existing rules.
He noted, for instance, that teens should be able to tell Instagram that they don't want to receive crude sexual messages, even if those don't violate the app's current policies. Béjar claimed it would be easy for Meta to implement a feature through which teens could flag sexual advances made to them. "I believe that the reason that they're not doing this is because there's no transparency about the harms that teenagers are experiencing on Instagram," he told the BBC.
Béjar laid out several other steps Meta could take to reduce the harm users face on its platforms, measures that "do not require significant investments by the platforms in people to review content or in technical infrastructure." He added that he believes adopting such measures (which primarily focus on improving safety tools and getting more feedback from users who have experienced harm) would not severely impact the revenues of Meta or other companies that adopt them. "These reforms are not designed to punish companies, but to help teenagers," he told the subcommittee. "And over time, they will create a safer environment."
"My experience, after sending that email and seeing what happened afterwards, is that they knew, there were things they could do about it, they chose not to do them and we cannot trust them with our children," Béjar said during the hearing. "It's time for Congress to act. The evidence, I believe, is overwhelming."
“Every day countless people inside and outside of Meta are working on how to help keep young people safe online," a Meta spokesperson told Engadget in a statement. "The issues raised here regarding user perception surveys highlight one part of this effort, and surveys like these have led us to create features like anonymous notifications of potentially hurtful content and comment warnings. Working with parents and experts, we have also introduced over 30 tools to support teens and their families in having safe, positive experiences online. All of this work continues.”
Béjar hopes his testimony will help spur Congress to “pass the legislation that they've been working on” regarding the online safety of younger users. Two years ago, Haugen disclosed internal Facebook research indicating that Instagram was "harmful for a sizable percentage of teens." Growing scrutiny led Meta to halt work on a version of Instagram for kids.
Since Haugen's testimony, Congress has made some efforts to tackle online safety issues for kids, but those have faltered. The Kids Online Safety Act (KOSA) twice advanced from a Senate committee (in the previous Congress and earlier this year), but it hasn't reached a floor vote and there's no companion bill in the House. Among other things, the bill seeks to give kids under 16 the ability to switch off "addictive" features and algorithm-based recommendations, as well as more protections for their data. Similar bills have stalled in Congress.
Last month, attorneys general from 41 states and the District of Columbia sued Meta over alleged harms it caused to young users. “Meta designed and deployed harmful and psychologically manipulative product features to induce young users’ compulsive and extended Platform use, while falsely assuring the public that its features were safe and suitable for young users," according to the lawsuit. Béjar said he consulted with the attorneys general and provided them with documents to help their case.
"I'm very hopeful that your testimony, added to the lawsuit that's been brought by state attorneys general across the country ... added to the interest that I think is evidenced by the turnout of our subcommitee today, will enable us to get the Kids Online Safety Act across the finish line," subcommittee chair Sen. Richard Blumenthal (D-CT) told Béjar. Blumenthal, one of KOSA's original sponsors, expressed hope that other legislation "that can finally break the straitjacket that Big Tech has imposed on us" will be enacted into law.
Over the last few years and amid the rise of TikTok, Meta has once again been focusing on bringing younger users into its ecosystem, with Zuckerberg stating in 2021 (just a couple of weeks after Haugen's testimony) that the company would refocus its “teams to make serving young adults their North Star rather than optimizing for the larger number of older people.” Recently, the company lowered the minimum age for using its Meta Quest VR headsets to 10 through the use of parent-controlled accounts.
EU digital chief urges TikTok, X to increase clean-up efforts
Reuters
Tue, November 7, 2023
BRUSSELS (Reuters) - Video sharing app TikTok and social media platform X both need to step up their efforts to counter illegal hate speech, European Commission Vice President Vera Jourova said after meetings with their executives on Tuesday.
Jourova, the EU commissioner responsible for the digital economy, met TikTok Chief Executive Shou Chew and X's head of global affairs, Nick Pickles, as the European Union investigates Big Tech's efforts to remove harmful content.
The tech giants have faced mounting scrutiny in the past month, with a surge in harmful content and disinformation in the wake of the attack on Israel on Oct. 7 by Hamas, which the EU considers to be a terrorist organisation.
The EU is also looking to prevent disinformation influencing the EU parliament election in June 2024.
Under the European Union's Digital Services Act, which entered into force a year ago, very large tech platforms and search engines must do more to tackle harmful and illegal content or risk fines.
TikTok has said artificial intelligence and more than 6,000 moderators have removed millions of posts since the Hamas attack, and that it has a large team specially dedicated to removing violent content linked to children.
Jourova said on X and via her spokesperson that she was pleased by some improvements and urged TikTok to continue stepping up its work against illegal and harmful content and child abuse.
For X, Jourova said it had insufficient staff speaking some EU languages to counter disinformation, and she expressed concern about reports of large volumes of violent and illegal content. According to Jourova's spokesperson, X's Pickles said violent content had flourished online after the Hamas attack, not only on X.
Jourova, who has previously met Meta and YouTube executives, said X's language coverage, which is lower than that of other large platforms, made her worried about its preparedness for EU elections.
(Reporting by Philip Blenkinsop and Foo Yun Chee; editing by Jonathan Oatis)