Facebook’s ‘Supreme Court’ investigates deepfake nudes after Taylor Swift controversy
Matthew Field
Tue, 16 April 2024
Fake explicit images of Taylor Swift on X in January sparked concern over online misogyny and abuse - Allison Dinner/Shutterstock
Meta’s oversight board is investigating the spread of deepfake nude images of women, including celebrities, on Instagram and Facebook, weeks after explicit fake images of singer Taylor Swift went viral on social media.
The oversight board, which reviews moderation decisions and has been likened to Facebook’s “Supreme Court”, is examining two incidents in which images of naked women generated by artificial intelligence were reported on Meta’s platforms.
One incident involved a synthetic nude image of a public figure from India posted on Instagram. The second centred on an “AI-generated image of a nude woman with a man groping her breast”. The woman resembled an American celebrity, whom the board declined to identify when asked.
Although the board did not mention Ms Swift, the investigation comes after explicit fake pictures of the singer spread rapidly across Facebook, Instagram and X in January, prompting an outcry about online misogyny and abuse directed at women.
The controversy reached the White House, with press secretary Karine Jean-Pierre declaring the spread of deepfake nudes of the singer “very alarming”.
The oversight board said it would investigate the effectiveness of Meta’s enforcement practices, as well as “the nature and gravity of harms posed by deepfake pornography including how those harms affect women, especially women who are public figures”.
The deepfakes of Ms Swift also sparked concerns that AI tools will be misused at scale to create vast volumes of deepfake pornography. Fake images of celebrities have also been used to promote scams on social networks.
The oversight board said it was investigating “whether Meta’s policies and its enforcement practices are effective at addressing explicit AI-generated imagery”.
The picture from India was not reviewed within 48 hours and remained online until the case was appealed to the board, at which point Meta decided to take the picture down.
The deepfake of the US celebrity was removed under Meta’s bullying and harassment policy, specifically its rules on “derogatory sexualised photoshop or drawings”. The user who posted the image has since appealed against its removal.
Meta has a policy of labelling images uploaded to Instagram or Facebook, in some cases automatically, when it detects that they were made with AI.
Launched in 2020, Meta’s oversight board has reviewed thorny moderation decisions made by the technology giant, such as its decision to ban former US President Donald Trump.
The 22-person board features figures such as former Guardian editor Alan Rusbridger and Helle Thorning-Schmidt, the former prime minister of Denmark.
Some critics have raised questions over its impact: while its rulings on individual cases are binding, its wider policy recommendations are not.