World weighs laws to rein in mighty algorithms
US whistleblower and former Facebook engineer Frances Haugen says people deserve to know more about how the content they see is shaped by the jumble of data that gets fed into the social media machine -- or how algorithms work (AFP/JOEL SAGET)
Katy LEE
Fri, November 19, 2021
From the YouTube videos we're recommended to deciding who gets a job, algorithms wield an ever-growing influence over our lives -- and policy-makers worldwide want to rein them in.
While China is worried about delivery app algorithms that encourage their drivers to speed, US lawmakers are grappling with social media recommendation systems that have sent some users down dangerous rabbit-holes.
"Algorithms can be useful, of course, but many people simply aren't aware of just how much their experience on these platforms is being manipulated," John Thune, one of numerous US senators proposing new social media legislation, wrote in a CNN op-ed.
Facebook has faced fierce criticism after a whistleblower revealed that executives knew the site's algorithm systematically promoted inflammatory posts in people's newsfeeds, fuelling division and unrest from India to Ethiopia.
Frances Haugen, the ex-Facebook engineer behind the revelations, believes people deserve to know more about how the content they see is shaped by the jumble of data that gets fed into the social media machine.
"I think if we just say, 'we are going to regulate algorithms', that is so amorphous," Haugen told AFP in an interview last week.
"I think it's more powerful to say, 'Hey Facebook, you have a lot more transparency than we do'," and force the company to reveal more about how its systems work, she said.
- More boring social media? -
Campaigners and lawmakers may agree that tech giants' algorithms need more public oversight, but how to achieve that is a different matter.
"There are some really hard unanswered questions," said Daphne Keller, director of platform regulation at the Stanford Cyber Policy Center.
In the European Union, where lawmakers are debating two vast pieces of tech legislation, "some proposals say algorithms should prioritise authoritative sources of information, and others say they should prioritise diverse sources", Keller noted.
"How do you reconcile those two goals?"
The path forward is just as unclear in the US, where dozens of legal amendments have been proposed by lawmakers torn over what exactly it is about social media that needs fixing.
"On the left, people don't like all the harmful stuff like hate speech and misinformation; on the right, people think that their free speech is being taken away," summarised Noah Giansiracusa, author of "How Algorithms Create and Prevent Fake News".
Politicians and academics have suggested various means of limiting the harmful side-effects of social media algorithms -- none without their complications.
Some suggest platforms like Facebook and Twitter could be made legally liable for what they publish, which would discourage them from amplifying posts that spread hate or misinformation.
But in the US, where most social media giants are based, Giansiracusa said this would quickly face legal challenges from critics charging that it violates the right to freedom of speech.
Alternatively, governments could restrict social networks' ability to personalise what people see in their feeds.
YouTube and Facebook have been accused of unintentionally radicalising some people in this way, feeding them post after post of conspiracy theory-laden content.
Social media companies could be obliged to simply show people posts in chronological order -- but that risks making scrolling down a feed more boring.
The algorithms would no longer be able to calculate what a user will probably find interesting -- a photo of a close friend getting married, for instance -- while downgrading tedious posts about what an acquaintance had for lunch.
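For readers who want to see the difference concretely, here is a minimal Python sketch contrasting a chronological feed with an engagement-ranked one. It is purely illustrative: the Post fields, the sample posts and the predicted_engagement score are invented assumptions, not any platform's actual ranking system.

```python
# Minimal sketch: chronological feed vs engagement-ranked feed.
# Illustrative only -- the fields and scores below are invented assumptions,
# not any platform's real ranking system.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int               # seconds since some epoch (newer = larger)
    predicted_engagement: float  # model's guess at how likely the user is to react

posts = [
    Post("close_friend", "We got married!", timestamp=1000, predicted_engagement=0.9),
    Post("acquaintance", "Had a sandwich for lunch", timestamp=2000, predicted_engagement=0.1),
    Post("news_page", "Inflammatory headline", timestamp=1500, predicted_engagement=0.8),
]

# Chronological ordering: newest first, no personalisation.
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# Engagement ranking: whatever the model predicts the user will react to most.
ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.author for p in chronological])  # ['acquaintance', 'news_page', 'close_friend']
print([p.author for p in ranked])         # ['close_friend', 'news_page', 'acquaintance']
```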
"There is no simple solution," Giansiracusa concluded.
- Garbage in, garbage out -
Beyond social media, the world's reliance on digital technology means algorithms increasingly affect real-world outcomes -- sometimes drastically.
China's cyberspace watchdog is mulling further regulation of tech firms' algorithms, not least after criticism of how food delivery apps like Meituan and Alibaba's Ele.me treat financially vulnerable gig workers.
Such apps have faced criticism for docking drivers' pay if they don't arrive fast enough, effectively encouraging reckless driving.
And studies have shown how artificial intelligence can prove racist or sexist, from resume-scanning tools that favour male candidates, to US risk assessment software that recommends white prisoners for parole more frequently than black counterparts.
Both are examples of a computing principle known as "garbage in, garbage out" -- the idea that algorithms can replicate human biases if they're fed data embedded with those biases.
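As a rough illustration of that principle, the short Python sketch below "trains" a toy resume screener on historical decisions that already favour one group, and the fitted scores simply pass the bias through. The data, feature names and scoring rule are fabricated for illustration and do not correspond to any real hiring tool.

```python
# Toy "garbage in, garbage out" example: a screener fitted to biased
# historical hiring decisions reproduces the bias. All data here is invented.
from collections import Counter

# Historical decisions that already embed a bias: (feature, was_hired).
history = [
    ("attended_mens_college", True), ("attended_mens_college", True),
    ("attended_mens_college", True), ("attended_womens_college", False),
    ("attended_womens_college", False), ("attended_womens_college", True),
]

# "Training": estimate the hire rate per feature -- the bias passes straight through.
hires = Counter(f for f, hired in history if hired)
totals = Counter(f for f, _ in history)
hire_rate = {f: hires[f] / totals[f] for f in totals}

def score(candidate_feature: str) -> float:
    """Score a new resume by the historical hire rate for its feature."""
    return hire_rate.get(candidate_feature, 0.0)

print(score("attended_mens_college"))    # 1.0  -- favoured, because history favoured it
print(score("attended_womens_college"))  # 0.33 -- penalised for the same reason
```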
Regulators are increasingly seeking ways of preventing these discriminatory outcomes, with the US Federal Trade Commission signalling it will penalise companies found to be selling biased algorithms.
"How algorithms shape our newsfeed is important," Keller said. "But when algorithms send people to jail or deny them employment -- that doesn't get enough attention."