It questioned language models similar to the ones used in Google’s Search
By Kim Lyons, The Verge, Dec 5, 2020
A paper co-authored by former Google AI ethicist Timnit Gebru raised some potentially thorny questions for Google about whether AI language models may be too big, and whether tech companies are doing enough to reduce potential risks, according to MIT Technology Review. The paper also questioned the environmental costs and inherent biases in large language models.
Google’s AI team created such a language model, BERT, in 2018, and it was so successful that the company incorporated BERT into its search engine. Search is a highly lucrative segment of Google’s business; in the third quarter of this year alone, it brought in revenue of $26.3 billion. “This year, including this quarter, showed how valuable Google’s founding product — search — has been to people,” CEO Sundar Pichai said on a call with investors in October.
Gebru and her team submitted their paper, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” to a research conference. She said in a series of tweets on Wednesday that following an internal review, she was asked to retract the paper or remove Google employees’ names from it. She says she gave Google conditions for taking her name off the paper, and that if the company couldn’t meet them, they could “work on a last date.” Gebru says she then received an email from Google informing her that it was “accepting her resignation effective immediately.”
The head of Google AI, Jeff Dean, wrote in an email to employees that the paper “didn’t meet our bar for publication.” He wrote that one of Gebru’s conditions for continuing to work at Google was for the company to tell her who had reviewed the paper and their specific feedback, which it declined to do. “Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google,” Dean wrote.
In his email, Dean wrote that the paper “ignored too much relevant research,” a claim that the paper’s co-author Emily M. Bender, a professor of computational linguistics at the University of Washington, disputed. Bender told MIT Technology Review that the paper, which had six collaborators, was “the sort of work that no individual or even pair of authors can pull off,” noting it had a citation list of 128 references.
Gebru is known for her work on algorithmic bias, especially in facial recognition technology. In 2018, she co-authored a paper with Joy Buolamwini showing that error rates for identifying darker-skinned people were much higher than error rates for identifying lighter-skinned people, since the datasets used to train the algorithms were overwhelmingly composed of lighter-skinned faces.
Gebru told Wired in an interview published Thursday that she felt she was being censored. “You’re not going to have papers that make the company happy all the time and don’t point out problems,” she said. “That’s antithetical to what it means to be that kind of researcher.”
Since news of her termination became public, thousands of supporters, including more than 1,500 Google employees, have signed a letter of protest. “We, the undersigned, stand in solidarity with Dr. Timnit Gebru, who was terminated from her position as Staff Research Scientist and Co-Lead of Ethical Artificial Intelligence (AI) team at Google, following unprecedented research censorship,” reads the petition, titled “Standing with Dr. Timnit Gebru.”
“We call on Google Research to strengthen its commitment to research integrity and to unequivocally commit to supporting research that honors the commitments made in Google’s AI Principles.”
The petitioners are demanding that Dean and others “who were involved with the decision to censor Dr. Gebru’s paper meet with the Ethical AI team to explain the process by which the paper was unilaterally rejected by leadership.”
Google did not immediately respond to a request for comment on Saturday.