Friday, December 18, 2020

   Left Foot Forward

Revealed: A handful of billionaires control a staggering share of UK media

The data shows that the rise in digital media is only marginally counterbalancing the fact that the overwhelming majority of UK media is controlled by a handful of owners.

Press

Three billionaire families control nearly 70% of the national newspaper industry, the latest estimate has shown. An analysis by the Press Gazette has revealed that the Murdochs, Barclays and Rothermeres control 68% of the UK national press.

The top three companies currently control a staggering 80% of national newspaper circulation. The combined share of the ownership oligopoly of DMG Media Ltd, News UK and Reach Plc has risen substantially from 71% in 2015.

DMG’s publications include the Daily Mail and the i paper, while Murdoch’s News UK publishes the Sun and both the Times and Sunday Times. The Sunday Mail meanwhile is under the control of Reach, which is also behind the Daily Mirror and Daily Express.

The estimate was based on imputed values for News UK, JPIMedia, City A.M. and Telegraph Media Group publications, as these publishers no longer share their circulation figures.

In contrast, the field of digital media is significantly less top heavy in terms of ownership concentration. However, legacy brands still dominate over native digital news sites, making up 17 of the top 20 UK digital news publishers.

Reach Plc, whose websites include the Mirror Online and a plethora of regional news providers, tops the list with 42 million unique monthly visitors in September this year. It is closely followed by the BBC and by MailOnline and Metro owner DMG Media, each with 39 million.

While billionaire dominance is not as pronounced as in print media, the Rothermeres, Murdochs and Lebedevs still control three of the top five digital news publishers.

In the magazine industry, the Bauer, Hearst and Burda families own the three largest publishers and control an estimated 31% of circulation, based on analysis of 2019 data. But charities also own a significant chunk of the magazine market: titles from the RSPB, the National Trust and the Royal Horticultural Society have a combined circulation of over 3.7 million, or 11% of the total.


Sophia Dourou is a freelance journalist


Prem Sikka: Billionaires are a threat to democracy

Billionaires are using their wealth to subvert democracy

Taxation revenues are the basis of the modern state. Without them, it cannot perform its administrative and redistributive functions.

The revenues are also used to subsidise industries, bailout banks and clean-up the social and environmental damage inflicted by businesses in pursuit of private profits.

The level of taxation depends on the social settlement mandated to the state through the ballot-box. It can be changed by citizens.

Some wealthy elites resent making financial contributions towards the maintenance of social infrastructure and advancement of welfare rights even though healthcare, education, transport, security and other services enable their businesses to make substantial profits.

A good example of this is the statement by John Caudwell, the billionaire founder of Phones4u.

Caudwell said that he would leave the UK if the next Labour government introduced a wealth tax or higher marginal rates of income tax.

He said he would migrate and “live in the south of France or Monaco”, though he would continue to profit from businesses in the UK.

In 2014, Phones4u and related companies entered administration. As of 13 February 2019, unsecured creditors for the group of companies were owed £1.7bn and are likely to receive negligible amounts.

This figure includes £69m owed to the government’s tax collector HMRC. That loss has been borne by taxpayers.

What about Caudwell’s love of paying the taxes which, after all, enabled his business empire to enrich him?

Caudwell and his business empire used novel tax avoidance schemes. In the 1990s, Caudwell and fellow directors “paid themselves mostly in gold bars and fine wine. The aim was to avoid National Insurance contributions”. This loophole was closed in 1997.

Caudwell looked for other ways of reducing his tax bills and a scheme was devised by Ernst & Young, auditors to Phones4u.

Under the scheme, businesses controlled by Caudwell made payments into an Employee Benefit Trust scheme in Jersey for the benefit of executives, including Caudwell and other directors, who received tax-free loans from the trust.

The payments were instead of salary or bonuses. The companies obtained immediate corporation tax relief on the money while directors avoided income tax and National Insurance contributions.

The scheme was contested by HMRC and the case eventually went to the House of Lords, which declared the scheme unlawful.

The expert opinion was that this judgment alone may have required Caudwell to pay £13m in taxes and penalties, and his businesses may have faced liabilities of £1.5bn in tax and interest.

Caudwell wants to be loved by people and says that he pays his taxes and will give away his fortune to charities, but this does not stop him from trying to deter people from making their democratic choices.

Fellow millionaire Lord Alan Sugar has also threatened to leave the UK if the next Labour government levies higher taxes on the rich.

Of course, before leaving they should return all public subsidies, loans and debts, pay their taxes and make good the environmental and social damage done by their intoxication with private wealth, status and power.

These wealthy elites are not campaigning for higher wages, or for an end to austerity, foodbanks or inequality.

They have campaigned and secured tax cuts for corporations and reduction in the highest marginal rate of income tax for the rich from 50% to 45%, on incomes above £150,000 a year.

They are silent on the inequities of the present system which requires the poorest members of society to pay a higher proportion of their income in taxes compared to the richest.

The data published by the Office for National Statistics shows that the poorest 20% of households pay 29.7% of their income in indirect taxes and 12.7% in direct taxes, a total of 42.4%.

On the other hand, the richest 20% pay 14.6% in indirect and 23.2% in direct taxes. This is a total of 37.8%.

So the rich pay a smaller percentage of their income in tax than the poor.
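For readers who want to check the arithmetic, the ONS percentages quoted above can be verified in a few lines of Python (a minimal sketch using only the figures cited in this article):

```python
# Effective tax burden by household income group, per the ONS figures quoted above.
poorest = {"indirect": 29.7, "direct": 12.7}  # poorest 20% of households
richest = {"indirect": 14.6, "direct": 23.2}  # richest 20% of households

poorest_total = poorest["indirect"] + poorest["direct"]  # 42.4% of income
richest_total = richest["indirect"] + richest["direct"]  # 37.8% of income

print(f"Poorest 20%: {poorest_total:.1f}% of income paid in tax")
print(f"Richest 20%: {richest_total:.1f}% of income paid in tax")
print(f"Gap: {poorest_total - richest_total:.1f} percentage points")
```

The gap works out at 4.6 percentage points in favour of the richest fifth of households.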

The class wars fought by the elites have left ordinary people with a stark choice – either pay more for a crumbling social infrastructure or give up their hard-won social rights – whilst billionaires hide their wealth in tax havens.

The message from wealthy elites is that people can have any government they wish, as long as it is one that Caudwell, Sugar and their billionaire friends approve of.

Put another way, people can mandate a government through the ballot-box, but wealthy elites want the power to exercise the ultimate veto. By withholding or dodging taxes, they can ensure that the government cannot deliver its electoral promises or improve the welfare of ordinary people.

If governments are poor at delivering mandated redistribution and social infrastructure, then citizens can discipline those governments by kicking them out of office.

There is something very disturbing in billionaires thinking that it is somehow their birthright to override citizens’ choices and discipline governments by threatening tax revenues.

Prem Sikka is a Professor of Accounting at the University of Sheffield, and Emeritus Professor of Accounting at the University of Essex. He is a Contributing Editor for Left Foot Forward.

 

100,000 pupils’ education overseen by a handful of rich men

Huge powers over the education of thousands are in the hands of just a few wealthy men.

Evidence released today by Education Uncovered shows that nearly 200 state-funded schools in England are now controlled by rich philanthropists.

The use of academies – started under Tony Blair, but vastly expanded under the coalition – means that controlling sponsors have a vast array of powers and little to no oversight from parents, teachers, pupils or local government.

Analysis of ten trusts – all run by men, and between them educating over 113,000 pupils – shows how these sponsors are ultimately in charge. Through scrutiny of their constitutions and accounts, Education Uncovered revealed that the sponsors have a huge say over the appointment of trustees. These, in turn, set the overall strategy for a school – including the curriculum – and hold the management to account.

Mary Bousted, joint general secretary of the National Education Union, said: “The academies project has always been in the interests of the few, not the many. It has resulted in a fractured and confusing schools landscape, and a Wild West for those who wish to exploit it. Today’s research shows that the altruism and vocation of teachers is rarely reflected at the top of academy trusts.

“There is a great deal at stake for education in this General Election. Voters must look at this research and ask themselves in whose interests schools should be run. At a critical time in our history, this is an opportunity for us to change course as a country and vote for education.”

Examples of these business leaders include Lord Harris of Peckham, whose family are in charge of 48 state-funded academies educating 32,500 pupils. The constitution of the trust gives him the right to appoint up to 32 trustees.

There are also the co-founders of the hedge fund firm Marshall Wace, who, alongside former Conservative treasurer Lord Fink, form three-quarters of the directors of a charity in charge of another London-based chain, Ark Schools. This has 38 schools and 26,000 students.



Modi offers talks to end India protests against farm reforms


By Reuters Staff

NEW DELHI (Reuters) - Prime Minister Narendra Modi on Friday defended India’s biggest farm reforms in decades but offered to “very humbly” hold further talks with farmers protesting against the laws they fear would erode their incomes.

Tens of thousands of farmers have blocked roads leading into New Delhi for the past three weeks demanding a repeal of laws that give them the option to sell directly to private companies.

The government says the change is necessary to boost farm returns and improve storage and other infrastructure.

But the farmers, mainly from the northern agrarian states of Punjab and Haryana, fear private companies would eventually dictate terms and the government would stop buying grains like wheat and rice from them at a minimum guaranteed price.

In online remarks to farmers of the country’s biggest wheat-producing state, Madhya Pradesh, Modi said there should be no cause for concern and repeated the government’s position that farmers would continue to be assured of a price as before.

“The modern facilities available to the farmers of major nations should also be available for those from India, it cannot be delayed any longer,” he said.

“Still, if anyone has any apprehension, and in the interest of the farmers of the country and to address their concerns, we are very humbly ready to talk on every issue.”

His comments came as farmers, many in their sixties or older, braved north India’s harsh winter to camp out in the open on Delhi’s borders, their tractors and trailers parked bumper to bumper.

After a series of previous meetings with Modi’s ministers, the protesters have said that nothing short of an official annulment of the three laws will be enough to change their position.

Rakesh Tikait, a farmers’ leader, said after Modi’s speech that the premier was trying to privatise agriculture to benefit companies, not farmers.


Reporting by Krishna N. Das, Editing by William Maclean

From whistleblower laws to unions:
How Google’s AI ethics meltdown could shape policy
Khari Johnson (@kharijohnson)
December 16, 2020


Google’s offices in downtown Manhattan on October 20, 2020, the day the Justice Department and 11 states filed an antitrust case accusing the company of using anticompetitive tactics to illegally monopolize the online search and search advertising markets. Image Credit: Spencer Platt/Getty Images

It’s been two weeks since Google fired Timnit Gebru, a decision that still seems incomprehensible. Gebru is one of the most highly regarded AI ethics researchers in the world, a pioneer whose work has highlighted the ways tech fails marginalized communities when it comes to facial recognition and more recently large language models.

Of course, this incident didn’t happen in a vacuum. It’s part of an ongoing series of events at the intersection of AI ethics, power, and Big Tech. Case in point: Gebru was fired the same day the National Labor Relations Board (NLRB) filed a complaint against Google for illegally spying on employees and the retaliatory firing of employees interested in unionizing. Gebru’s dismissal also calls into question issues of corporate influence in research, demonstrates the shortcomings of self-regulation, and highlights the poor treatment of Black people and women in tech in a year when Black Lives Matter sparked the largest protest movement in U.S. history.


In an interview with VentureBeat last week, Gebru called the way she was fired “disrespectful” and described a companywide memo sent by CEO Sundar Pichai as “dehumanizing.” To delve further into possible outcomes following Google’s AI ethics meltdown, VentureBeat spoke with experts across the fields of AI, tech policy, and law about Gebru’s dismissal and the issues it raises. They also shared thoughts on the policy changes needed across governments, corporations, and academia. The people I spoke with agree Google’s decision to fire Gebru was a mistake with far-reaching policy implications.

Rumman Chowdhury is CEO of Parity, a startup auditing algorithms for enterprise customers. She previously worked as global lead for responsible AI at Accenture, where she advised governments and corporations. In our conversation, Chowdhury expressed a sentiment echoed by many of the people interviewed for this article.

“I think just the collateral damage to literally everybody: Google, the industry of AI, of responsible AI … I don’t think they really understand what they’ve done. Otherwise, they wouldn’t have done it,” Chowdhury told VentureBeat.
Independent external algorithm audits

Christina Colclough is director of the Why Not lab and a member of the Global Partnership on AI (GPAI) steering committee. GPAI launched in June with 15 members, including the EU and the U.S.; Brazil and three additional countries joined earlier this month.

After asking “Who the hell is advising Google?” Colclough suggested independent external audits for assessing algorithms. “You can say for any new technology being developed we need an impact or risk assessment, a human rights assessment, we need to be able to go in and audit that and check for legal compliance,” she continued.

The idea of independent audits is in line with the environmental impact reports construction projects need to submit today. A paper published earlier this year about how businesses can turn ethics principles into practice suggested the creation of a third-party market for auditing algorithms and bias bounties akin to the bug bounties paid by cybersecurity firms. That paper included 60 authors from dozens of influential organizations from academia and industry.

Had California voters passed Prop 25 last month, the bill would have required independent external audits of risk assessment algorithms. In another development in public accountability for AI, the cities of Amsterdam and Helsinki have adopted algorithm registries.
Scrap self-regulation

Chowdhury said it’s now going to be tough for people to believe any ethics team within a Big Tech company is more than just an ethics-washing operation. She also suggested Gebru’s firing introduces a new level of fear when dealing with corporate entities: What are you building? What questions aren’t you asking?

What happened to Gebru, Chowdhury said, should also lead to higher levels of scrutiny or concern about industry interference in academic research. And she warned that Google’s decision to fire Gebru dealt a credibility hit to the broader AI ethics community.

If you’re a close follower of this space, you might have already reached the conclusion that self-regulation at Big Tech companies isn’t possible. You may have arrived at that point in the past few years, or maybe even a decade ago when European Union regulators first launched antitrust actions against Google.

Colclough agrees that the current situation is untenable and asserts that Big Tech companies are using participation in AI ethics research as a way to avoid actual regulation. “A lot of governments have let this self-regulation take place because it got them off the hook, because they are being lobbied big-time by Big Tech and they don’t want to take responsibility for putting new types of regulation in place,” Colclough said.

She has no doubt that firing Gebru was an act of censorship. “What is it that she has flagged that Google didn’t want to hear, and therefore silenced her?” Colclough asked. “I don’t know if they’ll ever silence her or her colleagues, but they have definitely shown to the world — and I think that’s a point that needs to be made a lot stronger — that self-regulation can’t be trusted.”

U.S. lawmakers and regulators were slow to challenge Big Tech, but there are now several ongoing antitrust actions in the U.S. and other countries. Today, 10 U.S. states filed an antitrust lawsuit accusing Google of colluding with Facebook to dominate the online advertising industry. Prior to a Facebook antitrust lawsuit filed last week, Google faced a separate lawsuit from the Department of Justice and attorneys general last month, the first U.S. case against a major tech company since the 1990s. Alongside anticompetitive business practices, the 64-page complaint alleges that Google utilizes artificial intelligence and user data to maintain its dominance. Additional charges are expected in the coming days. This fall, a congressional investigation into Big Tech companies concluded that antitrust law reform is needed to protect competitive markets and democracy.

Collective action or tech worker unionization


J. Khadijah Abdurahman runs the public technology project We Be Imagining at Columbia University and recently helped organize the Resistance AI workshop at NeurIPS 2020. Not long after Google fired Gebru, Abdurahman penned a piece asserting the moral collapse of the AI ethics field, calling the response to Gebru’s firing a public display of institutional resistance immobilized. In the piece, she talks about ideas like the need for a social justice war room. She also argues that radically shifting the AI ethics conversation away from the idea of the lone researcher versus Goliath can open up space for a broader movement. And she believes collective action is required to address violence found in the tech supply chain, ranging from harms experienced by cobalt miners in central Africa to injustice accelerated by automation and misinformation in social media.

What’s needed, she said, is a movement that cuts across class and defines tech workers more broadly — including researchers and engineers, but also Uber drivers, Amazon warehouse workers, and content moderators. In an interview with VentureBeat, she said, “There should not be some lone martyr going toe-to-toe with [Big Tech]. You need a broader coalition of people who are funding and working together to do the work.”

The idea of collective action through unionizing came up at NeurIPS in a panel conversation on Friday that included Gebru. At the Resistance AI workshop for practitioners and researchers interested in AI that gives power to marginalized people, Gebru talked about why she still supports the idea of people working as researchers at corporations. She also likened the way she was treated to what happened to 2018 Google walkout organizers Meredith Whittaker and Claire Stapleton. On the panel, Gebru was asked whether she thinks unionization would protect ethical AI researchers.

“There’s two things we need to do: We need to look at the momentum that’s happening and figure out what we can achieve based on this momentum, what kind of change we can achieve,” she said. “But then we also need to take the time to think through what kinds of things we really need to change so that we don’t rush to have some sort of policy changes. But my short answer is yes, I think some sort of union has to happen, and I do believe there is a lot of hope.”

In an interview this fall, Whittaker called collective employee action and whistleblowing by departing Facebook employees part of a toolkit for tech workers.
Whistleblower protections for AI researchers

In the days before Google fired her, Gebru’s tweets indicated that all was not well. In one tweet, she asked whether any regulation to protect AI ethics researchers — similar to that afforded whistleblowers — was in the works.
 
Former Pinterest employee Ifeoma Ozoma recently completed a report for Omidyar Network about the needs of tech whistleblowers. That report is due out next month, an Omidyar Network spokesperson told VentureBeat. Echoing Gebru’s experience at Google, Ozoma describes incidents of disrespect, gaslighting, and racism at Pinterest.

As part of a project proposal stemming from that work, Ozoma said, a guide for whistleblowers in tech will be released next year, along with a monetary fund dedicated to paying for the physical and mental health needs of workers who are pushed out after whistleblowing. It’s not the sexiest part of the whistleblowing story, Ozoma told VentureBeat, but when a whistleblower is pushed out, they (and possibly their family) lose health care coverage.

“It’s not only a deterrent to people speaking up, but it’s a huge financial consequence of speaking up and sharing information that I believe is in the public interest,” she said.

UC Berkeley Center for Law and Technology codirector Sonia Katyal supports strengthening existing whistleblower laws for ethics researchers. “I would say very strongly that existing law is totally insufficient,” she told VentureBeat. “What we should be concerned about is a world where all of the most talented researchers like [Gebru] get hired at these places and then effectively muzzled from speaking. And when that happens, whistleblower protections become essential.”

In a paper published in the UCLA Law Review last year, Katyal wrote about whistleblower protections as part of a toolkit needed to address issues at the intersection of AI and civil rights. She argues that whistleblower protections may be particularly important in situations where companies rely on self-regulation and in order to combat algorithmic bias.

We know about some malicious uses of big data and AI — like the Cambridge Analytica scandal at Facebook — because of whistleblowers like Christopher Wylie. At the time, Katyal called accounts like Wylie’s the “tip of the iceberg regarding the potential impact of algorithmic bias on today’s society.”

“Given the issues of opacity, inscrutability, and the potential role of both trade secrecy and copyright law in serving as obstacles to disclosure, whistleblowing might be an appropriate avenue to consider in AI,” the UCLA Law Review paper reads.

One of the central obstacles to greater accountability and transparency in the age of big data is the claim by corporations that algorithms are proprietary. Katyal is concerned about a clash between the rights of a business to not disclose information about an algorithm and the civil rights of an individual to live in a world free of discrimination. This will increasingly become a problem, she warned, as government agencies take data or AI service contracts from private companies.

Other researchers have also found that private companies are generally less likely to share code with papers at research conferences, in court, or with regulators.

There are a variety of existing whistleblower laws in the U.S., including the Whistleblower Protection Act, which offers workers some protection against retaliation. There’s also the Defend Trade Secrets Act (DTSA). Passed in 2016, the law includes a provision that provides protection against trade secret misappropriation claims made by an employer. But Katyal called that argument limited and said the DTSA provision is a small tool in a big, unregulated world of AI.

“The great concern that every company wields to any kind of employee that wants to come forward or share their information or concerns with the public — they know that using the explanation that this is confidential proprietary information is a very powerful way of silencing the employee,” she told VentureBeat.

Plenty of events in recent memory demonstrate why some form of whistleblower protection might be a good idea. A fall 2019 study in Nature found that an algorithm used in hospitals may have been involved in the discrimination against millions of Black people in the United States. A more recent story reveals how an algorithm prevented Black people from receiving kidney transplants.

For a variety of reasons, sources cited for this article cautiously supported additional whistleblower protections. Colclough supports some form of special protections like whistleblower laws but believes it should be part of a broader plan. Such laws may be particularly helpful when it comes to the potential deployment of AI likely to harm lives in areas where bias has already been found, like hiring, health care, and financial lending.

Another option Colclough raises: Give citizens the right to file grievances with government regulators. As a result of GDPR, EU citizens can report to a national data authority if they think a company is not in compliance with the law, and the national data authority is then obliged to investigate. Freedom from bias and a path toward redress are part of an algorithmic bill of rights proposed last year.

Chowdhury said she supports additional protections, but she cautioned that whistleblowing should be a last resort. She expressed reservations on the grounds that whistleblowers who go public may be painted by conservatives or white supremacists as “SJW lefties trying to get a dunk.”

Before whistleblowing is considered, she believes companies should establish avenues for employees wishing to express constructive dissent. Googlers are given an internal way to share complaints or concerns about a model, employees told VentureBeat and other news outlets during a press event this fall. A Google spokesperson subsequently declined to share which particular use cases or models had attracted the most criticism internally.

But Abdurahman questioned which workers such a law would protect, saying: “I think that line of inquiry is more defensive than what is required at this moment.”
Eliminate corporate funding of AI ethics research

In the days after Gebru was fired, more than 2,000 Googlers signed an open letter that alleges “unprecedented research censorship.” In the aftermath, some AI researchers said they refuse to review Google AI papers until the company addresses grievances raised by the incident. More broadly, what happened at Google calls into question the actual and perceived influence of industry over academic research.

At the NeurIPS Resistance AI workshop, Rediet Abebe, who begins as an assistant professor at UC Berkeley next year, explained why she will not accept research funding from Google. She also said she thinks senior faculty in academia should speak up about Big Tech research funding.

“Maybe a single person can do a good job separating out funding sources from what they’re doing, but you have to admit that in aggregate there’s going to be an influence. If a bunch of us are taking money from the same source, there’s going to be a communal shift toward work that is serving that funding institution,” she said.

Jasmine McNealy is an attorney, associate professor of journalism at the University of Florida, and faculty associate with the Berkman Klein Center for Internet and Society at Harvard University.

McNealy recently accepted funding from Google for AI ethics research. She expressed skepticism about the idea that the present economic environment will allow public universities to turn down funding from tech or virtually any other source.

“Unless state legislators and governors say ‘We don’t necessarily like money coming from these kinds of organizations or people,’ I don’t think universities — particularly public universities — are going to stop taking money from organizations,” she said.

More public research funding could be on the way. The Biden administration platform has committed to a $300 billion investment in research and development funding in a number of areas, including artificial intelligence.

But accusations of research censorship at Google come at a time when AI researchers are calling into question corporate influence and drawing comparisons to Big Tobacco funding health research in decades past. Other AI researchers point to a compute divide and growing inequality between Big Tech, elite universities, and everybody else in the age of deep learning.

Google employs more tenure track academic AI talent than any other company and is the most prolific producer of AI research.

Tax Big Tech

Abdurahman, Colclough, and McNealy strongly support raising taxes for tech companies. Such taxes could fund academic research and enforcement agencies with regulatory oversight, like the Federal Trade Commission (FTC), as well as support the public infrastructure and schools that companies rely upon.

“One of the reasons why it has been accepted that big companies paid all this money into research was that otherwise there’d be no research, and there’d be no research because there was no money. Now I think we should go back to basics and say ‘You pay into a general fund here, and we will make sure that universities get that money, but without you having influence over the conclusions made,'” Colclough said, adding that corporate taxation allows for greater enforcement of existing anti-discrimination laws.

Enforcement of existing law like the Civil Rights Act, particularly in matters involving public funding, also came up in an open letter signed by a group of Black professionals in AI and computing in June.

Taxation that funds enforcement could also draw some regulatory attention to up-and-coming startups, which McNealy said can sometimes do things with “just as bad impacts or implications” as their corporate counterparts.

There is some public support for the idea of revisiting big tech companies’ tax obligations. Biden promised in his campaign to make Amazon pay more income taxes, and the European Union is considering legislation that would impose a 10% sales tax on “gatekeeper” tech companies.

Taxation can also fund technology that does not rely on profitability as a measure of value. Abdurahman says the world needs public tools and that people need to broaden their imagination beyond having a handful of companies supply all the technology we use.

Though AI in the public sector is often talked about as an austerity measure, Abdurahman defines public interest technology as non-commercial, designed for the social good, and made with a coalition representative of society. She believes that shouldn’t include just researchers, but also the people most impacted by the technology.

“Public interest tech opens up a whole new world of possibilities, and that’s the line of inquiry that we need to pursue rather than figuring out ‘How do we fix this really screwed up calculus around the edges?'” Abdurahman said. “I think that if we are relying on private tech to police itself, we are doomed. And I think that lawmakers and policy developers have a responsibility to open up and fund a space for public interest technology.”

Some of that work might not be profitable, Chowdhury said, but profitability cannot be the only value by which AI is considered.

Require AI researchers to disclose financial ties

Abdurahman suggests that disclosure of financial ties become standard for AI researchers. “In any other field, like in pharmaceuticals, you would have to disclose that your research is being funded by those companies because that obviously affects what you’re willing to say and what you can say and what kind of information is available to you,” she said.

This year, for the first time, organizers of the NeurIPS AI research conference required authors to state potential conflicts of interest and their work’s impact on society.

Separate AI ethics from computer science

A recent research paper comparing Big Tech and Big Tobacco suggests that academics consider making ethics research into a separate field, akin to the way bioethics is separated from medicine and biology. But Abdurahman expressed skepticism about that approach since industry and academia are already siloed.

“We need more critical ethical practice, not just this division of those who create and those who say what you created was bad,” she said.

Ethicists and researchers in some machine learning fields have encouraged the creation of interdisciplinary teams that pair AI with other fields, such as social work, climate change, and oceanography. In fact, Gebru was part of an effort to bring the first sociologists to the Google Research team, introducing frameworks like critical race theory when considering fairness.

Final thoughts


What Googlers called a retaliatory attack against Gebru follows a string of major AI ethics flashpoints at Google in recent years. When word got out in 2018 that Google was working with the Pentagon on Project Maven to develop computer vision for military drone footage, employees voiced their dissent in an open letter signed by thousands. Later that year, in a protest against Project Maven, sexual harassment, and other issues, tens of thousands of Google employees participated in a walkout at company offices around the world. Then there was Google’s troubled AI ethics board, which survived only a few days.

Two weeks after Gebru’s firing, things still appear to be percolating at the company. On Monday, Business Insider obtained a leaked memo that revealed Google AI chief Jeff Dean had canceled an all-hands end-of-year call. Since VentureBeat interviewed Gebru last week, she has spoken at length with the BBC, Slate, and MIT Tech Review.

Members of Congress with a record of sponsoring bills related to algorithmic bias today sent a letter to Google CEO Sundar Pichai asking how Google mitigates bias in large language models and how Pichai plans to further investigate what happened with Gebru and advance diversity. Signatories include Rep. Yvette Clarke (D-NY) and Sen. Cory Booker (D-NJ). The two are cosponsors of the Algorithmic Accountability Act, a 2019 bill that would have required companies to assess algorithms for bias. Booker also cosponsored a federal facial recognition moratorium earlier this year. Sen. Elizabeth Warren (D-MA), who questioned bias in financial lending, and Sen. Ron Wyden (D-OR), who questioned use of tech like facial recognition at protests, also signed the letter.

Also today: Members of Google’s Ethical AI team sent additional demands to Pichai, calling for policy changes and for Gebru to get her job back, among other things.

Earlier this year, I wrote about a fight for the soul of machine learning. I talked about AI companies associated with surveillance, oppression, and white supremacy and others working to address harm caused by AI and build a more equitable world. Since then, we have seen multiple documented instances of, as AI Now Institute put it today, reasons to give us pause.

Gebru’s treatment highlights how a lack of investment in diversity can create a toxic work environment. It also leads to questions like how employees should alert the public to AI that harms human lives if company leadership refuses to address those concerns. And it casts a spotlight on the company’s failure to employ a diverse engineering workforce despite the fact that such diversity is widely considered essential to minimizing algorithmic bias.

The people I spoke with for this article seem to agree that we need to regulate tech that shapes human lives. They also call for stronger accountability and enforcement mechanisms and changes to institutional and government policy. Measures to address the cross-section of issues raised by Gebru’s treatment would need to cover a broad spectrum of policy concerns, ranging from steps to ensure the independence of academic research to unionization or larger coalitions among tech workers.

Updated December 17 at 10:18 a.m.: The initial version of this story stated that J. Khadijah Abdurahman works at We Be Imaging lab when it should have read We Be Imagining. Also, story text was modified to reflect that a reference to “institutional resistance immobilized” made by J. Khadijah Abdurahman refers to the response to Gebru’s firing, and to clarify the initial wording of Abdurahman’s description of violence found in the tech supply chain.

Updated December 17 at 7:40 a.m.: Added link to and brief description of Google antitrust lawsuit.

Updated December 16 at 9:54 p.m.: Added demands from Google employees. 6:58 p.m.: Linked to a letter members of Congress sent to Google CEO Sundar Pichai and added background information.


Cyberhack looks like act of war

Mike Allen, author of AM

Illustration: Sarah Grillo/Axios


A Trump administration official tells Axios that the cyberattack on the U.S. government and corporate America, apparently by Russia, is looking worse by the day — and secrets may still be being stolen in ways not yet discovered.

The big picture: "We still don't know the bottom of the well," the official said. Stunningly, the breach goes back to at least March, and continued all through the election. The U.S. government didn't sound the alarm until this Sunday. Damage assessment could take months.

Microsoft President Brad Smith told the N.Y. Times that at least 40 companies, government agencies and think tanks had been infiltrated.

The hack is known to have breached the departments of Defense, State, Homeland Security, Treasury, Commerce, and Energy and its National Nuclear Security Administration — plus the National Institutes of Health.

8 countries: Microsoft, which has helped respond to the breach, said in a statement that 80% of its 40 customers known to have been targeted are in the U.S., plus others in the U.K., Israel, the UAE, Canada, Mexico, Belgium and Spain.

In unusually vivid language for a bureaucracy, the U.S. Cybersecurity and Infrastructure Security Agency, part of Homeland Security, said yesterday that the intruder "demonstrated sophistication and complex tradecraft."

The agency said the breach "poses a grave risk to the Federal Government and state, local, tribal, and territorial governments as well as critical infrastructure entities and other private sector organizations."

If this had been a physical attack on America's secrets, we could be at war.

Imagine if during the Cold War, the Soviet Union had broken into a building in Washington and walked out with correspondence, budgets and more.

Sen. Chris Coons (D-Del.) told Andrea Mitchell on MSNBC: "It's pretty hard to distinguish this from an act of aggression that rises to the level of an attack that qualifies as war. ... [T]his is as destructive and broad scale an engagement with our military systems, our intelligence systems as has happened in my lifetime."

The gravity wasn't immediately apparent because this wasn't the "cyber Pearl Harbor" that experts have warned about: No one took out a power grid, or stole a bunch of money or destabilized the markets.

Instead, it's more like someone has been walking in and out of your house for months, and you don't really know what they took.

And they may have built a secret door. "For someone to have access that long, who's this sophisticated, it's pretty likely they built other ways to get in that are hard to find," one official told me.

What's next: President Trump has stayed silent on the hack, meaning that President-elect Biden's overflowing in-box now includes Russian reprisal, damage mitigation and future deterrence.

Promising to impose "substantial costs" on the perpetrator, Biden said in a statement that his administration "will make cybersecurity a top priority": "I will not stand idly by in the face of cyber assaults on our nation."


Microsoft president: Cyberattack "provides a moment of reckoning"

Jacob Knutson

Microsoft President Brad Smith speaking in the White House in May 2020
Photo: Mandel Ngan/AFP via Getty Images

Microsoft President Brad Smith said in a blog post on Thursday that the suspected Russian cyberattack on multiple government agencies and U.S. companies is effectively "an attack on the United States and its government and other critical institutions, including security firms."

Why it matters: Smith said that the attack "unfortunately represents a broad and successful espionage-based assault on both the confidential information of the U.S. Government and the tech tools used by firms to protect them."

He also said that "while investigations (and the attacks themselves) continue, Microsoft has identified and has been working this week to notify more than 40 customers that the attackers targeted more precisely and compromised through additional and sophisticated measures."

Context: The cybersecurity firm FireEye said last week that its systems had been hacked by nation-state actors and that its clients, which include the U.S. government, had been placed at risk.

SolarWinds, which provides software to the government and corporations, also discovered a breach in its systems this week, allowing hackers to access information from multiple agencies and companies — including the Treasury, Commerce and Homeland Security departments.

What he's saying: "As much as anything, this attack provides a moment of reckoning," Smith said.

"It requires that we look with clear eyes at the growing threats we face and commit to more effective and collaborative leadership by the government and the tech sector in the United States to spearhead a strong and coordinated global cybersecurity response," he added.
"This is not 'espionage as usual,' even in the digital age. Instead, it represents an act of recklessness that created a serious technological vulnerability for the United States and the world."

Smith said the hackers, by including private companies in their attack on government agencies, have "put at risk the technology supply chain for the broader economy" and have weakened the "reliability of the world’s critical infrastructure."

To respond to the attack, Smith said that governments and private companies should share analysis of threats more often and strengthen international rules to hold nation-states accountable for cyberattacks.

"It will be critical for the incoming Biden-Harris Administration to move quickly and decisively to address this situation."

Biden promises retaliation for cyberattack on government agencies

Jacob Knutson

Joe Biden speaking in Atlanta on Dec. 15. Photo: Jim Watson/AFP via Getty Images



President-elect Biden on Thursday said that a suspected Russian cyberattack on multiple government agencies and U.S. companies "is a matter of great concern" and promised to impose "substantial costs" on those responsible for the attack.

Driving the news: Biden's statement came just hours after the Cybersecurity and Infrastructure Security Agency warned that evidence suggested additional malware was used in what it described as “a grave risk to the Federal Government and state, local, tribal, and territorial governments as well as critical infrastructure entities and other private sector organizations.”

Context: The cybersecurity firm FireEye said last week that its systems had been hacked by nation-state actors and that its clients, which include the U.S. government, had been placed at risk.

SolarWinds, which provides software to the government and corporations, also discovered a breach in its systems this week, allowing hackers to access information from multiple agencies and companies — including the Treasury, Commerce and Homeland Security departments.

What they're saying: "I have instructed my team to learn as much as we can about this breach, and Vice President-elect Harris and I are grateful to the career public servants who have briefed our team on their findings and who are working around-the-clock to respond to this attack," Biden said on Thursday.

"A good defense isn’t enough; we need to disrupt and deter our adversaries from undertaking significant cyberattacks in the first place."

"We will do that by, among other things, imposing substantial costs on those responsible for such malicious attacks, including in coordination with our allies and partners. Our adversaries should know that, as president, I will not stand idly by in the face of cyber assaults on our nation."


The big picture: President Trump has been largely silent about the attack, though the White House has held emergency meetings with officials across multiple agencies to address the breach, according to Bloomberg.

Thomas Bossert, Trump's former homeland security adviser, wrote in the New York Times on Wednesday, "The magnitude of this ongoing attack is hard to overstate."
"It will take years to know for certain which networks the Russians control and which ones they just occupy."

Go deeper: Russian hacking group is behind Treasury and Commerce email breach

Romney: White House should "say something aggressive" on Russian cyberattack

Shawna Chen



Photo: Tom Williams/CQ-Roll Call, Inc via Getty

Sen. Mitt Romney (R-Utah) called on the White House to “aggressively” condemn a suspected Russian cyberattack in an interview with SiriusXM on Thursday evening.

Why it matters: Since news broke that hackers tied to Russia penetrated U.S. government networks and companies, public officials including President-elect Biden have come forward with rebukes. President Trump has been largely silent, though the White House has held emergency meetings with officials across agencies to address the breach, per Bloomberg.

What he's saying: It’s “quite extraordinary” that the White House isn’t “aggressively speaking out and protesting and taking punitive action," Romney said.

“How could this possibly go on for so long?” Romney asked. “We clearly are not up to speed in defending our systems.”

The big picture: The Cybersecurity and Infrastructure Security Agency described the attack as “a grave risk to the Federal Government and state, local, tribal, and territorial governments as well as critical infrastructure entities and other private sector organizations.”

Biden released a statement shortly after, saying: “A good defense isn’t enough; we need to disrupt and deter our adversaries from undertaking significant cyberattacks in the first place."
"We will do that by, among other things, imposing substantial costs on those responsible for such malicious attacks,” Biden added.

The White House has not responded to Axios' request for comment.


https://www.axios.com/

Microsoft found malicious SolarWinds software in its systems
Reuters December 18, 2020

Microsoft
Image Credit: Khari Johnson / VentureBeat

(Reuters) — Microsoft said on Thursday it found malicious software in its systems related to a massive hacking campaign disclosed by U.S. officials this week, adding a top technology target to a growing list of attacked government agencies.

The Redmond, Washington company is a user of Orion, the widely deployed networking management software from SolarWinds that was used in the suspected Russian attacks on vital U.S. agencies and others.

Microsoft also had its own products leveraged to attack victims, people familiar with the matter said. The U.S. National Security Agency issued a rare “cybersecurity advisory” Thursday detailing how certain Microsoft Azure cloud services may have been compromised by hackers and directing users to lock down their systems.

“Like other SolarWinds customers, we have been actively looking for indicators of this actor and can confirm that we detected malicious SolarWinds binaries in our environment, which we isolated and removed,” a Microsoft spokesperson said, adding that the company had found “no indications that our systems were used to attack others.”

One of the people familiar with the hacking spree said the hackers made use of Microsoft cloud offerings while avoiding the company’s corporate infrastructure.

Microsoft did not immediately respond to questions about the technique.

Still, another person familiar with the matter said the Department of Homeland Security (DHS) does not believe Microsoft was a key avenue of fresh infection.

Both Microsoft and the DHS, which earlier on Thursday said the hackers used multiple methods of entry, are continuing to investigate.

The FBI and other agencies have scheduled a classified briefing for members of Congress Friday.

The U.S. Energy Department also said it has evidence hackers gained access to its networks as part of the campaign. Politico had earlier reported the National Nuclear Security Administration (NNSA), which manages the country’s nuclear weapons stockpile, was targeted.

An Energy Department spokesperson said malware “has been isolated to business networks only” and has not impacted U.S. national security, including the NNSA.

The DHS said in a bulletin on Thursday the hackers had used other techniques alongside corrupting updates of SolarWinds’ network management software, which is used by hundreds of thousands of companies and government agencies.

CISA urged investigators not to assume their organizations were safe just because they did not use recent versions of the SolarWinds software and also pointed out that the hackers did not exploit every network they gained access to.

CISA said it was continuing to analyze the other avenues used by the attackers. So far, the hackers are known to have at least monitored email or other data within the U.S. departments of Defense, State, Treasury, Homeland Security, and Commerce.

As many as 18,000 Orion customers downloaded the updates that contained a back door, SolarWinds has said. Since the campaign was discovered, software companies have cut off communication from those back doors to the computers maintained by the hackers.

But the attackers might have installed additional ways of maintaining access, CISA said, in what some have called the biggest hack in a decade.

The Department of Justice, FBI, and Defense Department, among others, have moved routine communication onto classified networks that are believed not to have been breached, according to two people briefed on the measures. They are assuming that the nonclassified networks have been accessed, the people said.

CISA and private companies — including FireEye, which was the first to discover and reveal it had been hacked — have released a series of clues for organizations to look for to see if they have been hit.

But the attackers are very careful and have deleted logs, or electronic footprints of which files they have accessed, security experts said. That makes it hard to know what has been taken.

Some major companies have said they have “no evidence” that they were penetrated, but in some cases that may only be because the evidence was removed.

In most networks, the attackers would also have been able to create false data, but so far it appears they were interested only in obtaining real data, people tracking the probes said.

Meanwhile, members of Congress are demanding more information about what may have been taken and how, along with who was behind it. The House Homeland Security Committee and Oversight Committee announced an investigation Thursday, while senators pressed to learn whether individual tax information was obtained.

In a statement, President-elect Joe Biden said he would “elevate cybersecurity as an imperative across the government” and “disrupt and deter our adversaries” from undertaking such major hacks.

(Reporting by Joseph Menn and Chris Bing. Editing by Chris Sanders and Christopher Cushing.)