‘The Age of AI’ advances a larger political and corporate agenda.
BY MEREDITH WHITTAKER, LUCY SUCHMAN
DECEMBER 8, 2021
A Marine Corps unmanned aerial system, used as an intelligence-gathering asset (Department of Defense)
This article appears in the November/December 2021 issue of The American Prospect magazine.
The Age of AI: And Our Human Future
By Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher
Little, Brown
The term “artificial intelligence” is widely recognized by researchers as less a technically precise descriptor than an aspirational project that comprises a growing collection of data-centric technologies. The recent AI trend kicked off around 2010, when a combination of increased computing power and massive troves of web data reanimated interest in decades-old techniques. It wasn’t the algorithms that were new as much as the concentrated resources and the surveillance business models capable of collecting, storing, and processing previously unfathomable amounts of data.
In other words, so-called “advances” in AI celebrated over the last decade are primarily the product of significantly concentrated data and computing resources that reside in the hands of a few large tech corporations like Amazon, Facebook, and Google. At the same time, AI technologies are increasingly shown to be brittle, systemically biased, and applied in ways that exacerbate racialized inequality.
The Age of AI works to take the debate about artificial intelligence off the table by obscuring the relevant technologies and the political economy behind them. Its title alone—The Age of AI: And Our Human Future—declares an epoch and aspires to speak on behalf of everyone. It presents AI as an entity, as superhuman, and as inevitable—while erasing a history of scholarship and critique of AI technologies that demonstrates their limits and inherent risks, the irreducible labor required to sustain them, and the financial incentives of tech companies that produce and profit from them.
While the book’s intellectual contribution is marginal, the political agenda of its authors merits careful consideration.
Henry Kissinger needs no introduction. Even at 98 years old, he remains an influential voice in foreign policy despite his sustained commitment to U.S. exceptionalism, military dominance, and entrenching the military-industrial complex.
Eric Schmidt is the former chief executive of Google and former executive chairman of its parent company Alphabet. He has worked over the last decade to encourage investments by the military and intelligence establishments in Big Tech infrastructures and to market their products, including Google’s AI technologies, as indispensable to U.S. military prowess. He’s also a billionaire and a philanthropist, whose Schmidt Futures underwrites positions throughout the federal government, and many tech-related civil society organizations and initiatives. Over the last several years, he chaired the National Security Commission on Artificial Intelligence (NSCAI), an advisory board to Congress and the Pentagon comprising Big Tech executives, military and intelligence professionals, and academic elites.
Daniel Huttenlocher is the dean of MIT’s Schwarzman College of Computing, an AI-focused mega-lab that was launched thanks to a $350 million gift from foreclosure profiteer and longtime Trump supporter Stephen Schwarzman, the co-founder of the investment group Blackstone. Huttenlocher is also board chair of the MacArthur Foundation, which funds progressive nonprofits and initiatives focused on tech accountability.
This book provides Eric Schmidt and his co-authors a new occasion for a well-funded PR campaign, during which they will be given opportunities to present their views to large audiences, and likely to brief policymakers and other political actors.
In this way, The Age of AI should be understood as a companion to the work that the NSCAI has already done under Schmidt’s leadership. In March, the NSCAI issued a report that echoed Cold War rhetoric to recommend $40 billion in federal investments in AI, warning that the U.S. must maintain AI supremacy or risk being eclipsed by China. The NSCAI report and The Age of AI serve Big Tech’s agenda through three rhetorical strategies.
First, they position Big Tech’s AI and computing power as critical national infrastructure, across research and development environments, and military and government operations. Second, they propose “solutions” that serve to vastly enrich tech companies, helping them to meet their profit and growth projections, while also funding AI-focused research programs at top-tier universities. This serves to bring Big Tech and academia closer together, further merging their interests and deterring meaningful dissent by a new wave of researchers critical of Silicon Valley. Third, and most importantly, by providing arguments against curbing the power of Big Tech companies, the book frames these companies as too important to the American national interest to regulate or to break up. Those arguments could be read against the antitrust advocates and tech critics within the Biden administration who have committed to checking the concentrated power of Silicon Valley.
OVER THE LAST FIVE YEARS, a chorus of researchers, policy advocates, and tech workers has pushed a rejection of Big Tech into the mainstream. Movements calling for bans on facial recognition, worker surveillance and control, surveillance advertising, algorithmic content amplification, and other harmful applications of artificial intelligence have gained momentum. Significant battles have been won in the process.
A complementary turn to tech antitrust and a growing willingness signaled by the Federal Trade Commission to crack down on concentrated power and deceptive practices are also opening questions about the future of ubiquitous AI deployment, and about the surveillance business models and concentrated resources on which it relies.
This background is important to understanding why The Age of AI has as a central theme establishing artificial intelligence’s inevitability. Throughout, this refrain is relentless: AI is “already ubiquitous,” “undeniably, inevitably” set to “change both humans and the environments in which we live.” AI “may soon prove indispensable” and cannot be “uninvented.”
This recitation is necessary because AI is not inevitable. In fact, the public is recognizing that it has a choice in whether AI is developed and widely adopted, and this poses a threat to the Big Tech interests whose funding, revenue, and growth projections depend on ubiquitous AI.
Just as The Age of AI goes to great lengths to emphasize AI’s inevitability, it also warns of the dangers—even cowardice—of AI refusal. The authors assert that “[a]ttempts to halt its development will merely cede the future to the element of humanity courageous enough to face the implications of its own inventiveness,” while tech whistleblowers are “leakers and saboteurs.” Adopting AI is a moral imperative, such that “[o]nce AI’s performance outstrips that of humans for a given task, failing to apply that AI—at least as an adjunct to humans—may appear increasingly decadent, perverse, or even negligent.”
WITH ALL OF ITS SUPERLATIVES, this book describes something bordering on the divine, which bears no resemblance to the automated decision systems or even the large language models and other so-called cutting-edge approaches that are currently developed by AI companies. The reader is offered a false portrait of AI, described as a fundamental break in human history, one auguring a new epoch involving “the alteration of human identity and the human experience of reality at a level not experienced since the dawn of the modern age.” We are told that AI’s “functioning portends progress toward the essence of things—progress that philosophers, theologians, and scientists have sought for millennia.”
At the same time, The Age of AI sidesteps the vested interests responsible for AI, in the process eliding Big Tech’s monopoly over data and infrastructural resources. For the authors, Big Tech companies, as “network platform operators,” are providing a public service “on a scale that represents a civilizational event.” In contrast, government is painted as ill-equipped to regulate and oversee these companies. The message of the authors is clear: Regulation is dangerous, especially regulation that would hamper AI’s development.
The Age of AI is also, quite explicitly, offering product placement for Google’s AI products and capabilities. Of the examples presented, most are produced either by Google, its parent company, or companies that it has purchased: AlphaZero (an AI model developed by DeepMind, famous for its prowess at the games of chess and Go), BERT (a significant large language model developed at Google), Google Assistant, Google Translate, Google Search, AlphaFold (an AI model that predicts protein structures), DeepMind’s data center energy reduction accomplished using machine learning, and MuZero (derived from AlphaZero). AI efforts from Amazon, Apple, Microsoft, and Facebook get shout-outs, but in Facebook and Microsoft’s case the examples named are not particularly flattering: flawed content moderation AI in Facebook’s case, and the racist chatbot Tay in Microsoft’s.
A surveillance tower positioned beyond the barrier at the U.S.-Mexico border between San Diego and Tijuana, Mexico (Tomascastelazo/Wikimedia Commons)
TO CLAIM, AS THE AGE OF AI DOES, that this book fills a “gap” in “basic vocabulary and concepts for an informed debate about this technology” requires erasure of an extensive journalistic and academic literature. Acknowledging these writings would undermine the authors’ grand prognostications, the hazy image of AI as all-powerful and (largely) beneficial, and the Big Tech–friendly political agenda this book is working to bolster.
Selling this agenda, in other words, requires some willful ignorance. References to race, gender, and labor are largely absent even as the co-authors explore historical terrain where racism, patriarchal power, and colonialism are central. For example, the authors celebrate the Dutch East India Company and the stock exchange where its shares were traded as an example of a positive network effect, without remarking on its genocidal colonial practices or its role in the Dutch slave trade.
The book’s erasure of white supremacy, colonialism, and slavery from its historical overview is mirrored in the minimal engagement with the extensive research that has exposed how AI replicates and amplifies racialized, gendered, and other forms of inequality. There’s no mention of the AI-powered wall at the United States’ southern border, or police and law enforcement use of AI to hunt and track protesters, or the exploitative use of AI to control workers by companies like Uber and Amazon, even though these harmful and oppressive applications of AI are by now well documented.
The book also fails to mention climate change, or the significant climate costs of large-scale AI systems. To acknowledge climate would tear a hole in its narrative, suggesting an existential threat that comes not from China or the mythical specter of Chinese dominance.
ERIC SCHMIDT’S LATEST ENDEAVOR, the Special Competitive Studies Project (SCSP), launched in early October 2021, just in time to be central to a press tour arranged around the book. Described in quasi-governmental language and with a “bipartisan board of national security leaders,” SCSP is, in fact, a self-funded shadow lobbying organization created to advance the interests of the tech industry. By filling the project’s board and leadership positions with much of the same cast that constituted the National Security Commission on Artificial Intelligence, this initiative inherits the patina of an official government endeavor whose work deserves serious consideration.
Schmidt says that the project is modeled on the Rockefeller Special Studies Project (SSP), which Henry Kissinger led in the 1950s and used to advocate for a vast expansion of U.S. military spending. The SSP was also privately funded by one of the most powerful men in the world, Nelson Rockefeller. That program advocated a resource-intensive Cold War arms race, based on the premise that the alternative was apocalypse at the hands of the Soviet Union.
Schmidt and his associates could be read as trying for a repeat of the SSP, drawing on the version of AI presented in The Age of AI and reheated Cold War urgency that focuses on China as the looming threat. This time, however, we need to call the bluff, rejecting the mystified portrait of AI that is central to this agenda, and naming related influence campaigns for what they are.
A more rigorous treatment of AI that included problems of discrimination and the climate and labor costs of producing AI would suggest very different trade-offs. It would suggest, as well, answers to questions of security that look more like international solidarity and equitable resource distribution, and less like technological brinkmanship and a mindset premised on a new Cold War.
MEREDITH WHITTAKER is the Minderoo Research Professor at New York University and faculty director of the AI Now Institute.
LUCY SUCHMAN is professor emerita of the anthropology of science and technology at Lancaster University in the United Kingdom.
The EU should refocus the AI Act on workers and people
Proposed EU legislation on AI is driven by a desire for growth, with few provisions for safeguarding the rights of individuals, particularly workers
Aida Ponce Del Castillo
17 December 2021
In April this year, the European Commission proposed the Artificial Intelligence Act, which aims to regulate the use of AI-driven products, services and systems within the EU. But the market-driven draft legislation, aimed at creating and developing a competitive European AI sector, failed to meet the expectations of civil society, which had been hoping that the act would prioritise the protection of people.
The EU Council presidency also criticised the text, pitching substantial changes to the proposal and suggesting, in particular, further restrictions on the possible use of a ‘social credit system’ and facial recognition technology. Critical negotiations are ongoing, but there is no guarantee that they will result in the draft law becoming more protective of individuals.
Employment and workers’ rights are particular areas of concern in the context of the AI Act, and measures must be taken to ensure that workers are protected.
AI in the workplace
When they work, AI systems do what we ask them to do: achieve an objective. But it is easy to give an AI system the wrong problem to solve, or for it to produce solutions that are useless, wrong or biased. What’s not so easy is identifying issues like these before something goes wrong. As AI systems are entrusted with increasing ‘authority’ in the workplace and in hiring processes, for example screening resumes and estimating the ‘risk level’ of workers, we need to ask whether developers can build AI systems that protect workers’ rights when the system’s objectives have nothing to do with those rights.
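The point can be made concrete with a minimal sketch, written here in Python with entirely synthetic data and hypothetical variable names (nothing below describes any real hiring system): a screening model fit to reproduce past hiring decisions optimises agreement with those decisions, so whatever bias they encode becomes part of its objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely synthetic 'historical hiring' data; all names are illustrative.
n = 1000
skill = rng.normal(size=n)          # a genuinely job-relevant feature
group = rng.integers(0, 2, size=n)  # stand-in for a protected attribute
# Past human decisions penalised group 1 regardless of skill:
past_hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n) > 0).astype(float)

# A screening model fit to reproduce past decisions (least squares as a
# simple stand-in for a real classifier). Its objective is agreement with
# history, nothing more.
X = np.column_stack([skill, group]).astype(float)
w, *_ = np.linalg.lstsq(X, past_hired, rcond=None)

print("learned weights [skill, group]:", w.round(2))
# The negative weight on `group` shows the model has absorbed the bias in
# the labels: it is solving exactly the problem it was given.
```

The model here is not malfunctioning; it is succeeding at the wrong objective, which is precisely why such problems are hard to spot before something goes wrong.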
How can we set limits on a system that may have a negative impact on workers and their rights? Given these questions and the particular risks that AI raises in the context of employment, the European Commission should produce an additional ad hoc legislative proposal dedicated to protecting workers’ rights when they are exposed to, interact with or work with AI systems.
With some amendments, the final AI Act could – and should – become a less permissive tool of market regulation than its current version: one that addresses the impact of AI on workers, who are particularly at risk given their subordinate position in the employment relationship.
As it stands, the proposed AI Act does not regulate AI. What it does is establish rules concerning the placing of “AI systems” on the market and putting them into service and use.
It is clear from the draft that the primary objective of the European Commission is to foster the development and uptake of AI for economic growth. Meanwhile, protecting the public interest, in particular the health, safety and fundamental rights and freedoms of individuals, is only a side objective, and the draft includes only a limited number of concrete provisions to achieve these objectives.
Issues the draft must address
Updating and modernising the understanding of work-related risks to include data-driven or AI-related risks. This would involve conducting an extensive anticipatory exercise involving both employers and workers. The new way of thinking about workplace risks should be broad and go beyond occupational health and safety. It should include the risks to privacy, data protection and fundamental rights, and the possible abuses of managerial power stemming from the employment relationship.
The ‘transparency paradox’ at work must be addressed. Transparency does not work in the workplace as it is a unilateral requirement that does not provide workers with actionable rights. As AI applications involve processing personal data, including workers’ data, reaffirming the relevance of GDPR rights and making them explicit in the context of employment is essential. AI systems use workers’ data for predictive analysis, performance evaluation or task distribution, and workers must be able to fully exercise their rights under GDPR with their employers.
In line with the GDPR, workers must be able to exercise more easily their right to explanation (GDPR Article 22) of how their data is being used. If workers don’t understand how their data is being used to manage and evaluate them, they can’t take action to protect their rights.
Equally important is that workers can exercise their right to be consulted. This can imply an obligation for employers to consult workers before AI systems are implemented and to provide a mechanism to monitor the outcomes.
Affirming the ‘human-in-command’ principle as a component of work organisation, and preserving worker autonomy in human-machine interactions. In the workplace, humans and machines act together. Managers, IT support teams or external experts cannot be the only humans actively interacting with and intervening in AI systems. When joint human-machine problem solving takes place, employers should ensure that the AI systems they deploy always require worker interaction, for example through feedback loops or validation systems that incorporate workers’ knowledge and understanding of their own roles and tasks (a minimal sketch of such a validation gate follows this list).
A total ban of algorithmic worker surveillance. AI systems can bring worker monitoring to a new level, which can be defined as ‘algorithmic worker surveillance’. Advanced analytics can be used to measure biology, behaviours, concentration and emotions. One can compare this to switching from radar, which scans the surface of the sea, to sonar, which builds a 3D image of everything under the surface. Such surveillance is extremely intrusive, as it does not passively scan but ‘scrapes’ the personal lives of workers, actively building an image and then making judgements and decisions about individuals. It must be banned.
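To illustrate what the validation gate mentioned above could look like, here is a minimal Python sketch; the `Recommendation` type, the approval callback and all names are hypothetical, not a reference to any existing system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    worker_id: str
    proposed_action: str
    model_confidence: float

def apply_with_validation(rec: Recommendation,
                          worker_approves: Callable[[Recommendation], bool]) -> str:
    """Never act on model output alone: the affected worker (or their
    representative) must confirm or override before anything is applied."""
    if worker_approves(rec):
        return f"applied: {rec.proposed_action}"
    # Overrides are retained as feedback, so decisions can be audited and
    # the system corrected.
    return f"overridden by worker {rec.worker_id}; logged for review"

# Hypothetical usage: the callback stands in for a real consultation mechanism.
rec = Recommendation("w42", "reassign to night shift", 0.91)
print(apply_with_validation(rec, lambda r: False))
```

The design choice in this sketch is that the worker’s override is a default path rather than an exception handler: the system cannot complete an action without a worker’s input, and every rejection feeds back into review.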
From a labour perspective, trying to fit protective provisions into the AI Act proposal may turn out to be like trying to push square pegs into round holes. The time is right to ask fundamental questions about how best to govern AI, including the ability to assess risks before AI systems are implemented and those risks materialise.
The AI Act was designed by the European Commission to ensure the development of a competitive and ‘deregulated’ European AI market. But a more balanced approach is needed and the European Parliament and Council should listen to the many voices asking for a regulatory framework that also protects citizens’ and workers’ rights.