Dec 26, 2022
GABRIELA RAMOS and MARIANA MAZZUCATO
Public policies and institutions should be designed to ensure that innovations are improving the world; but as matters stand, many technologies are being deployed in a vacuum, with advances in artificial intelligence raising one red flag after another.
The era of light-touch self-regulation must end.
LONDON – The tech world has generated a fresh abundance of front-page news in 2022. In October, Elon Musk bought Twitter – one of the main public communication platforms used by journalists, academics, businesses, and policymakers – and proceeded to fire most of its content-moderation staff, indicating that the company would rely instead on artificial intelligence.
Then, in November, researchers at Meta revealed CICERO, an AI program capable of beating most humans in the strategy game Diplomacy. In Shenzhen, China, officials are using “digital twins” of thousands of 5G-connected mobile devices to monitor and manage flows of people, traffic, and energy consumption in real time. And with the release of OpenAI’s ChatGPT, the latest iteration of its language-prediction models, many are declaring the end of the college essay.
In short, it was a year in which already serious concerns about how technologies are being designed and used deepened into even more urgent misgivings. Who is in charge here? Who should be in charge? Public policies and institutions should be designed to ensure that innovations are improving the world, yet many technologies are currently being deployed in a vacuum. We need inclusive mission-oriented governance structures that are centered on a true common good. Capable governments can shape this technological revolution to serve the public interest.
Consider AI, which the Oxford English Dictionary defines broadly as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” AI can make our lives better in many ways. It can enhance food production and management by making farming more efficient and improving food safety. It can help us bolster resilience against natural disasters, design energy-efficient buildings, improve power storage, and optimize renewable energy deployment. And it can enhance the accuracy of medical diagnostics when combined with doctors’ own assessments.
But with no effective rules in place, AI is likely to create new inequalities and amplify pre-existing ones. One need not look far to find examples of AI-powered systems reproducing unfair social biases: in one recent experiment, robots powered by a machine-learning algorithm became overtly racist and sexist. Without better oversight, algorithms that are supposed to help the public sector manage welfare benefits may instead discriminate against families in real need. Equally worrying, public authorities in some countries are already using AI-powered facial-recognition technology to monitor political dissent and subject citizens to mass-surveillance regimes.
Market concentration is also a major concern. AI development – and control of the underlying data – is dominated by just a few powerful players in just a few locales. Between 2013 and 2021, China and the United States accounted for 80% of private AI investment globally. There is now a massive power imbalance between the private owners of these technologies and the rest of us.
But AI is being boosted by massive public investment as well. Such financing should be governed for the common good, not in the interest of the few. We need a digital architecture that shares the rewards of collective value creation more equitably. The era of light-touch self-regulation must end. When we allow market fundamentalism to prevail, the state and taxpayers are condemned to come to the rescue after the fact (as we have seen in the context of the 2008 financial crisis and the COVID-19 pandemic), usually at great financial cost and with long-lasting social scarring. Worse, with AI, we do not even know if an ex post intervention will be enough. As The Economist recently pointed out, AI developers themselves are often surprised by the power of their creations.
Fortunately, we already know how to avert another laissez-faire-induced crisis. We need an “ethical by design” AI mission that is underpinned by sound regulation and capable governments working to shape this technological revolution in the common interest, rather than in shareholders’ interest alone. With these pillars in place, the private sector can and will join the broader effort to make technologies safer and fairer.
Effective public oversight should ensure that digitalization and AI are creating opportunities for public value creation. This principle is integral to UNESCO’s Recommendation on the Ethics of AI, a normative framework that was adopted by 193 member states in November 2021. Moreover, key players are now taking responsibility for reframing the debate, with US President Joe Biden’s administration proposing an AI Bill of Rights, and the European Union developing a holistic framework for governing AI.
Still, we must keep the public sector’s own uses of AI on a sound ethical footing. With AI supporting more and more decision-making, it is important to ensure that such systems are not used in ways that subvert democracy or violate human rights.
We must also address the lack of investment in the public sector’s own innovative and governance capacities. COVID-19 has underscored the need for more dynamic public-sector capabilities. Without robust terms and conditions governing public-private partnerships, for example, companies can easily capture the agenda.
The problem, however, is that the outsourcing of public contracts has increasingly become a barrier to building public-sector capabilities. Governments need to be able to develop AI systems themselves, so that they are not reliant on the private sector for sensitive applications, can maintain control over important products, and can ensure that ethical standards are upheld. Likewise, they must be able to support information sharing and interoperable protocols and metrics across departments and ministries. All of this will require public investment in government capabilities, following a mission-oriented approach.
Given that so much knowledge and experience is now centered in the private sector, synergies between the public and private sectors are both inevitable and desirable. Mission-orientation is about picking the willing – by co-investing with partners that recognize the potential of government-led missions. The key is to equip the state with the ability to manage how AI systems are deployed and used, rather than always playing catch-up. To share the risks and rewards of public investment, policymakers can attach conditions to public funding. They also can, and should, require Big Tech to be more open and transparent.
Our societies’ future is at stake. We must not only fix the problems and control the downside risks of AI, but also shape the direction of the digital transformation and technological innovation more broadly. At the start of a new year, there is no better time to begin laying the foundation for limitless innovation in the interest of all.
GABRIELA RAMOS
Gabriela Ramos is Assistant Director-General for Social and Human Sciences at UNESCO.
MARIANA MAZZUCATO
Mariana Mazzucato, Professor in the Economics of Innovation and Public Value at University College London, is Founding Director of the UCL Institute for Innovation and Public Purpose, Chair of the World Health Organization’s Council on the Economics of Health For All, and a co-chair of the Global Commission on the Economics of Water. She is the author of The Value of Everything: Making and Taking in the Global Economy (Penguin Books, 2019), The Entrepreneurial State: Debunking Public vs. Private Sector Myths (Penguin Books, 2018), and, most recently, Mission Economy: A Moonshot Guide to Changing Capitalism (Penguin Books, 2022).