AI
How an “AI-tocracy” emerges
In China, the use of AI-driven facial recognition helps the regime repress dissent while enhancing the technology, researchers report
Peer-Reviewed Publication
CAMBRIDGE, MA -- Many scholars, analysts, and other observers have suggested that resistance to innovation is an Achilles’ heel of authoritarian regimes. Such governments can fail to keep up with technological changes that help their opponents; they may also, by stifling rights, inhibit innovative economic activity and weaken the long-term condition of the country.
But a new study co-led by an MIT professor suggests something quite different. In China, the research finds, the government has increasingly deployed AI-driven facial-recognition technology to suppress dissent; has been successful at limiting protest; and in the process, has spurred the development of better AI-based facial-recognition tools and other forms of software.
“What we found is that in regions of China where there is more unrest, that leads to greater government procurement of facial-recognition AI, subsequently, by local government units such as municipal police departments,” says MIT economist Martin Beraja, who is co-author of a new paper detailing the findings.
What follows, as the paper notes, is that “AI innovation entrenches the regime, and the regime’s investment in AI for political control stimulates further frontier innovation.”
The scholars call this state of affairs an “AI-tocracy,” describing the connected cycle in which increased deployment of the AI-driven technology quells dissent while also boosting the country’s innovation capacity.
The open-access paper, titled “AI-tocracy,” appears in the August issue of the Quarterly Journal of Economics; an abstract of the uncorrected proof was first posted online in March. The co-authors are Beraja, who is the Pentti Kouri Career Development Associate Professor of Economics at MIT; Andrew Kao, a doctoral candidate in economics at Harvard University; David Yang, a professor of economics at Harvard; and Noam Yuchtman, a professor of management at the London School of Economics.
To conduct the study, the scholars drew on multiple kinds of evidence spanning much of the last decade. To catalogue instances of political unrest in China, they used data from the Global Database of Events, Language, and Tone (GDELT) Project, which records news feeds globally. The team turned up 9,267 incidents of unrest between 2014 and 2020.
The researchers then examined records of almost 3 million procurement contracts issued by the Chinese government between 2013 and 2019, from a database maintained by China’s Ministry of Finance. They found that local governments’ procurement of facial-recognition AI services and complementary public-security tools — high-resolution video cameras — jumped significantly in the quarter following an episode of public unrest in that area.
Given that Chinese government officials were clearly responding to public dissent by ramping up their use of facial-recognition technology, the researchers then examined a follow-up question: Did this approach work to suppress dissent?
The scholars believe that it did, although, as they note in the paper, they “cannot directly estimate the effect” of the technology on political unrest. But as one way of getting at that question, they studied the relationship between weather and political unrest in different areas of China. Certain weather conditions are conducive to political unrest. But in prefectures that had already invested heavily in facial-recognition technology, such weather conditions were less conducive to unrest than in prefectures that had not made the same investments.
In so doing, the researchers also accounted for factors such as whether greater wealth in some areas might have produced larger investments in AI-driven technologies regardless of protest patterns. The scholars still reached the same conclusion: facial-recognition technology was being deployed in response to past protests, and was then reducing further protest.
“It suggests that the technology is effective in chilling unrest,” Beraja says.
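One way to picture the identification strategy described above is as an interaction regression: unrest is regressed on an unrest-conducive weather shock interacted with an indicator for heavy prior facial-recognition procurement, with prefecture and time fixed effects. The following is a minimal sketch on simulated data with hypothetical variable names, not the authors’ actual estimation code; a negative interaction coefficient would correspond to weather-induced unrest being dampened where the technology is already in place.

```python
# Illustrative only: simulated panel with hypothetical variable names,
# not the study's data or code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
prefectures, quarters = 50, 24
panel = pd.DataFrame(
    [{"prefecture": p, "quarter": q,
      "weather_shock": rng.normal(),          # unrest-conducive weather that quarter
      "high_ai": int(p < 25)}                 # heavy prior facial-recognition procurement
     for p in range(prefectures) for q in range(quarters)]
)
# Simulated outcome: weather raises unrest, but less so where AI surveillance exists
panel["unrest"] = (0.5 * panel["weather_shock"]
                   - 0.3 * panel["weather_shock"] * panel["high_ai"]
                   + rng.normal(0, 1, len(panel)))

# The main effect of high_ai is absorbed by the prefecture fixed effects
model = smf.ols(
    "unrest ~ weather_shock + weather_shock:high_ai + C(prefecture) + C(quarter)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["prefecture"]})
print(model.params[["weather_shock", "weather_shock:high_ai"]])
```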
Finally, the research team studied the effects of increased AI demand on China’s technology sector and found that the government’s greater use of facial-recognition tools appears to be driving the country’s tech sector forward. For instance, firms granted procurement contracts for facial-recognition technologies produce about 49 percent more software products in the two years after winning a government contract than they had beforehand.
“We examine if this leads to greater innovation by facial-recognition AI firms, and indeed it does,” Beraja says.
Such data — from China’s Ministry of Industry and Information Technology — also indicates that AI-driven tools are not necessarily “crowding out” other kinds of high-tech innovation.
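As a rough illustration of the kind of before/after comparison that underlies a figure like this, here is a minimal sketch on simulated firm-level counts with hypothetical column names; it is not the authors’ data or estimation code.

```python
# Illustrative only: simulated firm-level software output, hypothetical column names.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
firms = pd.DataFrame({
    "firm_id": range(200),
    "products_2yr_before": rng.poisson(6, 200),   # products registered before the contract
})
# Assume contract winners register roughly 49% more products afterward
firms["products_2yr_after"] = rng.poisson(firms["products_2yr_before"] * 1.49)

pct_change = (firms["products_2yr_after"].mean()
              / firms["products_2yr_before"].mean() - 1) * 100
print(f"Average change in software output after a contract: {pct_change:+.0f}%")
```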
Adding it all up, the case of China indicates how autocratic governments can potentially reach a near-equilibrium state in which their political power is enhanced, rather than upended, when they harness technological advances.
“In this age of AI, when the technologies not only generate growth but are also technologies of repression, they can be very useful” to authoritarian regimes, Beraja says.
The finding also bears on larger questions about forms of government and economic growth. A significant body of scholarly research shows that rights-granting democratic institutions do generate greater economic growth over time, in part by creating better conditions for technological innovation. Beraja notes that the current study does not contradict those earlier findings, but in examining the effects of AI in use, it does identify one avenue through which authoritarian governments can generate more growth than they otherwise would have.
“This may lead to cases where more autocratic institutions develop side by side with growth,” Beraja adds.
Other experts in the societal applications of AI say the paper makes a valuable contribution to the field.
“This is an excellent and important paper that improves our understanding of the interaction between technology, economic success, and political power,” says Avi Goldfarb, the Rotman Chair in Artificial Intelligence and Healthcare and a professor of marketing at the Rotman School of Management at the University of Toronto. “The paper documents a positive feedback loop between the use of AI facial-recognition technology to monitor and suppress local unrest in China and the development and training of AI models. This paper is pioneering research in AI and political economy. As AI diffuses, I expect this research area to grow in importance.”
For their part, the scholars are continuing to work on related aspects of this issue. One forthcoming paper of theirs examines the extent to which China is exporting advanced facial-recognition technologies around the world — highlighting a mechanism through which government repression could grow globally.
###
Support for the research was provided in part by the U.S. National Science Foundation Graduate Research Fellowship Program; the Harvard Data Science Initiative; and the British Academy’s Global Professorships program.
JOURNAL
The Quarterly Journal of Economics
ARTICLE TITLE
AI-tocracy
ARTICLE PUBLICATION DATE
13-Jul-2023
Displacement or complement? HKUST researchers reveal mixed-bag responses in human interaction study with AI
Peer-Reviewed Publication

Artificial intelligence (AI) is all the rage lately in the public eye. Yet how AI can be incorporated to the advantage of everyday life, given its rapid development, remains an open question that deserves scientific attention. While in theory AI can replace, or even displace, human beings from their positions, the challenge for industries and institutions is how to take advantage of this technological advancement without drowning in it.
Recently, a team of researchers at the Hong Kong University of Science and Technology (HKUST) conducted an ambitious study of AI applications on the education front, examining how AI could enhance grading while observing human participants’ behavior in the presence of a computerized companion. They found that teachers were generally receptive to the AI’s input, until the two sides clashed over who should have the final say. The dynamic closely resembles how humans interact when a newcomer ventures into established territory.
The research was conducted by HKUST Department of Computer Science and Engineering Ph.D. candidate Chengbo Zheng and four of his teammates under the supervision of Associate Professor Xiaojuan Ma. They developed an AI group member named AESER (Automated Essay ScorER) and separated twenty English teachers into ten groups to investigate AESER’s impact in a group discussion setting, where the AI would contribute to deliberation, ask and answer questions, and even vote on the final decision. In this study, designed akin to the controlled “Wizard of Oz” research method, a deep learning model and a human researcher jointly provided AESER’s input; AESER would then exchange views and conduct discussions with the other participants in an online meeting room.
While the team expected AESER to promote objectivity and provide novel perspectives that would otherwise be overlooked, potential challenges soon emerged. First, there was a risk of conformity, where the AI’s input could quickly create a majority view that shut down discussion. Second, the views offered by AESER were found to be rigid, even stubborn, which frustrated participants once they realized an argument could never be “won”. Many also felt the AI’s input should not be given equal weight and that it was better suited to the role of an assistant to human work.
"At this stage, AI is deemed somewhat 'stubborn' by human collaborators, for good and bad,” noted Prof. Ma. “On the one hand, AI is stubborn so it does not fear to express its opinions frankly and openly. However, human collaborators feel disengaged when they could not meaningfully persuade AI to change its view. Humans varying attitudes towards AI. Some consider it to be a single intelligent entity while others regard AI as the voice of collective intelligence that emerges from big data. Discussions about issues such as authority and bias thus arise.”
The immediate next step for the team involves expanding the study’s scope to gather more quantitative data, which will provide more measurable and precise insights into how AI affects group decision-making. They are also looking to incorporate large language models (LLMs) such as ChatGPT into the study, which could bring new insights and perspectives to group discussions.
Their study was presented at the ACM Conference on Human Factors in Computing Systems in April 2023.
METHOD OF RESEARCH
Experimental study
SUBJECT OF RESEARCH
People
ARTICLE TITLE
Competent but Rigid: Identifying the Gap in Empowering AI to Participate Equally in Group Decision-Making
Could AI-powered robot “companions” combat human loneliness?
Companion robots may help socially isolated people avoid the health risks of being alone
Peer-Reviewed Publication

AUCKLAND, NZ and DURHAM, N.C. – Companion robots enhanced with artificial intelligence may one day help alleviate the loneliness epidemic, suggests a new report from researchers at Auckland, Duke, and Cornell Universities.
Their report, appearing in the July 12 issue of Science Robotics, maps some of the ethical considerations for governments, policy makers, technologists, and clinicians, and urges stakeholders to come together to rapidly develop guidelines for trust, agency, engagement, and real-world efficacy.
It also proposes a new way to measure whether a companion robot is helping someone.
“Right now, all the evidence points to having a real friend as the best solution,” said Murali Doraiswamy, MBBS, FRCP, professor of Psychiatry and Geriatrics at Duke University and member of the Duke Institute for Brain Sciences. “But until society prioritizes social connectedness and eldercare, robots are a solution for the millions of isolated people who have no other solutions.”
The number of Americans with no close friends has quadrupled since 1990, according to the Survey Center on American Life. Increased loneliness and social isolation may affect a third of the world population, and come with serious health consequences, such as increased risk for mental illness, obesity, dementia, and early death. Loneliness may even be as pernicious a health factor as smoking cigarettes, according to the U.S. Surgeon General Vivek H. Murthy, M.D.
While it is increasingly difficult to make new friends as an adult, building companion robots to support socially isolated older adults may prove a promising way to help offset loneliness.
“AI presents exciting opportunities to give companion robots greater skills to build social connection,” said Elizabeth Broadbent, Ph.D., professor of Psychological Medicine at Waipapa Taumata Rau, University of Auckland. “But we need to be careful to build in rules to ensure they are moral and trustworthy.”
Social robots like the ElliQ have had thousands of interactions with human users, nearly half related to simple companionship, including company over a cup of tea or coffee. A growing body of research on companion robots suggests they can reduce stress and loneliness and can help older people remain healthy and active in their homes.
Newer robots embedded with advanced AI programs may foster stronger social connections with humans than earlier generations of robots. Generative AI like ChatGPT, which is based on large language models, allows robots to engage in more spontaneous conversations, and even mimic the voices of old friends and loved ones who have passed away.
Doctors are mostly on board, too, the authors point out. A Sermo survey of 307 care providers across Europe and the United States showed that 69% of physicians agreed that social robots could provide companionship, relieve isolation, and potentially improve patients’ mental health. Seventy percent of doctors also felt insurance companies should cover the cost of companion robots if they prove to be an effective supplement to friendship. How to measure a robot’s impact, though, remains tricky.
This lack of measurability highlights the need for patient-rated outcome measures, such as the one being developed by the authors. Their “Companion Robot Impact Scale” (Co-Bot-I-7) aims to measure a robot’s impact on physical health and loneliness, and early results suggest that companion machines might already be proving effective.
Early results from Broadbent’s lab, for example, find that amiable androids help reduce stress and even promote skin healing after a minor wound.
“With the right ethical guidelines,” the authors conclude in their report, “we may be able to build on current work to use robots to create a healthier society.”
In addition to Dr. Doraiswamy and Professor Broadbent, study authors include Mark Billinghurst, Ph.D., and Samantha Boardman, M.D.
Professor Broadbent and Dr. Doraiswamy have served as advisors to Sermo and technology companies. Dr. Doraiswamy, Professor Broadbent, and Dr. Boardman are co-developers of the Co-Bot-I-7 scale.
CITATION: “Enhancing Social Connectedness With Companion Robots Employing AI,” Elizabeth Broadbent, Mark Billinghurst, Samantha G. Boardman, P. Murali Doraiswamy. Science Robotics, July 12, 2023. DOI: 10.1126/scirobotics.adi6347
JOURNAL
Science Robotics
METHOD OF RESEARCH
Commentary/editorial
SUBJECT OF RESEARCH
People
ARTICLE TITLE
Enhancing Social Connectedness With Companion Robots Employing AI
ARTICLE PUBLICATION DATE
12-Jul-2023
COI STATEMENT
Professor Broadbent and Dr. Doraiswamy have served as advisors to Sermo and technology companies. Dr. Doraiswamy, Professor Broadbent, and Dr. Boardman are co-developers of the Co-Bot-I-7 scale.
Use of ChatGPT improves productivity, with particular benefits to those with weaker skills
The use of ChatGPT – a chatbot that can generate human-like text – raises productivity in professional writing tasks and reduces productivity inequality among those who use it, according to a new study involving over 400 college-educated professionals. Although the findings reveal direct and immediate effects of ChatGPT on worker productivity, study authors Shakked Noy and Whitney Zhang note that longer-term impacts on complex labor market dynamics, which will likely arise as firms and workers adapt to ChatGPT, remain unknown. “Overall, the arrival of ChatGPT ushers in an era of vast uncertainty about the economic and labor market effects of AI technologies,” write the authors. “Our experiment takes the first step toward answering the many questions that have arisen.”

The recent and rapid advancements in generative AI systems, particularly platforms like ChatGPT or DALL-E, are unique compared to most historical automation technologies. In the past, automation has affected more routine tasks consisting of explicit sequences or steps, like manufacturing or bookkeeping tasks. However, generative AI technologies are becoming quite adept at performing more creative and difficult-to-codify tasks like writing or image generation, which have long relied on specialized and educated workers. According to Noy and Zhang, like other forms of automation, a potent writing tool such as ChatGPT can potentially enhance workers’ productivity, offering particular benefits to those with weaker skills. It could also make some kinds of writers obsolete, replacing them entirely.

Here, Noy and Zhang evaluated these outcomes in the context of diverse professional writing tasks. In a pre-registered online experiment, the authors assigned incentivized, occupation-specific writing tasks to 453 college-educated professionals, half of whom were allowed to use ChatGPT. The findings show that 80% of those allowed to use ChatGPT did so, and that the writers in this group were substantially more productive than the control group. Not only did the time taken to complete tasks decrease by 40%, but output quality also rose by 18%. What’s more, the authors found that participants with weaker skills benefited the most from the use of ChatGPT, illustrating a reduction in overall inequality among workers.
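The headline numbers from an experiment like this come down to comparing group means between the ChatGPT and control arms. Here is a minimal sketch of that comparison on simulated data with hypothetical column names; it is not the authors’ dataset or analysis code.

```python
# Illustrative only: simulated outcomes and hypothetical column names,
# not the study's dataset or analysis code.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 453  # number of participants reported in the study

df = pd.DataFrame({"treated": rng.integers(0, 2, n)})    # 1 = allowed to use ChatGPT
df["minutes"] = np.where(df["treated"] == 1,
                         rng.normal(17, 5, n),            # assumed faster completion
                         rng.normal(27, 7, n))
df["quality"] = np.where(df["treated"] == 1,
                         rng.normal(4.5, 1.0, n),         # assumed higher grader ratings
                         rng.normal(3.8, 1.0, n))

# Compare group means for each outcome
for outcome in ["minutes", "quality"]:
    treated = df.loc[df["treated"] == 1, outcome]
    control = df.loc[df["treated"] == 0, outcome]
    t_stat, p_val = stats.ttest_ind(treated, control)
    print(f"{outcome}: control {control.mean():.1f}, treated {treated.mean():.1f}, "
          f"p = {p_val:.3g}")
```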
JOURNAL
Science
ARTICLE TITLE
Experimental evidence on the productivity effects of generative artificial intelligence
Special Issue: A machine-intelligent world
Reports and Proceedings
In this special issue of Science, nine pieces – including Perspectives, Policy Forums, and Reviews – highlight recent advancements in artificial intelligence (AI) technologies and how they’re being used to answer novel questions in topics ranging from human health to animal behavior. However, the recent widespread adoption of AI in these areas is not without unique ethical concerns and policy challenges. “By looking to the forefront of how AI is being used in science and society, many grand challenges and benefits appear,” writes Gemma Alderton, deputy editor at Science.
AI-predicted race variables from medical images pose risks and opportunities for studying health disparities, say James Zou and colleagues in a Perspective. Hundreds of AI-assisted medical devices are currently used in diverse medical tasks, such as assessing health risks and diagnosing diseases like cancer. Some studies have shown that AI models can infer race variables – albeit in crude, simplistic categories – directly from medical images like chest x-rays and cardiac ultrasounds, despite no known human-readable race correlates in the images. “Although race variables are not a generally meaningful category in medicine, the ability of AI to predict race variables from medical images could be useful for monitoring health care disparity and ensuring that algorithms work well across diverse populations,” Zou et al. write. In a second Perspective, Matthew DeCamp and Charlotta Lindvall highlight how examination of bias in AI and healthcare has tended toward removing bias from datasets, analyses, or in AI development teams. However, DeCamp and Lindvall argue that it will also require reducing biases in how clinicians and patients use AI-based algorithms, which could be more challenging than reducing biases in the algorithms themselves.
AI technologies also show great promise in expanding our understanding of animal behaviors. In a third Perspective, Christian Rutz and colleagues review how machine learning (ML) methods are being used to decode animal communication systems. Understanding how animals communicate presents a host of challenges – animals use a wide range of communication adaptations, including visual, acoustic, tactile, chemical, and electrical signals, often in ways beyond humans’ perceptive abilities. Here, Rutz et al. review ways in which increasingly powerful ML tools are being used to reveal previously hidden complexity in animals’ communicative behavior, with insights that could lead to potential benefits for animal welfare and conservation. “…it is essential that future advances are used to benefit the animals being studied,” write Rutz et al.
In a fourth Perspective, Peter Wurman and colleagues highlight how games provide controlled opportunities to isolate and practice many problem-solving skills that are more broadly transferable to real-world applications, which makes them valuable training grounds for intelligent machines. While the recent dominance of AI in classic strategy games has largely been achieved, Wurman and colleagues argue that video games pose new types of challenges for AI to conquer. Making progress in these arenas will represent a substantial step toward much more capable and flexible AI systems that operate in the physical world.
Generative AI – a type of AI technology that can produce a wide variety of content such as images, videos, audio, and text – has rapidly become widely adopted by the general public, scientists, and technologists. However, a growing number of professional artists, writers, and musicians have raised objections to the use of their creations as training data for these systems. In a Policy Forum, Pamela Samuelson highlights this emerging issue and discusses how several copyright lawsuits, now underway in the U.S., could have substantial implications for the future of generative AI systems. If the plaintiffs in these cases prevail, the only material generative AI systems could lawfully be trained on would be public domain works or those under licenses, which would affect everyone who uses the technology, including for scientific research. In a second Policy Forum, Ajay Agrawal and colleagues discuss how task automation via AI innovations could reverse current trends of increasing income inequality. Given the rapid development of AI technologies that enable automation of cognitive and creative endeavors once reserved for humans with specialized education and experience, some economists have raised concerns that AI has the potential to substantially disrupt the labor market and further increase inequality, albeit with little benefit to productivity and standard of living. Here, Agrawal et al. argue that, by considering how tasks can be automated, AI developers could create tools that enhance the overall productivity of workers. What’s more, AI automation could also reduce income inequality by offering innovations that allow lower-wage and less skilled workers to perform at levels that would previously require specialized training.
In one Review, Felix Wong and colleagues discuss how advances in AI are empowering medical and biotechnological research in the fight against infectious disease. According to Wong et al., AI technologies, like ML, have led to rapid advancements in anti-infective drug discovery, our understanding of infection biology, and the development of new diagnostics. Further applications could also improve our ability to forecast and control infectious disease outbreaks and pandemics. A second Review by Bing Huang and colleagues focuses on the crucial role density functional theory – pivotal in chemical and materials science because of its relatively high predictive power – has played in the development of ML-based models used to navigate chemical compound space. Huang et al. argue that continued advancements in this space pave the way toward software control solutions that can routinely handle exotic chemistries and formulations within self-driving laboratories.
Lastly, a series of Vignettes by various authors highlight AI’s applications in advanced medical robots. AI technologies used in these devices, including computer vision, medical imaging analysis, precise manipulation, and ML, could enable autonomous robots to perform diagnostic imaging and assist in complex surgical procedures. Furthermore, AI in wearable rehabilitation devices and advanced prosthetics could enable more personalized patient care and even AI-powered prosthetics that operate seamlessly with the human user.
JOURNAL
Science
ARTICLE TITLE
Finding a place for AI in science and society
ARTICLE PUBLICATION DATE
14-Jul-2023