Should we tax robots?
Study suggests a robot levy — but only a modest one — could help combat the effects of automation on income inequality in the U.S.
What if the U.S. placed a tax on robots? The concept has been publicly discussed by policy analysts, scholars, and Bill Gates (who favors the notion). Because robots can replace jobs, the idea goes, a stiff tax on them would give firms an incentive to retain workers, while also compensating for a drop-off in payroll taxes when robots are used. Thus far, South Korea has reduced incentives for firms to deploy robots; European Union policymakers, on the other hand, considered a robot tax but did not enact it.
Now a study by MIT economists scrutinizes the existing evidence and suggests the optimal policy in this situation would indeed include a tax on robots, but only a modest one. The same applies to taxes on foreign trade that would also reduce U.S. jobs, the research finds.
“Our finding suggests that taxes on either robots or imported goods should be pretty small,” says Arnaud Costinot, an MIT economist and co-author of a published paper detailing the findings. “Although robots have an effect on income inequality … they still lead to optimal taxes that are modest.”
Specifically, the study finds that a tax on robots should range from 1 percent to 3.7 percent of their value, while trade taxes would be from 0.03 percent to 0.11 percent, given current U.S. income taxes.
“We came into this not knowing what would happen,” says Iván Werning, an MIT economist and the other co-author of the study. “We had all the potential ingredients for this to be a big tax, so that by stopping technology or trade you would have less inequality, but … for now, we find a tax in the one-digit range, and for trade, even smaller taxes.”
The paper, “Robots, Trade, and Luddism: A Sufficient Statistic Approach to Optimal Technology Regulation,” appears in advance online form in The Review of Economic Studies. Costinot is a professor of economics and associate head of the MIT Department of Economics; Werning is the department’s Robert M. Solow Professor of Economics.
A sufficient statistic: Wages
A key to the study is that the scholars did not start with an a priori idea about whether or not taxes on robots and trade were merited. Rather, they applied a “sufficient statistic” approach, examining empirical evidence on the subject.
For instance, one study by MIT economist Daron Acemoglu and Boston University economist Pascual Restrepo found that in the U.S. from 1990 to 2007, adding one robot per 1,000 workers reduced the employment-to-population ratio by about 0.2 percent; each robot added in manufacturing replaced about 3.3 workers, while the increase in workplace robots lowered wages about 0.4 percent.
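As a rough illustration of how those per-robot estimates scale, here is a back-of-envelope sketch in Python. It simply extrapolates the figures quoted above linearly; the robot density chosen is a hypothetical input rather than a number from either study.

```python
# Back-of-envelope extrapolation of the Acemoglu-Restrepo estimates quoted above.
# The per-robot effects come from the article; the scenario (two extra robots per
# 1,000 workers) is hypothetical, and linear scaling is a simplification.

EMPLOYMENT_RATIO_DROP = 0.2   # percent drop in employment-to-population ratio per robot per 1,000 workers
WORKERS_REPLACED = 3.3        # manufacturing workers replaced per robot
WAGE_DROP = 0.4               # percent drop in wages per robot per 1,000 workers

def automation_effects(robots_per_1000_workers: float) -> dict:
    """Linearly scale the per-robot estimates to a given robot density."""
    return {
        "employment_ratio_drop_pct": robots_per_1000_workers * EMPLOYMENT_RATIO_DROP,
        "workers_replaced_per_1000_workers": robots_per_1000_workers * WORKERS_REPLACED,
        "wage_drop_pct": robots_per_1000_workers * WAGE_DROP,
    }

print(automation_effects(2.0))
# {'employment_ratio_drop_pct': 0.4, 'workers_replaced_per_1000_workers': 6.6, 'wage_drop_pct': 0.8}
```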
In conducting their policy analysis, Costinot and Werning drew upon that empirical study and others. They built a model to evaluate a few different scenarios, and included levers like income taxes as other means of addressing income inequality.
“We do have these other tools, though they’re not perfect, for dealing with inequality,” Werning says. “We think it’s incorrect to discuss taxes on robots and trade as if they are our only tools for redistribution.”
Still more specifically, the scholars used wage distribution data across all five income quintiles in the U.S. — the top 20 percent, the next 20 percent, and so on — to evaluate the need for robot and trade taxes. Where empirical data indicates technology and trade have changed that wage distribution, the magnitude of that change helped produce the robot and trade tax estimates Costinot and Werning suggest. This has the benefit of simplicity; the overall wage numbers help the economists avoid making a model with too many assumptions about, say, the exact role automation might play in a workplace.
“I think where we are methodologically breaking ground, we’re able to make that connection between wages and taxes without making super-particular assumptions about technology and about the way production works,” Werning says. “It’s all encoded in that distributional effect. We’re asking a lot from that empirical work. But we’re not making assumptions we cannot test about the rest of the economy.”
Costinot adds: “If you are at peace with some high-level assumptions about the way markets operate, we can tell you that the only objects of interest driving the optimal policy on robots or Chinese goods should be these responses of wages across quantiles of the income distribution, which, luckily for us, people have tried to estimate.”
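To make the idea of a wage-based sufficient statistic a little more concrete, here is a small illustrative sketch. The quintile wage responses below are hypothetical placeholders, and the summary it computes is not the optimal-tax formula from the paper, which maps such responses into tax rates through a full model.

```python
# Illustrative only: hypothetical wage responses by income quintile (bottom to top),
# expressed as percent changes following some increase in robot adoption.
# The Costinot-Werning paper maps estimates like these into optimal tax rates;
# that mapping is not reproduced here.
wage_response_by_quintile = [-0.6, -0.4, -0.2, 0.0, 0.3]

# One crude distributional summary: the gap between top- and bottom-quintile responses.
top_bottom_gap = wage_response_by_quintile[-1] - wage_response_by_quintile[0]
print(f"Top-minus-bottom quintile wage response: {top_bottom_gap:.1f} percentage points")
```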
Beyond robots, an approach for climate and more
Apart from its bottom-line tax numbers, the study contains some additional conclusions about technology and income trends. Perhaps counterintuitively, the research concludes that after many more robots are added to the economy, the impact that each additional robot has on wages may actually decline. At a future point, robot taxes could then be reduced even further.
“You could have a situation where we deeply care about redistribution, we have more robots, we have more trade, but taxes are actually going down,” Costinot says. If the economy is relatively saturated with robots, he adds, “That marginal robot you are getting in the economy matters less and less for inequality.”
The study’s approach could also be applied to subjects besides automation and trade. There is increasing empirical work on, for instance, the impact of climate change on income inequality, as well as similar studies about how migration, education, and other things affect wages. Given the increasing empirical data in those fields, the kind of modeling Costinot and Werning perform in this paper could be applied to determine, say, the right level for carbon taxes, if the goal is to sustain a reasonable income distribution.
“There are a lot of other applications,” Werning says. “There is a similar logic to those issues, where this methodology would carry through.” That suggests several other future avenues of research related to the current paper.
In the meantime, people who have envisioned a steep tax on robots are “qualitatively right, but quantitatively off,” Werning concludes.
Written by Peter Dizikes, MIT News
Additional background
Paper: “Robots, Trade, and Luddism: A Sufficient Statistic Approach to Optimal Technology Regulation”
JOURNAL
The Review of Economic Studies
Words prove their worth as teaching tools for robots
Exploring a new way to teach robots, Princeton researchers have found that human-language descriptions of tools can accelerate the learning of a simulated robotic arm as it lifts and uses a variety of tools.
The results build on evidence that providing richer information during artificial intelligence (AI) training can make autonomous robots more adaptive to new situations, improving their safety and effectiveness.
Adding descriptions of a tool’s form and function to the training process for the robot improved the robot’s ability to manipulate newly encountered tools that were not in the original training set. A team of mechanical engineers and computer scientists presented the new method, Accelerated Learning of Tool Manipulation with LAnguage, or ATLA, at the Conference on Robot Learning on Dec. 14.
Robotic arms have great potential to help with repetitive or challenging tasks, but training robots to manipulate tools effectively is difficult: Tools have a wide variety of shapes, and a robot’s dexterity and vision are no match for a human’s.
“Extra information in the form of language can help a robot learn to use the tools more quickly,” said study coauthor Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton who leads the Intelligent Robot Motion Lab.
The team obtained tool descriptions by querying GPT-3, a large language model released by OpenAI in 2020 that uses a form of AI called deep learning to generate text in response to a prompt. After experimenting with various prompts, they settled on using “Describe the [feature] of [tool] in a detailed and scientific response,” where the feature was the shape or purpose of the tool.
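A minimal sketch of what such a query might look like is below, assuming the legacy OpenAI completions API that was available for GPT-3; the model name and sampling parameters are illustrative, not the exact settings used in the study.

```python
# Sketch of querying GPT-3 for a tool description using the paper's prompt template,
# assuming the legacy OpenAI completions API. The model name and parameters are
# illustrative, not those used in the ATLA study.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def describe_tool(tool: str, feature: str) -> str:
    """Ask GPT-3 to describe a tool's shape or purpose."""
    prompt = f"Describe the {feature} of {tool} in a detailed and scientific response"
    response = openai.Completion.create(
        model="text-davinci-002",  # illustrative GPT-3 model name
        prompt=prompt,
        max_tokens=128,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()

print(describe_tool("a crowbar", "shape"))
print(describe_tool("a squeegee", "purpose"))
```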
“Because these language models have been trained on the internet, in some sense you can think of this as a different way of retrieving that information,” more efficiently and comprehensively than using crowdsourcing or scraping specific websites for tool descriptions, said Karthik Narasimhan, an assistant professor of computer science and coauthor of the study. Narasimhan is a lead faculty member in Princeton’s natural language processing (NLP) group, and contributed to the original GPT language model as a visiting research scientist at OpenAI.
This work is the first collaboration between Narasimhan’s and Majumdar’s research groups. Majumdar focuses on developing AI-based policies to help robots — including flying and walking robots — generalize their functions to new settings, and he was curious about the potential of recent “massive progress in natural language processing” to benefit robot learning, he said.
For their simulated robot learning experiments, the team selected a training set of 27 tools, ranging from an axe to a squeegee. They gave the robotic arm four different tasks: push the tool, lift the tool, use it to sweep a cylinder along a table, or hammer a peg into a hole. The researchers developed a suite of policies using machine learning training approaches with and without language information, and then compared the policies’ performance on a separate test set of nine tools with paired descriptions.
This approach is known as meta-learning, since the robot improves its ability to learn with each successive task. It’s not only learning to use each tool, but also “trying to learn to understand the descriptions of each of these hundred different tools, so when it sees the 101st tool it’s faster in learning to use the new tool,” said Narasimhan. “We’re doing two things: We’re teaching the robot how to use the tools, but we’re also teaching it English.”
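One simple way to picture how a description can enter the learning loop is sketched below: the tool's language description is embedded and concatenated with the robot's observation before the policy picks an action. This is a hypothetical illustration, not the ATLA architecture described in the paper, and it assumes PyTorch.

```python
# Hypothetical sketch of a language-conditioned policy: the tool description is
# embedded and concatenated with the robot's observation before action selection.
# Not the ATLA architecture from the paper; just a simple illustration.
import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    def __init__(self, obs_dim: int, text_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + text_dim, 128),
            nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, obs: torch.Tensor, text_embedding: torch.Tensor) -> torch.Tensor:
        # Concatenate robot observation features with the description embedding.
        return self.net(torch.cat([obs, text_embedding], dim=-1))

# Toy usage: in practice the text embedding would come from a pretrained language
# model run on the GPT-3 description of the tool.
policy = LanguageConditionedPolicy(obs_dim=32, text_dim=64, action_dim=7)
obs = torch.randn(1, 32)             # placeholder robot observation
text_embedding = torch.randn(1, 64)  # placeholder embedding of the tool description
action = policy(obs, text_embedding)
print(action.shape)  # torch.Size([1, 7])
```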
The researchers measured the success of the robot in pushing, lifting, sweeping and hammering with the nine test tools, comparing the results achieved with the policies that used language in the machine learning process to those that did not use language information. In most cases, the language information offered significant advantages for the robot’s ability to use new tools.
One task that showed notable differences between the policies was using a crowbar to sweep a cylinder, or bottle, along a table, said Allen Z. Ren, a Ph.D. student in Majumdar’s group and lead author of the research paper.
“With the language training, it learns to grasp at the long end of the crowbar and use the curved surface to better constrain the movement of the bottle,” said Ren. “Without the language, it grasped the crowbar close to the curved surface and it was harder to control.”
The research was supported in part by the Toyota Research Institute (TRI), and is part of a larger TRI-funded project in Majumdar’s research group aimed at improving robots’ ability to function in novel situations that differ from their training environments.
“The broad goal is to get robotic systems — specifically, ones that are trained using machine learning — to generalize to new environments,” said Majumdar. Other TRI-supported work by his group has addressed failure prediction for vision-based robot control, and used an “adversarial environment generation” approach to help robot policies function better in conditions outside their initial training.
The article, “Leveraging Language for Accelerated Learning of Tool Manipulation,” was presented Dec. 14 at the Conference on Robot Learning. Besides Majumdar, Narasimhan and Ren, coauthors include Bharat Govil, Princeton Class of 2022, and Tsung-Yen Yang, who completed a Ph.D. in electrical engineering at Princeton this year and is now a machine learning scientist at Meta Platforms Inc.
In addition to TRI, support for the research was provided by the U.S. National Science Foundation, the Office of Naval Research, and the School of Engineering and Applied Science at Princeton University through the generosity of William Addy ’82.
METHOD OF RESEARCH
Computational simulation/modeling
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Leveraging Language for Accelerated Learning of Tool Manipulation