"President Biden should call for, and Congress should legislate, a moratorium on the deployment of new generative AI technologies," Public Citizen's Robert Weissman argued.
A growing number of experts are calling for a pause on advanced AI development and deployment.
(Photo: Monsitj/Getty Images)
BRETT WILKINS
May 04, 2023
As the White House on Thursday unveiled a plan meant to promote "responsible American innovation in artificial intelligence," a leading U.S. consumer advocate added his voice to the growing number of experts calling for a moratorium on the development and deployment of advanced AI technology.
"Today's announcement from the White House is a useful step forward, but much more is needed to address the threats of runaway corporate AI," Robert Weissman, president of the consumer advocacy group Public Citizen, said in a statement.
"But we also need more aggressive measures," Weissman asserted. "President Biden should call for, and Congress should legislate, a moratorium on the deployment of new generative AI technologies, to remain in effect until there is a robust regulatory framework in place to address generative AI's enormous risks."
The White House says its AI plan builds on steps the Biden administration has taken "to promote responsible innovation."
"These include the landmark Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year," the administration said.
The White House plan includes $140 million in National Science Foundation funding for seven new national AI research institutes—there are already 25 such facilities—that "catalyze collaborative efforts across institutions of higher education, federal agencies, industry, and others to pursue transformative AI advances that are ethical, trustworthy, responsible, and serve the public good."
The new plan also includes "an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, to participate in a public evaluation of AI systems."
Representatives of some of those companies, including Google, Microsoft, Anthropic, and OpenAI—creator of the popular ChatGPT chatbot—met with Vice President Kamala Harris and other administration officials at the White House on Thursday. According to The New York Times, President Joe Biden "briefly" dropped in on the meeting.
"AI is one of today's most powerful technologies, with the potential to improve people's lives and tackle some of society's biggest challenges. At the same time, AI has the potential to dramatically increase threats to safety and security, infringe civil rights and privacy, and erode public trust and faith in democracy," Harris said in a statement.
"The private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products," she added.
Thursday's White House meeting and plan come amid mounting concerns over the potential dangers posed by artificial intelligence on a range of issues, including military applications, life-and-death healthcare decisions, and impacts on the labor force.
In late March, tech leaders and researchers published an open letter, signed by more than 27,000 experts, scholars, and others, urging "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
Noting that AI developers are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control," the letter asks:
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?
"Such decisions must not be delegated to unelected tech leaders," the signers asserted. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
Last month, Public Citizen argued that "until meaningful government safeguards are in place to protect the public from the harms of generative AI, we need a pause."
"These systems demonstrate capabilities in question answering, and the generation of text, image, and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new," the group said in a report. "However, they are prone to hallucination, routinely biased, and can be tricked into serving nefarious aims, highlighting the complicated ethical challenges associated with their deployment."
According to the annual AI Index Report published last month by the Stanford Institute for Human-Centered Artificial Intelligence, nearly three-quarters of researchers believe artificial intelligence "could soon lead to revolutionary social change," while 36% worry that AI decisions "could cause nuclear-level catastrophe."