Aditi Ganguly
Wed, August 23, 2023
The U.S. is lagging when it comes to regulating artificial intelligence (AI).
Several industry leaders have called for regulation of the technology following the viral success of the generative AI platform ChatGPT; some speculated the push was motivated by those companies' commanding lead in the AI space. Worries also persist that AI could lead to job displacement and that unscrupulous actors might unlawfully exploit the intellectual property of businesses, artists and others. These concerns have already resulted in a series of legal actions.
European lawmakers approved a draft of the AI Act in June. It is regarded as the "world's first comprehensive AI law" and one of the toughest sets of regulatory guidelines for the space. Shortly after the guidelines for the AI Act were released, more than 150 executives signed an open letter to the European Commission pushing back against the aggressive policies.
"In our assessment, the draft legislation would jeopardize Europe's competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing," the letter stated.
White House Joins The Game
The White House has been urging companies to pledge to develop AI in a responsible manner amid widespread concerns regarding the potential for the technology to amplify misinformation and cybercrime, presenting a national security threat.
Key figures in artificial intelligence, including Microsoft Corp., Alphabet Inc.'s Google and OpenAI, convened at the White House in mid-July. In a push led by the federal government, the industry leaders pledged to incorporate protective measures into their advancements in a technology that has garnered substantial attention on Wall Street and caused concern among numerous global leaders.
Billionaire polymath Elon Musk was a co-founder of ChatGPT maker OpenAI but later parted ways with the company. He has been a long-standing advocate of AI regulation, warning that a "Terminator-like" outcome awaits if development continues unchecked. He hosted a Twitter Spaces event last month alongside two notable members of the U.S. House of Representatives, with a primary focus on artificial intelligence. Despite this, AI continues to advance and thrive: startups like AvaWatz have already raised millions from retail investors for cooperative AI drone teams, and companies like Microsoft continue to invest billions into the space.
"I've known Elon for years," U.S. Rep. Ro Khanna (D-Calif.) said. "We will be examining the potential benefits and downsides of AI."
Rep. Mike Gallagher (R-Wis.) touted Musk's knowledge base in the field before the event, stating that he is "the foremost figure aligned with the AI cautious approach, representing those who harbor concerns about the existential threats and advocate for a temporary halt."
During a White House gathering, Biden addressed mounting apprehension over the possible exploitation of artificial intelligence for disruptive ends. He emphasized the need for a discerning and watchful stance toward the risks emerging technologies pose to the integrity of U.S. democracy.
Regulating AI: Next Steps And Challenges
"We'll see more technology change in the next 10 years, or even in the next few years, than we've seen in the last 50 years. That has been an astounding revelation to me, quite frankly," President Joe Biden said about the robust adoption of artificial intelligence.
As part of the White House's ongoing efforts to regulate AI, Biden convened a meeting with executives from seven leading AI companies at the White House on July 21. Notable attendees included pioneers such as OpenAI, Microsoft, Meta Platforms Inc., Amazon.com Inc., Inflection AI Inc. and Anthropic.
As a component of this initiative, the companies pledged to establish a mechanism to "watermark" AI-generated content, encompassing text, images, audio and video. The embedded watermark, implemented through technical means, is intended to facilitate the identification of content generated or manipulated by AI. Its purpose is to help users detect deep-fake visuals or audio that might falsely depict violence, enhance fraudulent schemes or manipulate images of politicians to portray them negatively.
The companies also committed to prioritizing users' privacy as they advance AI and reiterated their dedication to developing AI-driven solutions responsibly.