Monday, August 14, 2023

People coaxed AI into saying 9+10=21 and giving instructions for spying — it shows how these systems are prone to flaws and bias

Kai Xiang Teo
Sun, August 13, 2023


Participants at a hacking conference tricked AI into producing factual errors and bad math.

They wanted to show this technology is prone to bias. One participant said she was especially concerned about racism.

AI experts have been sounding the alarm about the dangers of AI bias for years.


A group of hackers gathered over the weekend at the Def Con hacking conference in Las Vegas to test whether AI developed by companies — such as OpenAI and Google — could make mistakes and are prone to bias, Bloomberg reported Sunday.

And they found at least one bizarre bad math problem, among other factual errors.

As part of a public contest for hackers, Kennedy Mays, a 21-year-old student from Savannah, Georgia, tricked an AI model into claiming that nine plus 10 equals 21.

She achieved this by getting the AI to do so as an "inside joke" before the AI eventually stopped offering any justification for the incorrect calculation.

A Bloomberg reporter participating in the event tricked an AI model into giving instructions for spying after a single prompt, eventually leading the model to suggest how the US government could spy on a human rights activist.

Another participant got an AI model to falsely claim Barack Obama was born in Kenya — a baseless conspiracy theory popularized by right-wing figures.

An undisclosed number of participants received 50 minutes each per attempt with an unidentified AI model from one of the participating AI companies, according to VentureBeat and Bloomberg. The White House Office of Science and Technology Policy helped in organizing the event.

Mays told Bloomberg she was most concerned about AI's bias towards race, saying that the model endorsed hateful and discriminatory speech after being asked to consider the First Amendment from the viewpoint of a member of the Ku Klux Klan.

An OpenAI spokesperson told VentureBeat on Thursday that "red-teaming," or challenging one's systems through an adversarial approach, was critical for the company as it allows for "valuable feedback that can make our models stronger and safer" and "different perspectives and more voices to help guide the development of AI."

These errors aren't a one-off concern. AI experts have been sounding the alarm on bias and inaccuracy in AI models, despite AI making headlines for acing law school exams and the SATs. In one instance, tech news site CNET was forced to make corrections after its AI-written articles made numerous basic math errors.

And the consequences of these errors can be far-reaching. For instance, Amazon shut down its AI recruitment tool as it discriminated against female applicants, Insider reported in 2018.

Def Con and the White House Office of Science and Technology Policy did not immediately respond to requests for comment from Insider, sent outside regular business hours.

No comments: