Tuesday, April 25, 2023

Google’s Bard AI exposed as a “pathological liar” by the company’s own employees

Story by MobileSyrup

Google employees recently fired shots at the company’s AI chatbot through internal messages, calling Bard “a pathological liar” and pleading with the company not to launch it.



A report from Bloomberg has uncovered discussions with 18 current and former Google workers, as well as screenshots of internal messages. Among them is an employee stating that Bard would regularly give users dangerous advice. Another worker said Bard is “worse than useless: please do not launch.”

As if employee complaints weren’t cause enough for concern, an internal safety team submitted a risk evaluation for the project, stating that the system was not ready for general use.

Google overruled the risk evaluation, opening up early access to the experimental chatbot in March instead.


The report from Bloomberg sheds light on the company’s decision to override ethical concerns in favour of competing with rivals such as OpenAI.

The decision looks especially bad for Google if you go back to early 2021, when the company fired two researchers after they authored a paper highlighting flaws in the same AI language systems that underpin chatbots like Bard.

Although some would argue that public testing is necessary for projects of this nature, there’s no denying that, with multiple cases of the tech giant cutting corners on its AI chatbot, a public launch was a risky choice.

Brian Gabriel, a spokesperson for Google, told Bloomberg that AI ethics remain a top priority for the company. Google Bard has also received an updates page that details new changes and additions to the chat service.

Source: Bloomberg Via: The Verge
