A.I.
Google’s AI Search Feature Faces Criticism After Giving Dangerous Advice; Users Are Told To Glue Pizza and Eat Rocks
Social media users have received strange and dangerous search responses, which appear to have been provided by Google's new AI Overviews feature.
Issues With Google's New AI Tool
On May 14, Google launched a new feature for its long-standing search business: an AI tool known as AI Overviews, designed to help users grasp a topic quickly by combining information from different sources.
However, the new feature is now facing criticism after providing erratic and inaccurate responses. According to various social media and news reports, the AI has reportedly told users to add glue to their pizzas, eat rocks, and clean their washing machines with chlorine. In another instance, the AI suggested jumping off the Golden Gate Bridge when a user searched: "I'm feeling depressed."
The experimental tool summarizes search results using the Gemini AI model. It has been rolled out to some users in the U.S. ahead of a planned worldwide release later this year.
AI Overviews has already caused widespread dismay across social media. Users claim that, on some occasions, the tool generated summaries using comedic Reddit posts and articles from the satirical website The Onion as its sources.
According to a screenshot posted on X, one user's query about pizza received the response: "You can also add about ⅛ cup of non-toxic glue to the sauce to give it more tackiness." The answer appears to trace back to a decade-old joke posted as a comment on Reddit.
Other inaccurate responses include claims that former U.S. President John Adams graduated from the University of Wisconsin 21 times, that Barack Obama is a Muslim, that users should eat a rock a day to aid their digestion, and that a dog has played in the NHL, NBA, and NFL.
In response to the erroneous results, Google representatives said the examples stemmed from uncommon queries and are not representative of most people's experiences. The company also stated that it performed extensive testing before launching the new AI tool, and that it takes action against violations of its policies as it continues to refine its systems.
How Do Google's AI Overviews Work?
Google AI Overviews are AI-generated summaries of search results, combining information from web pages in the results with Google's own knowledge base.
Formerly known as the Google Search Generative Experience (SGE), AI Overviews is powered by the Gemini language model. It aims to give users a quick understanding of a search topic by presenting key information upfront, so they do not need to scan through articles to find the answers they are looking for.
Responses from AI Overviews are placed at the top of the search results page, above the human-written results. The information is drawn from the web pages listed below, and those pages are cited as sources in the overview. Under the AI-generated summary, the page displays links to all the sources used, which can be clicked to check where the information was pulled from.
As new tools flourish, AI 'fingerprints' on scientific papers could damage trust in vital research
Experts are warning that the "fingerprints" of generative artificial intelligence (GenAI) can be found in scientific papers, including peer-reviewed ones.
Are some researchers using too much artificial intelligence (AI) in their scientific papers? Experts say that "fingerprints" of generative AI (GenAI) can be found in an increasing number of studies.
A recent preprint paper, which has not yet been peer-reviewed, analysed writing style and estimated that at least 60,000 papers were probably "polished" using AI in some way.
"It's not to say that we knew how much LLM [large language model] work was involved in them, but certainly, these are immensely high shifts overnight," Andrew Gray, a librarian at University College London, told Euronews Next, adding that these types of "fingerprints" can be expected even if the tools were used for mere copyediting.
While certain shifts can be linked to changes in how people write, the evolution of some words is "staggering".
"Based on what we're seeing, those numbers look like they're going steadily up," Gray said.
It has already started causing waves. In February, a peer-reviewed study containing AI-generated images that the authors openly credited to the Midjourney tool was published in the journal Frontiers in Cell and Developmental Biology and went viral on social media.
The journal has since retracted the study and apologised "to the scientific community".
"There's very few that explicitly mention the use of ChatGPT and similar tools," Gray said about the papers he analysed.
New tools pose trust issues
While GenAI may help speed up the editing process, such as when an author is not a native speaker of the language they are writing in, a lack of transparency regarding the use of these tools is concerning, according to experts.
"There is concern that experiments, for example, are not being carried out properly, that there is cheating at all levels," Guillaume Cabanac, a professor of computer science at the University of Toulouse, told Euronews Next.
Nicknamed a "deception sleuth" by Nature, Cabanac tracks fake science and dubious papers.
"Society gives credit to science but this credit can be withdrawn at any time," he added, explaining that misusing AI tools could damage the public’s trust in scientific research.
With colleagues, Cabanac developed a tool called the Problematic Paper Screener to detect "tortured phrases" – those that are found when a paraphrasing tool is used, for example, to avoid plagiarism detection.
But since GenAI tools became publicly available, Cabanac has noticed a new kind of fingerprint appearing in papers, such as the word "regenerate" (the label of a button that appears at the end of AI chatbots' answers) or sentences beginning with "As an AI language model".
They are telltale signs of text that was taken from an AI tool.
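As a purely illustrative sketch (this is not Cabanac's Problematic Paper Screener, and the phrase list is an assumption based only on the examples quoted above), a simple scan for such telltale phrases might look like this:

# Hypothetical example only: scan a paper's text for telltale chatbot phrases
# like those mentioned above; real screening tools are far more sophisticated.
import re

FINGERPRINT_PATTERNS = [
    r"as an ai language model",
    r"\bregenerate response\b",  # stray button label sometimes pasted along with an answer
]

def find_fingerprints(text):
    """Return the fingerprint patterns found in the given text."""
    lowered = text.lower()
    return [pattern for pattern in FINGERPRINT_PATTERNS if re.search(pattern, lowered)]

sample = "As an AI language model, I cannot verify these experimental results."
print(find_fingerprints(sample))  # -> ['as an ai language model']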
“I only detect a tiny fraction of what I assume to be produced today, but it's enough to establish a proof of concept,” Cabanac said.
One of the issues is that AI-generated content will likely be increasingly difficult to spot as the technology progresses.
“It's very easy for these tools to subtly change things, or to change things in a way that maybe you didn't quite anticipate with a secondary meaning. So, if you're not checking it carefully after it's gone through the tool, there's a real risk of errors creeping in,” Gray said.
Harder to spot in the future
The peer-review process is meant to prevent blatant mistakes from appearing in journals, but it does not always succeed, as Cabanac points out on social media.
Some publishers have released guidelines regarding the use of AI in submitted publications.
The journal Nature said in 2023 that an AI tool could not be a credited author on a research paper, and that any researchers using AI tools must document their use.
Gray fears that these papers will be harder to spot in the future.
"As the tools get better, we would expect fewer really obvious [cases]," he said, adding that publishers should give "serious thought" to the guidelines and expected disclosure.
Both Gray and Cabanac urged authors to be cautious, with Cabanac calling to flag suspicious papers and regularly check for retracted ones.
"We can't allow ourselves to quote, for example, a study or a scientific article that has been retracted," Cabanac said.
"You always have to double-check what you're basing your work on".
He also questioned the soundness of the peer-review process, which has proved deficient in some cases.
"Making assessments badly, too quickly or helped by ChatGPT without rereading, that's not good for science," he said.
Published: 25 May 2024
By Elizabeth Seger
This week’s election announcement has set all political parties firmly into campaign mode, and over the next 40 days the public will be weighing up who will get their vote on 4 July.
This democratic moment, however, will take place against the backdrop of a new and largely untested threat: generative AI. In the lead-up to the election, the strength of our electoral integrity is likely to be tested by the spread of AI-generated content and deepfakes – an issue that over 60% of the public are concerned about, according to recent Demos and Full Fact polling.
Our new paper looks at the near- and long-term solutions at our disposal for bolstering the resilience of our democratic institutions in the modern technological age. We explore the four most pressing mechanisms by which generative AI challenges the stability of democracy, and how to mitigate them.
Last month, Demos, alongside key partners, issued an Open Letter calling on all UK political parties to form a cross-party agreement on their responsible use of generative AI ahead of the election. The open letter is backed by trusted organisations such as Full Fact and the Electoral Reform Society, leading universities, and key figures including Martin Lewis, Founder and Chair of Money Saving Expert and the Money and Mental Health Policy Institute (MMHPI), and Wikipedia Founder, Jimmy Wales.
Read the full Open Letter here.