Tuesday, October 10, 2023

UK data watchdog issues Snapchat enforcement notice over AI chatbot

Henry Saker-Clark, PA Deputy Business Editor
Fri, 6 October 2023 


The UK’s information watchdog has said Snapchat may be required to “stop processing data” related to its AI chatbot after issuing a preliminary enforcement notice against the technology business.

UK Information Commissioner John Edwards said the provisional findings of a probe into the company suggested a “worrying failure” by Snap, the app’s parent business, over potential privacy risks.

The Information Commissioner’s Office (ICO) said it issued Snap with a “preliminary enforcement notice over potential failure to properly assess the privacy risks” posed by its generative AI chatbot My AI, particularly to children using it.

The regulator stressed that its findings are “provisional” and that no conclusions should be drawn at this stage.

However, it said that if a final enforcement notice were to be adopted, Snap might not be able to offer the My AI function to UK users until the company carries out “an adequate risk assessment”.

Mr Edwards said: “The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching My AI.

“We have been clear that organisations must consider the risks associated with AI, alongside the benefits.

“Today’s preliminary enforcement notice shows we will take action in order to protect UK consumers’ privacy rights.”

A Snap spokeswoman said: “We are closely reviewing the ICO’s provisional decision.

“Like the ICO, we are committed to protecting the privacy of our users.

“In line with our standard approach to product development, My AI went through a robust legal and privacy review process before being made publicly available.

“We will continue to work constructively with the ICO to ensure they’re comfortable with our risk assessment procedures.”

BBC blocks ChatGPT maker from using its content over AI copyright concerns


James Warrington
Fri, 6 October 2023 

The BBC has blocked ChatGPT from using its content amid growing fears that AI tools are breaching copyright.

The public service broadcaster said it has taken steps to prevent companies such as ChatGPT maker OpenAI from trawling its websites to train their software.
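
A minimal sketch of how such a block is typically implemented (an illustration, not the BBC’s published configuration): publishers add a rule for OpenAI’s crawler user agent, GPTBot, to their robots.txt file. The snippet below uses Python’s standard urllib.robotparser to show the effect of such a rule; the domain is a placeholder.

```python
# Illustrative only: a hypothetical robots.txt rule of the kind publishers
# use to refuse OpenAI's GPTBot crawler, checked with Python's standard library.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# GPTBot is refused everywhere on the (placeholder) site...
print(parser.can_fetch("GPTBot", "https://news.example.org/article"))       # False
# ...while other user agents are unaffected by this rule.
print(parser.can_fetch("Mozilla/5.0", "https://news.example.org/article"))  # True
```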

In a blog post, Rhodri Talfan Davies, the BBC’s director of nations, said the so-called scraping of BBC websites without permission was not in the interest of licence fee payers.

He said that the corporation was also examining other threats from AI, including how it could impact website traffic and lead to a surge in disinformation.

Mr Talfan Davies said: “It is already clear that Generative AI is likely to introduce new and significant risks if not harnessed properly.

“These include ethical issues, legal and copyright challenges, and significant risks around misinformation and bias.

“These risks are real and cannot be underestimated. This wave of innovation will demand vision and vigilance in equal measure.”

Rhodri Talfan Davies, the BBC’s director of nations, said that both ‘vision and vigilance’ would be needed to manage AI risks - BBC

It follows similar moves by The Guardian, the New York Times and CNN, which have all blocked ChatGPT from accessing their websites.

BBC Good Food, which is operated under licence by magazine publisher Immediate Media, has also rolled out a ban.

News publishers are becoming increasingly concerned that tech giants have scraped data from their websites to help train AI software without permission.

The Daily Mail is currently gearing up for a legal battle with Google over claims the company used hundreds of thousands of its online news stories to train the Bard chatbot.

Meanwhile, the News Media Association (NMA), which represents titles including The Times, The Guardian and The Telegraph, has warned a flood of fake news generated by AI risks “polluting human knowledge”.

Despite the concerns, some parts of the news industry are hoping to establish landmark deals that would see tech giants pay for the use of content.

The BBC said it wanted to agree a “more structured and sustainable approach” with AI firms, though it is understood licensing deals are not yet being discussed.

The broadcaster will also start rolling out a number of small projects to experiment with AI. These could be deployed in areas such as news headlines and archive footage, as well as to support the production process.

The threat from AI poses a unique challenge to the BBC given its licence fee funding model.

Leo Kelion, a former BBC tech editor, questioned whether there was a tension between the decision to block ChatGPT and the broadcaster’s remit.

He said: “I get there’s a desire to control who uses it and maybe get a fresh source of income. But it seems a shame to deprive AI models of a source of trustworthy output that strives to be impartial. There’s soft power for the BBC and the UK in its inclusion.”

The BBC insisted that AI could provide a significant opportunity for the organisation to “deepen and amplify our mission”, provided it was used responsibly.

Bosses said the organisation would always act in the best interests of the public, would prioritise talent and creativity over technology and would be transparent about its use of AI.

Mr Talfan Davies said: “We believe a responsible approach to using this technology can help mitigate some of these risks and enable experimentation.”
