Thursday, May 23, 2024

AI relies on mass surveillance, warns Signal boss


By AFP
May 23, 2024


Meredith Whittaker said concerns about surveillance and those about AI were 'two framings of the same thing' - Copyright AFP/File Mazen Mahdi


Daxia ROJAS

The AI tools that crunch numbers, generate text and videos, and find patterns in data rely on mass surveillance and exercise concerning control over our lives, the boss of encrypted messaging app Signal told AFP on Thursday.

Pushing back against the unquestioning enthusiasm at VivaTech in Paris, Europe’s top startup conference where industry players vaunt the merits of their products, Meredith Whittaker said concerns about surveillance and those about AI were “two framings of the same thing”.

“The AI technologies we’re talking about today are reliant on mass surveillance,” she said.

“They require huge amounts of data that are the derivatives of this mass surveillance business model that grew out of the 90s in the US, and has become the economic engine of the tech industry.”

Whittaker, who spent years working for Google before helping to organise a staff walkout in 2018 over working conditions, established the AI Now Institute at New York University in 2017.

She now campaigns for privacy and rails against the business models built on the extraction of personal data.

And she is clear that she has no confidence that the AI industry is developing in the right direction.

– Power imbalances –


AI systems are hungry for input data, but they also produce vast amounts of data as output.

Even if it is incorrect, she said, this output “has power to classify, order and direct our lives in ways that we should be equally concerned about”.

And she pointed to the power imbalances created by an industry controlled by “a handful of surveillance giants” that are “largely unaccountable”.

“Most of us are not the users of AI,” she said.

“Most of us are subjected to its use by our employers, by law enforcement, by governments, by whoever it is.

“They have their own goals but they may not be goals that benefit us or benefit society.”

She said a striking example was the way AI firms liked to say that they were helping to find solutions to the climate crisis.

In fact, she said, they were taking money from fossil fuel companies and their technology was being used to find new resources to extract.

“Because, of course, where is the revenue? It’s not in saving the climate,” she said.

“It is in massive contracts with BP, with Exxon, with other large oil and gas companies.”

Ultimately, she argued that Europeans should not be thinking in terms of competing with bigger American AI firms.

Another option could be “to reimagine tech that can serve more democratic and more rights-preserving or pluralistic societies”.

OpenAI says AI is 'safe enough' as scandals raise concerns

Seattle (AFP) – OpenAI CEO Sam Altman defended his company's AI technology as safe for widespread use, as concerns mount over potential risks and a lack of proper safeguards for ChatGPT-style AI systems.


Issued on: 21/05/2024 
OpenAI CEO Sam Altman insisted that OpenAI had put in 'a huge amount of work' to ensure the safety of its models © Jason Redmond / AFP

Altman's remarks came at a Microsoft event in Seattle, where he spoke to developers just as a new controversy erupted over an AI voice from OpenAI that closely resembled that of the actress Scarlett Johansson.

The CEO, who rose to global prominence after OpenAI released ChatGPT in 2022, is also grappling with questions about the safety of the company's AI following the departure of the team responsible for mitigating long-term AI risks.

"My biggest piece of advice is this is a special time and take advantage of it," Altman told the audience of developers seeking to build new products using OpenAI's technology.

"This is not the time to delay what you're planning to do or wait for the next thing," he added.

OpenAI is a close partner of Microsoft and provides the foundational technology, primarily the GPT-4 large language model, for building AI tools.

Microsoft has jumped on the AI bandwagon, pushing out new products and urging users to embrace generative AI's capabilities.

"We kind of take for granted" that GPT-4, while "far from perfect...is generally considered robust enough and safe enough for a wide variety of uses," Altman said.

Altman insisted that OpenAI had put in "a huge amount of work" to ensure the safety of its models.

"When you take a medicine, you want to know what's going to be safe, and with our model, you want to know it's going to be robust to behave the way you want it to," he added.

However, questions about OpenAI's commitment to safety resurfaced last week when the company dissolved its "superalignment" group, a team dedicated to mitigating the long-term dangers of AI.

In announcing his departure, team co-leader Jan Leike criticized OpenAI for prioritizing "shiny new products" over safety in a series of posts on X (formerly Twitter).

"Over the past few months, my team has been sailing against the wind," Leike said.

"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there."

This controversy was swiftly followed by a public statement from Johansson, who expressed outrage over a voice used by OpenAI's ChatGPT that sounded similar to her voice in the 2013 film "Her."

The voice in question, called "Sky," was featured last week in the release of OpenAI's more human-like GPT-4o model.

In a short statement on Tuesday, Altman apologized to Johansson but insisted the voice was not based on hers.

© 2024 AFP
