Friday, June 23, 2023


Get a clue, says panel about buzzy AI tech: It's being 'deployed as surveillance'


Image Credits: PATRICIA DE MELO MOREIRA/AFP / Getty Images

Connie Loizos
Thu, June 22, 2023 

Earlier today at a Bloomberg conference in San Francisco, some of the biggest names in AI turned up, including, briefly, Sam Altman of OpenAI, who just ended his two-month world tour, and Stability AI founder Emad Mostaque. Still, one of the most compelling conversations happened later in the afternoon, in a panel discussion about AI ethics.

The panel featured Meredith Whittaker (pictured above), president of the secure messaging app Signal; Credo AI co-founder and CEO Navrina Singh; and Alex Hanna, director of research at the Distributed AI Research Institute. The three had a unified message for the audience: Don't get so distracted by the promise and threats associated with the future of AI. It is not magic, it's not fully automated and -- per Whittaker -- it's already intrusive beyond anything that most Americans seemingly comprehend.

Hanna, for example, pointed to the many people around the world who are helping to train today's large language models, suggesting that these individuals are getting short shrift in some of the breathless coverage about generative AI, partly because the work is unglamorous and partly because it doesn't fit the current narrative about AI.

Said Hanna: "We know from reporting . . . that there is an army of workers who are doing annotation behind the scenes to even make this stuff work to any degree -- workers who work with Amazon Mechanical Turk, people who work with [the training data company] Sama -- in Venezuela, Kenya, the U.S., actually all over the world . . . They are actually doing the labeling, whereas Sam [Altman] and Emad [Mostaque] and all these other people who are going to say these things are magic -- no. There's humans. . . . These things need to appear as autonomous and it has this veneer, but there's so much human labor underneath it."

The comments made separately by Whittaker -- who previously worked at Google, co-founded NYU’s AI Now Institute and was an adviser to the Federal Trade Commission -- were even more pointed (and, judging by the audience's enthusiastic reaction, also impactful). Her message was that, enchanted as the world may be now by chatbots like ChatGPT and Bard, the technology underpinning them is dangerous, especially as power grows more concentrated among those at the top of the advanced AI pyramid.

Said Whittaker, "I would say maybe some of the people in this audience are the users of AI, but the majority of the population is the subject of AI . . . This is not a matter of individual choice. Most of the ways that AI interpolates our life [and] makes determinations that shape our access to resources [and] opportunity are made behind the scenes in ways we probably don't even know."


Whittaker gave an example of someone who walks into a bank and asks for a loan. That person can be denied and have "no idea that there's a system in [the] back probably powered by some Microsoft API that determined, based on scraped social media, that I wasn't creditworthy. I'm never going to know [because] there's no mechanism for me to know this." There are ways to change this, she continued, but overcoming the current power hierarchy in order to do so is next to impossible, she suggested. "I've been at the table for like, 15 years, 20 years. I've been at the table. Being at the table with no power is nothing."

Certainly, a lot of powerless people might agree with Whittaker, including current and former OpenAI and Google employees who've reportedly been leery at times of their companies' approach to launching AI products.

Indeed, Bloomberg moderator Sarah Frier asked the panel how concerned employees can speak up without fear of losing their jobs, to which Singh -- whose startup helps companies with AI governance -- answered: "I think a lot of that depends upon the leadership and the company values, to be honest. . . . We've seen instance after instance in the past year of responsible AI teams being let go."

In the meantime, there's much more that everyday people don't understand about what's happening, Whittaker suggested, calling AI "a surveillance technology." Facing the crowd, she elaborated, noting that AI "requires surveillance in the form of these massive datasets that entrench and expand the need for more and more data, and more and more intimate collection. The solution to everything is more data, more knowledge pooled in the hands of these companies. But these systems are also deployed as surveillance devices. And I think it's really important to recognize that it doesn't matter whether an output from an AI system is produced through some probabilistic statistical guesstimate, or whether it's data from a cell tower that's triangulating my location. That data becomes data about me. It doesn't need to be correct. It doesn't need to be reflective of who I am or where I am. But it has power over my life that is significant, and that power is being put in the hands of these companies."

Added Whittaker, the "Venn diagram of AI concerns and privacy concerns is a circle."

Whittaker obviously has her own agenda up to a point. As she said herself at the event, "there is a world where Signal and other legitimate privacy-preserving technologies persevere" because people grow less and less comfortable with this concentration of power.

But if there isn't enough pushback, and soon -- as progress in AI accelerates, so do its societal impacts -- we'll continue heading down a "hype-filled road toward AI," she said, "where that power is entrenched and naturalized under the guise of intelligence and we are surveilled to the point [of having] very, very little agency over our individual and collective lives."

This "concern is existential, and it's much bigger than the AI framing that is often given."

We found the discussion captivating; if you'd like to see the whole thing, Bloomberg has since posted it here.








After jobs warning, Germany's Axel Springer says AI can liberate journalists

Reuters
Thu, June 22, 2023



BERLIN (Reuters) - A senior executive at German media giant Axel Springer on Thursday said artificial intelligence would free journalists to devote more time to core reporting, days after an internal email warned the technology would lead to significant job losses.

"For newsrooms, AI opens up new paths and freedoms. Journalists can outsource tedious work to AI and devote more time and energy to their core tasks," Chief Information Officer Samir Fadlallah told Reuters on the sidelines of a media conference in Berlin.

The company will address challenges around the technology "constructively," he said.

In an email seen by Reuters earlier this week, the publisher outlined a "digital only" roadmap for its mass-circulation Bild tabloid to be implemented by the beginning of 2024, saying that its "AI offensive" meant that many jobs would become redundant.

"The functions of editorial managers, page editors, proofreaders, secretaries and photo editors will no longer exist as they do today," the editorial leadership team wrote to Bild staff.

While Springer did not comment on the number of jobs at risk, company sources told Reuters a low three-digit number of employees would ultimately have to leave.

Fadlallah said his focus was on regulatory challenges and the opportunities the technology opens up for consumers.

"We see great potential in Generative AI to provide our readers and users with even more attractive and individually tailored products," he said, adding that it offers users "completely new opportunities for interaction."

"The focus is certainly on topics such as necessary regulation, remuneration for the use of our content as training material for large language models, and data protection," he said.

The company has said it aims to improve earnings at its flagship Bild and Welt publications by 100 million euros ($109.14 million) by 2025 through revenue increases and cost savings, and intends to fully stop producing print edition newspapers in the medium term.

Axel Springer is active in more than 40 countries and employs more than 18,000 people worldwide. Alongside its German titles, the firm owns English-language news website Politico, U.S. media company Insider and classified portals StepStone and AVIV.

($1 = 0.9163 euros)

(Reporting by Klaus Lauer, Writing by Anna Mackenzie)


OpenAI CEO Sam Altman Says AI Is ‘Most Important Step Yet’ For Humans and Tech

Priya Anand and Emily Chang
Thu, June 22, 2023 


(Bloomberg) -- Sam Altman, chief executive officer of artificial intelligence startup OpenAI Inc., said there are many ways that rapidly progressing AI technology “could go wrong.” But he argued that the benefits outweigh the costs: “We work with dangerous technology that could be used in dangerous ways very frequently.”

Altman addressed growing concern about the rapid progress of AI in an interview onstage at the Bloomberg Technology Summit in San Francisco. Altman has also publicly pushed for increased regulation of artificial intelligence in recent months, speaking frequently with officials around the world about responsible stewardship of AI.

Despite the potential dangers of what he called an exponential technological shift, Altman spoke about several areas where AI could be beneficial, including medicine, science and education.

“I think it’d be good to end poverty,” he said. “But we’re going to have to manage the risk to get there.”

OpenAI has been valued at more than $27 billion, putting it at the forefront of the booming field of venture-backed AI companies. Addressing whether he would financially benefit from OpenAI’s success, Altman said, “I have enough money,” and stressed that his motivations were not financial. “This concept of having enough money is not something that is easy to get across to other people,” he said, adding that it’s human nature to want to be useful and work on “something that matters.”

“I think this will be the most important step yet that humanity has to get through with technology,” Altman said. “And I really care about that.”

Elon Musk, who helped Altman start OpenAI, has subsequently been critical of the organization and its potential to do harm. Altman said that Musk “really cares about AI safety a lot,” and that his criticism was “coming from a good place.” Asked about the theoretical “cage match” between Musk and his fellow billionaire Mark Zuckerberg, Altman joked: “I would go watch if he and Zuck actually did that.”

OpenAI’s products — including the chatbot ChatGPT and image generator Dall-E — have dazzled audiences. They’ve also helped spark a multibillion-dollar frenzy among venture capital investors and entrepreneurs who are vying to help lay the foundation of a new era of technology.

To generate revenue, OpenAI is giving companies access to the application programming interfaces needed to create their own software that makes use of its AI models. The company is also selling access to a premium version of its chatbot, called ChatGPT Plus. OpenAI doesn’t release information about total sales.


Microsoft Corp. has invested a total of $13 billion in the company, people familiar with the matter have said. Much of that will be used to pay Microsoft back for using its Azure cloud network to train and run OpenAI’s models.

The speed and power of the fast-growing AI industry has spurred governments and regulators to try to set guardrails around its development. Altman was among the artificial intelligence experts who met with President Joe Biden this week in San Francisco. The CEO has been traveling widely and speaking about AI, including in Washington, where he told US senators that “if this technology goes wrong, it can go quite wrong.”

Major AI companies, including Microsoft and Alphabet Inc.’s Google, have committed to participating in an independent public evaluation of their systems. But the US is also seeking a broader regulatory push. The Commerce Department said earlier this year that it was considering rules that could require AI models to go through a certification process before being released.


Last month, Altman signed onto a brief statement that included support from more than 350 executives and researchers saying “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

Despite dire warnings from technology leaders, some AI researchers contend that artificial intelligence isn’t advanced enough to justify fears that it will destroy humanity, and that focusing on doomsday scenarios is only a distraction from issues like algorithmic bias, racism and the risk of rampant disinformation.

OpenAI’s ChatGPT and Dall-E, both released last year, have inspired startups to incorporate AI into a vast array of fields, including financial services, consumer goods, health care and entertainment. Bloomberg Intelligence analyst Mandeep Singh estimates the generative AI market could grow by 42% to reach $1.3 trillion by 2032.

--With assistance from Dina Bass.


Amazon Is Spending $100 Million to Teach Cloud Customers About AI
Dina Bass
Thu, June 22, 2023 



(Bloomberg) -- Amazon.com Inc.’s cloud unit is building a program to help customers develop and deploy new kinds of artificial intelligence products as the biggest seller of cloud services tries to match Microsoft and Google in the market for so-called generative AI.

Amazon Web Services is investing $100 million to set up the AWS Generative AI Innovation Center, which will link customers with company experts in AI and machine learning. They’ll help a range of clients in health care, financial services and manufacturing build customized applications using the new technology. Highspot, Twilio, Ryanair and Lonely Planet will be early users of the innovation center, Amazon said.

The goal is to help sell more cloud services, convincing clients to turn to AWS as they build new generative AI applications rather than Microsoft Corp.’s Azure, which has seized an early lead owing to its partnership with ChatGPT maker OpenAI, or Alphabet Inc.’s Google, which pioneered much of the early technology underpinning this new frontier.

“We will bring our internal AWS experts free-of-charge to a whole bunch of AWS customers, focusing on folks with significant AWS presence, and go help them turbocharge their efforts to get real with generative AI, get beyond the talk,” AWS Chief Executive Officer Adam Selipsky said Thursday at Bloomberg’s technology conference in San Francisco.

Amazon unveiled its own generative AI tools earlier this year, but longtime employees and customers deemed the announcement uncharacteristically vague, Bloomberg reported in May. One customer who tested the tools awarded the technology an “incomplete” grade, while people familiar with AWS product launches wondered if Amazon released the AI tools to counter perceptions it has fallen behind Microsoft and Google.

Amazon has denied its generative AI tools were rushed or incomplete and said the technology is ready for customers to test and provide feedback. Asked about Amazon’s position in the AI race, Selipsky said: “Are we really going to have a conversation about three steps into a 10k race? Amazon has always taken a much more long-term view of the world than any other company.”

With the viral releases of OpenAI’s Dall-E image-generation software and the ChatGPT chatbot over the past year, companies are rushing to incorporate the technology into their products and services, and the cloud giants are positioning themselves to cash in. Bloomberg Intelligence analyst Mandeep Singh estimates the market for generative AI, in which AI models analyze volumes of data and use it to generate new images, texts, audio and video, could grow by 42% to reach $1.3 trillion by 2032.


--With assistance from Brad Stone and Natalie Lung.

(Updated with customers with early access to the innovation center.)




The 'Black effect': Overcoming the challenge of making AI more inclusive to tap new consumers

Damon Embling
Fri, 23 June 2023


Marketers and advertisers have gathered on the French Riviera for this year’s Cannes Lions Festival of Creativity to hear how to be more authentic and inclusive in an increasingly tech-driven marketplace.

Bombarded with advertising messages online every day, consumers are becoming increasingly savvy when it comes to deciding what kind of content and brands to engage with.

Going beyond prices and offers, many are scrutinising companies not only for their sustainability credentials, but their commitment to diversity and inclusion too.


'Untapped' Black consumers

At Cannes Lions, creative minds have been told to wake up to the "untapped" potential of the Black consumer market, during a session titled 'Harness the Black Effect: Diversity as a Game Changer for Brands'.

The audience heard that Black consumers represent more than €1.6 trillion in annual buying power in the US alone. Yet Black audiences make up less than two per cent of American advertising and marketing spend.


"In 2023, investment is still really low towards dedicated Black-facing campaigns and there's quite a bit of an opportunity that’s left on the table thereby," Brianne Boles-Marshall, Diverse Media Strategy and Investment Lead at American automaker General Motors (GM) told Euronews Next in Cannes.

"It's just like anything else, if you're not speaking directly with an audience, they may feel like your message is not for them. So, if you want to change the game of your brands, if you want to let [the consumer] know that yes, our products are for you, you speak with them, not at them".

Boles-Marshall added that research has shown that the "Black Effect," the impact of Black consumers, influences mainstream consumer behaviours to a greater degree.

Making AI inclusive

Advertisers, marketers, and the media are tackling inclusivity in an increasingly tech-driven environment, with the rise of technologies like generative artificial intelligence (AI) that create content.

It’s an extra challenge for businesses as they strive to be more diverse and representative.

"We have to evolve with the technology, and we have to make sure the technology, as it evolves, is inclusive," said Boles-Marshall. "When you talk about AI and some of these new-age kind of approaches, we want to make sure that AI is being mindful of how it’s implementing across diverse consumers".

She continued: "We’re not trying to usurp Black consumers; we’re trying to leverage the power they already have. So as long as tech obeys that rule too, I think we’ll be in good shape".


Being in tune with consumers

Younger people, in the 18-25 age group, are taking greater notice of inclusive advertising when making purchase decisions, according to consumer research carried out by Deloitte in 2021.

But the consulting firm highlights that it is not enough to just market inclusiveness or diversity, stressing its data shows 57 per cent of consumers are more loyal to brands that commit to addressing social inequities in their actions.

Deloitte also points out that getting future customers onboard involves brands demonstrating a range of equitable outcomes, including in recruitment and retention, as well as marketing products for users of differing abilities.

“Words alone don’t cut it. If you don't pass the smell test, it's actually going to cost you more money to better your reputation in the marketplace than if you had just followed through with what you had said in the first place,” Boles-Marshall, from GM, told Euronews Next.

“Consumers are savvy, they are investigating things and they're sharing what they've investigated with other consumers. It's wildfire".


Electrifying inclusivity

GM is going through a transformation of its own right now, setting itself a goal of electrifying most of the vehicles it manufactures by 2035. The company also wants to be carbon neutral five years later, a journey it says will be fully inclusive.

“At General Motors, we have dedicated media budgets towards Black consumers. We've targeted Black-owned media in consumer segments, but we have also taken note of how we're delivering against all diverse segments,” said Boles-Marshall.

“For us, it's all about everybody in, we want to make sure that everybody sees themselves in our products and everybody's locking arms to drive our products into the all-electric future".

For more from this interview at the 2023 Cannes Lions Festival of Creativity, watch the video in the media player.



