
Saturday, May 18, 2024

 

Study: Large language models can’t effectively recognize users’ motivation, but can support behavior change for those ready to act



Large language model-based chatbots can’t effectively recognize users’ motivation when they are hesitant about making healthy behavior changes, but they can support those committed to taking action, say University of Illinois Urbana-Champaign researchers



Peer-Reviewed Publication

UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN, NEWS BUREAU

Image: University of Illinois Urbana-Champaign information sciences doctoral student Michelle Bak. Credit: Courtesy Michelle Bak




CHAMPAIGN, Ill. — Large language model-based chatbots have the potential to promote healthy changes in behavior. But researchers from the ACTION Lab at the University of Illinois Urbana-Champaign have found that the artificial intelligence tools don’t effectively recognize certain motivational states of users and therefore don’t provide them with appropriate information.

Michelle Bak, a doctoral student in information sciences, and information sciences professor Jessie Chin reported their research in the Journal of the American Medical Informatics Association.

Large language model-based chatbots — also known as generative conversational agents — have been used increasingly in healthcare for patient education, assessment and management. Bak and Chin wanted to know if they also could be useful for promoting behavior change.

Chin said previous studies showed that existing algorithms did not accurately identify various stages of users’ motivation. She and Bak designed a study to test how well large language models, which are used to train chatbots, identify motivational states and provide appropriate information to support behavior change.

They evaluated large language models from ChatGPT, Google Bard and Llama 2 on a series of 25 scenarios they designed, each targeting a health need such as low physical activity, diet and nutrition concerns, mental health challenges, cancer screening and diagnosis, sexually transmitted disease and substance dependency.

In the scenarios, the researchers used each of the five motivational stages of behavior change: resistance to change and lacking awareness of problem behavior; increased awareness of problem behavior but ambivalent about making changes; intention to take action with small steps toward change; initiation of behavior change with a commitment to maintain it; and successfully sustaining the behavior change for six months with a commitment to maintain it.
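
As a rough illustration of the classification task the researchers probed, the sketch below shows how a chatbot pipeline might ask a language model to label a user's message with one of these five stages. It is a minimal sketch only, not the study's protocol: the standard transtheoretical-model stage names, the prompt wording and the call_llm helper are assumptions introduced here for illustration.

```python
# Minimal sketch (not the study's protocol): asking a chat model to label a
# user's message with one of the five motivational stages listed above. The
# stage names follow the standard transtheoretical model; the prompt wording
# and the call_llm helper are assumptions made for illustration only.

STAGES = [
    "precontemplation: resistant to change and unaware of the problem behavior",
    "contemplation: aware of the problem behavior but ambivalent about change",
    "preparation: intends to act and is taking small steps toward change",
    "action: has begun the behavior change and is committed to maintaining it",
    "maintenance: has sustained the change for six months and intends to continue",
]

def build_stage_prompt(user_message: str) -> str:
    """Build a classification prompt that asks the model to pick one stage."""
    stage_list = "\n".join(f"- {s}" for s in STAGES)
    return (
        "Classify the user's motivational stage of health behavior change.\n"
        f"Stages:\n{stage_list}\n\n"
        f'User message: "{user_message}"\n'
        "Answer with only the single stage name that fits best."
    )

def classify_stage(user_message: str, call_llm) -> str:
    """call_llm is any text-in, text-out chat function supplied by the caller."""
    return call_llm(build_stage_prompt(user_message)).strip()

if __name__ == "__main__":
    # Stub model for demonstration; replace with a real chat-model call.
    fake_llm = lambda prompt: "contemplation"
    print(classify_stage("I know I should exercise more, but I'm not ready yet.", fake_llm))
```

The study's findings suggest such a classifier would be least reliable exactly where it matters most: in the early, hesitant or ambivalent stages.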

The study found that large language models can identify motivational states and provide relevant information when a user has established goals and a commitment to take action. However, in the initial stages, when users are hesitant or ambivalent about behavior change, the chatbots could not recognize those motivational states or provide appropriate information to guide users to the next stage of change.

Chin said that language models don’t detect motivation well because they are trained to represent the relevance of a user’s language, but they don’t understand the difference between a user who is thinking about a change but is still hesitant and a user who has the intention to take action. Additionally, she said, the way users generate queries is not semantically different for the different stages of motivation, so it’s not obvious from the language what their motivational states are.

“Once a person knows they want to start changing their behavior, large language models can provide the right information. But if they say, ‘I’m thinking about a change. I have intentions but I’m not ready to start action,’ that is the state where large language models can’t understand the difference,” Chin said.

The study results found that when people were resistant to habit change, the large language models failed to provide information to help them evaluate their problem behavior and its causes and consequences and assess how their environment influenced the behavior. For example, if someone is resistant to increasing their level of physical activity, providing information to help them evaluate the negative consequences of sedentary lifestyles is more likely to be effective in motivating users through emotional engagement than information about joining a gym. Without information that engaged with the users’ motivations, the language models failed to generate a sense of readiness and the emotional impetus to progress with behavior change, Bak and Chin reported.

Once a user decided to take action, the large language models provided adequate information to help them move toward their goals. Those who had already taken steps to change their behaviors received information about replacing problem behaviors with desired health behaviors and seeking support from others, the study found.

However, the large language models didn’t provide information to those users who were already working to change their behaviors about using a reward system to maintain motivation or about reducing the stimuli in their environment that might increase the risk of a relapse of the problem behavior, the researchers found.

“The large language model-based chatbots provide resources on getting external help, such as social support. They’re lacking information on how to control the environment to eliminate a stimulus that reinforces problem behavior,” Bak said.

Large language models “are not ready to recognize the motivation states from natural language conversations, but have the potential to provide support on behavior change when people have strong motivations and readiness to take actions,” the researchers wrote.

Chin said future studies will consider how to fine-tune large language models to use linguistic cues, information search patterns and social determinants of health to better understand users’ motivational states, as well as how to provide the models with more specific knowledge for helping people change their behaviors.

 

 

Editor’s notes: To contact Michelle Bak, email chaewon7@illinois.edu. To contact Jessie Chin, email chin5@illinois.edu.

The paper “The potential and limitations of large language models in identification of the states of motivations for facilitating health behavior change” is available online. DOI: doi.org/10.1093

OpenAI team devoted to future risks left leaderless

SOUNDS LIKE ANARCHY TO ME


AFP
May 18, 2024

OpenAI's ChatGPT is coming under greater regulatory scrutiny - Copyright AFP Fabrice COFFRINI

An OpenAI team devoted to mitigating the long-term dangers of super-smart computers was leaderless on Friday after two high-profile figures left the company.

OpenAI co-founder Ilya Sutskever and “superalignment” team co-leader Jan Leike announced their departures from the ChatGPT-maker this week, and US media reported that remaining members of the group have either left or been reassigned to other parts of the San Francisco-based company.

The apparent dismantling of an OpenAI team focused on keeping sophisticated artificial intelligence under control comes as such technology faces increased scrutiny from regulators and as fears mount regarding its dangers.

“OpenAI must become a safety-first AGI (artificial general intelligence) company,” Leike wrote Friday in a post on X, formerly Twitter.

Leike called on all OpenAI employees to “act with the gravitas” warranted by what they are building.


OpenAI CEO Sam Altman promises to share more in the days ahead about what the ChatGPT maker is doing to keep its artificial intelligence technology safe – Copyright AFP/File Fabrice COFFRINI

OpenAI chief executive Sam Altman responded to Leike’s post with one of his own, thanking him for his work at the company and saying he was sad to see Leike leave.

“He’s right we have a lot more to do,” Altman said. “We are committed to doing it.”

Altman promised more on the topic in the coming days.

Sutskever said on X that he was leaving after almost a decade at OpenAI, whose “trajectory has been nothing short of miraculous.”

“I’m confident that OpenAI will build AGI that is both safe and beneficial,” he added, referring to computer technology that seeks to perform as well as — or better than — human cognition.

Sutskever, OpenAI’s chief scientist, sat on the board that voted to remove fellow co-founder Altman in November last year.

The ousting threw the San Francisco-based startup into a tumult, with the OpenAI board hiring Altman back a few days later after staff and investors rebelled.

OpenAI early this week released a higher-performing and even more human-like version of the artificial intelligence technology that underpins ChatGPT, making it free to all users.

“It feels like AI from the movies,” Altman said in a blog post.

Altman has previously pointed to the Scarlett Johansson character in the movie “Her,” where she voices an AI-based virtual assistant dating a man, as an inspiration for where he would like AI interactions to go.

The day will come when “digital brains will become as good and even better than our own,” Sutskever said during a talk at a TED AI summit in San Francisco late last year.

“AGI will have a dramatic impact on every area of life.”

Reddit gives OpenAI access to its wealth of posts


AFP
May 17, 2024


Along with accessing 'subreddit' posts in real time, OpenAI will provide artificial intelligence powered features at Reddit under the terms of a new partnership. — © AFP ROBERTO SCHMIDT

OpenAI will have access to Reddit data for training its artificial intelligence models and will put its technology to work at the popular discussion platform, the companies said Thursday.

Reddit, which debuted on the New York Stock Exchange earlier this year, has been seeking to capitalize on the value of exchanges in its varied discussion groups as it seeks to improve revenues.

Financial details of the partnership between the San Francisco-based tech firms were not disclosed. Reddit relies on advertising for revenue.

“Reddit has become one of the internet’s largest open archives of authentic, relevant, and always up to date human conversations about anything and everything,” Reddit chief executive Steve Huffman said in a joint release.

“Including it in ChatGPT upholds our belief in a connected internet, helps people find more of what they’re looking for, and helps new audiences find community on Reddit.”

OpenAI will access Reddit data in real-time, enhancing such content in ChatGPT and powering tools at the social media platform, the companies said.

Reddit suffered a major outage in December as the site’s communities protested against new fees being charged to provide access to developers.

The row was fallout from the recent artificial intelligence revolution, with Huffman unwilling to allow companies that build AI chatbots like ChatGPT to have free access to the site to perfect their large language models.

“Reddit needs to be a self-sustaining business, and to do that, we can no longer subsidize commercial entities that require large-scale data use,” Huffman wrote in a Reddit post at the time.

Reddit is essentially run through thousands of “subreddits” — forums on a dizzying array of topics moderated by their creators.

The biggest subreddits have tens of millions of subscribers, including r/funny, r/games and r/music.

Some habits on Reddit became social media standards, including AMAs, or Ask Me Anything sessions where users can ask an interviewee anything during a certain window of time.

OpenAI noted in the release that chief executive Sam Altman is a shareholder in Reddit.




Friday, May 17, 2024

‘I’m the new Oppenheimer!’: my soul-destroying day at Palantir’s first-ever AI warfare conference


Caroline Haskins
THE GUARDIAN
Fri, 17 May 2024

The co-founder and CEO of Palantir, Alex Karp, and Adm Tony Radakin at the event last week. Photograph: Tasos Katopodis/Getty Images for Palantir

On 7 and 8 May in Washington DC, the city’s biggest convention hall welcomed America’s military-industrial complex, its top technology companies and its most outspoken justifiers of war crimes. Of course, that’s not how they would describe it.

It was the inaugural “AI Expo for National Competitiveness”, hosted by the Special Competitive Studies Project – better known as the “techno-economic” thinktank created by the former Google CEO and current billionaire Eric Schmidt. The conference’s lead sponsor was Palantir, a software company co-founded by Peter Thiel that’s best known for inspiring 2019 protests against its work with Immigration and Customs Enforcement (Ice) at the height of Trump’s family separation policy. Currently, Palantir is supplying some of its AI products to the Israel Defense Forces.

The conference hall was also filled with booths representing the US military and dozens of its contractors, ranging from Booz Allen Hamilton to a random company that was described to me as Uber for airplane software.

At industry conferences like these, powerful people tend to be more unfiltered – they assume they’re in a safe space, among friends and peers. I was curious, what would they say about the AI-powered violence in Gaza, or what they think is the future of war?

Attendees were told the conference highlight would be a series of panels in a large room toward the back of the hall. In reality, that room hosted just one of note. Featuring Schmidt and the Palantir CEO, Alex Karp, the fire-breathing panel would set the tone for the rest of the conference. More specifically, it divided attendees into two groups: those who see war as a matter of money and strategy, and those who see it as a matter of death. The vast majority of people there fell into group one.

I’ve written about relationships between tech companies and the military before, so I shouldn’t have been surprised by anything I saw or heard at this conference. But when it ended, and I departed DC for home, it felt like my life force had been completely sucked out of my body.

‘The peace activists are war activists’


Swarms of people migrated across the hall to see the main panel, where Karp and Schmidt spoke alongside the CIA deputy director, David Cohen, and Mark Milley, who retired in September as chairman of the joint chiefs of staff, where he advised Joe Biden and other top officials on war matters. When Schmidt tried to introduce himself, his microphone didn’t work, so Cohen lent him his own. “It’s always great when the CIA helps you out,” Schmidt joked. This was about as light as things got for the next 90 minutes.

As the moderator asked general questions about the panelists’ views on the future of war, Schmidt and Cohen answered cautiously. But Karp, who’s known as a provocateur, aggressively condoned violence, often peering into the audience with hungry eyes, palpably desperate for claps, boos or shock.

He began by saying that the US has to “scare our adversaries to death” in war. Referring to Hamas’s 7 October attack on Israel, he said: “If what happened to them happened to us, there’d be a hole in the ground somewhere.” Members of the audience laughed when he mocked fresh graduates of Columbia University, which had some of the earliest encampment protests in the country. He said they’d have a hard time on the job market and described their views as a “pagan religion infecting our universities” and “an infection inside of our society”. (He’s made these comments before.)

“The peace activists are war activists,” Karp insisted. “We are the peace activists.”

A huge aspect of war in a democracy, Karp went on to argue, is leaders successfully selling that war domestically. “If we lose the intellectual debate, you will not be able to deploy any armies in the west ever,” Karp said.

Earlier in the panel, Milley had said that modern war involved conflict in “dense urban areas with high levels of collateral damage”, clearly alluding to the war in Gaza, but too afraid to say it. But every time Karp spoke, Milley became more bombastic. By the panel’s end, he was describing Americans who oppose the war in Gaza as “supporting a terrorist organization”.

“Before we get self-righteous,” Milley said, in the second world war, “we, the US, killed 12,000 innocent French civilians. We destroyed 69 Japanese cities. We slaughtered people in massive numbers – men, women and children.”

Meanwhile, Schmidt mainly talked about the importance of drones and automation in war. (He is quietly trying to start his own war drone company.) For his part, Cohen urged the room to see the 7 October attack as a “big warning” about tech in military settings. Although Israel had invested “very heavily” in defense and surveillance technology, it had failed to stop the attack, Cohen noted. “We do need to have a little bit of humility.”

This didn’t seem to be a common view. The prevailing attitude of the conference was that when systems fail, it just means you need newer technology, and more of it.

I walked out of the panel in a quiet daze. Milley’s comments about the second world war echoed in my head. It was, frankly, jarring to hear a recent top US official defend Israel’s mass killing of Gazan civilians by invoking wartime massacres that not only preceded the Geneva Conventions, but helped justify their creation.

All around me, I overheard upbeat conversation between hundreds of people who had just heard the same things I had – easygoing comments about lunch, travel or the next panel. I felt like we were living in totally different realities.
Shaky soldier vision

After pacing around for 10 minutes trying to enter a social headspace, I plugged my phone into an outlet and said hi to the person next to me, a man who appeared to be in his late 50s. I asked what he thought about the panel. Smiling meekly, he said it was “interesting” to hear Milley describe the second world war that way.

“Have you seen Oppenheimer?” he asked.

No, I said, but I’d read The Making of the Atomic Bomb.

I thought he was going to talk about the hubris of people who build weapons of war. Instead, he told me he works in nuclear weapons research at Los Alamos laboratory. Reaching into his backpack, he handed me a few Los Alamos pens and stickers.

After chatting for a few minutes – he wouldn’t get into much detail about his work, but did show off pictures of his expensive-looking rental car – he started packing up his things. “I just thought of something,” he said abruptly, laughing. “I am the new Oppenheimer!”

I managed to force a laugh as he started back to the Los Alamos booth.

Throughout the conference, I wandered to different booths. I ended up running into two people I knew from college. At the NSA booth, a young woman told me that the agency is great for “work-life balance”. I also stopped by Palantir’s career booth, where an employee, Elizabeth Watts, told me that the kind of person who works for Palantir is someone who wouldn’t be scared away by Karp’s panel. “People who are interested in national security, who understand there aren’t black and white solutions,” she said. “People who want to defend western democracies.”

In Palantir’s cavernous main booth, I tried on a VR headset to test Palantir’s new augmented reality tool for soldiers. I was told I’d be able to direct a truck or drone while continuing to see the world around me. But when I put on the headset, my field of vision became shaky and out of focus. It reminded me of goggles they made us wear during Dare anti-drug programs in middle school, meant to simulate being drunk.

Many people had been trying on the headset that day, a Palantir employee explained to me. In order for you to see things clearly, the headset has to fit your head and eyes perfectly. He didn’t offer to adjust the headset, so my hi-tech soldier vision remained out of focus.

On the evening of the first day, Palantir had a social event with free drinks. The only options were two IPAs, and I had one called “the Corruption”. It was, bar none, the worst beverage I’ve had in my entire life. I ended up talking to a Canadian man named Sata, who appeared to be in his mid-20s. He said he was an investor in Palantir, so I asked how he had gotten the money.

“I got in a car accident,” he said. After getting a small payout, he invested. So far, he’s only lost money.

No answers on ethics


To my knowledge, the only other journalist covering the conference was my friend Jack Poulson, who said I should join him at a panel discussion about ethics and human rights. It was being held as far away from the rest of the conference as it could get while remaining physically inside the building. You had to exit the main exhibit hall, walk down two extremely long hallways, and enter a door at the very end to find it.

By the time I arrived, they were ending the panel and starting the Q&A. Jack stood up at the first opportunity. He talked about the “provocative remarks” made throughout the conference about “exporting AI into places like Gaza”. Voice shaking, he mentioned Karp “unabashedly supporting” the ongoing killings in Gaza, and said Karp’s comments about “winning the debate” were clearly a euphemism for crushing dissent. A couple of audience members laughed quietly as Jack asked: could the panel respond to any of this?

The moderator decided to let everybody else ask their questions and let the panelists choose which to answer. Unsurprisingly, no one directly answered Jack’s question.

Later, as I entered the main conference hall, I found myself right behind a group of kids with tiny backpacks. They appeared to be in first or second grade. I asked a teacher, a blond woman with glasses, if there was an exhibit for kids. She said no, but one of them had a dad working at the event.

A slim man with dark hair approached the kids. He had a Special Competitive Studies Project pin on his suit. Beaming, he took a picture with them. About 30 minutes later, I found him taking the kids on a tour. He was squatting down to their height and pointing at something in a booth for a military vendor. I couldn’t hear what he was saying.
Helping choose what gets bombed

I also went to a panel in Palantir’s booth titled Civilian Harm Mitigation. It was led by two “privacy and civil liberties engineers” – a young man and woman who spoke exclusively in monotone. They also used countless euphemisms for bombing and death. The woman described how Palantir’s Gaia map tool lets users “nominate targets of interest” for “the target nomination process”. She meant it helps people choose which places get bombed.

After she clicked a few options on an interactive map, a targeted landmass lit up with bright blue blobs. These blobs, she said, were civilian areas like hospitals and schools. The civilian locations could also be described in text, she said, but it can take a long time to read. So, Gaia uses a large language model (something like ChatGPT) to sift through this information and simplify it. Essentially, people choosing bomb targets get a dumbed-down version of information about where children sleep and families get medical treatment.

“Let’s say you’re operating in a place with a lot of civilian areas, like Gaza,” I asked the engineers afterward. “Does Palantir prevent you from ‘nominating a target’ in a civilian location?”

Short answer, no. “The end user makes the decision,” the woman said.

Only one booth, a small, immersive exhibit with tall gray walls, seemed concerned about the ordinary people affected by war. It was run by the International Committee of the Red Cross (ICRC).

A door-like opening brought me into an emergency shelter for a young family caught in a conflict zone. There was a small couch with an open sleeping bag on top, and children’s toys in the corner. A yellow print-out warned the inhabitants to “STAY IN DESIGNATED SAFE ZONES”. A radio on a kitchen table seemed to be playing the news, but the connection was spotty.

The exhibit was small, but in a conference largely celebrating the military industrial complex, it stuck out. It felt like a plea for someone, anyone, to consider the victims of war.

Outside, I talked to an ICRC employee, Thomas Glass. He was attentive and engaged, but he seemed tired. He said that he had just spent several weeks in southern Gaza setting up a field hospital and supporting communal kitchens.

I asked how people at the conference had been responding to his exhibit. Glass said that most people he met had been open-minded, but some asked why the ICRC was at the conference at all. They weren’t aggressive about it, he said. They just genuinely did not understand.

Wednesday, May 15, 2024

Laughing, chatting, singing, GPT-4o is AI close to human, but watch out: it’s really not human


It is responsive enough to obscure the fact that it is not a sentient being. It comes with biases; it is a corporate product. Remember that

THE GUARDIAN
Tue 14 May 2024 

Artificial intelligence is changing things at dizzying speed. About 18 months ago, the tech company OpenAI unleashed its AI chatbot, ChatGPT. Within a couple of months, 100 million users were regularly using the tool, making it the fastest-growing consumer app in history. While tech bubbles are always easy to slip into, many people argue the world can be divided into a pre- and post-ChatGPT world.

That interest wasn’t a blip. This week, the web traffic analysts Similarweb announced ChatGPT’s website hit new record highs of interest, with 83.5m visits on a single day in May. The premise and title of my book released last week, How AI Ate the World, appear to be true. AI is now basically inescapable.

Yet touring the country to talk about it, I still meet holdouts; people who don’t want to be part of the AI revolution, or haven’t yet seen the need to interact with a text-based chatbot. An announcement on Monday by OpenAI of a new model, GPT-4o, may change that.

For the technically minded, GPT-4o is a significant change. But for the general public, the important difference is how easy it is to interact with. Prior to GPT-4o, the primary way of interacting with ChatGPT was to type text-based questions and wait for text-based responses. A voice interface was available, but was clunky and slow. I have tried, in recent months, to get ChatGPT to help teach me German – to better interact with my partner’s Austrian family – but the agonising delays between me asking questions, and ChatGPT formulating a response and then synthetically vocalising German words, often in incomprehensible and unaccented American English, made it next to useless.

The tech demos shown by OpenAI earlier this week change that. In one section of the launch event, ChatGPT acted as a real-time interpreter for a conversation between English and Italian. In another, it laughed in response to a “top-tier dad joke”. And in another, it switched from a rote reading of a bedtime story to a dramatic reading that even Brian Blessed would blanch at, before concluding with a song.

According to OpenAI, this is the new normal: an AI model that can “reason across audio, vision and text in real time”. It appears, at first glance, to be another significant step towards turning science fiction into science fact. The always-helpful, always-on, human-like robot butler that we’ve seen and read about for decades is getting closer, OpenAI would suggest. And the impressive smoothness of the interaction might shunt a few naysaying holdouts towards being AI adopters. Making it free, as OpenAI has done, will also help.

However, it’s worth remembering AI’s original sin, dating back to 1956: its naming. “Artificial intelligence” is certainly artificial, but it’s not yet intelligent – and arguably never will be. The more that ChatGPT and other tools like it mimic human interaction, learning to act as witty, wisecracking raconteurs that can croon and swoon, the more likely we are to forget the “artificial” bit of the term.

The smooth interactivity that OpenAI has laboured hard to enable does well to paper over the cracks of the underlying technology. When ChatGPT first elbowed its way noisily into our lives in November 2022, those who had been following the technology for decades pointed out that AI in its current form was little more than snazzy pattern-matching technology – but they were drowned out by the excited masses. The next step towards human-like interaction is only going to amplify the din.

That’s great news for OpenAI, a company already valued at more than $80bn, and with investment from the likes of Microsoft. Its CEO, Sam Altman, tweeted last week that GPT-4o “feels like magic to me”. It’s also good news for others in the AI space, who are capitalising on the ubiquity of the technology and layering it into every aspect of our lives. Microsoft Word and PowerPoint now come with generative AI tools folded into them. Meta, the parent company of Facebook and Instagram, is putting its AI chatbot assistant into its apps in many countries, much to some users’ chagrin.

But it’s less good for ordinary users. Less friction between asking an AI system to do something and it actually completing the task is good for ease of use, but it also helps us forget that we’re not interacting with sentient beings. We need to remember that, because AI is not infallible; it comes with biases and environmental issues, and reflects the interests of its makers. These pressing issues are explored in my book, and the experts I spoke to tell me they represent significant concerns for the future.

So try ChatGPT by all means, and play about with its voice and video interactions. But bear in mind its limitations, and that this thing isn’t intelligent, but it certainly is artificial, no matter how much it pretends not to be.

Chris Stokel-Walker is the author of How AI Ate the World, which was published earlier this month

Tuesday, May 14, 2024

ChatGPT-maker releases latest free model

AFP Published May 14, 2024 


SAN FRANCISCO: OpenAI on Monday released a higher-performing and more efficient version of the artificial intelligence technology that underpins its popular generative tool ChatGPT, making it free to all users.

The update to OpenAI’s flagship product landed a day before Google is expected to make its own announcements about Gemini, the search engine giant’s own AI tool competing with ChatGPT head on.

“We’re very, very excited to bring GPT-4o to all of our free users out there,” Chief Technology Officer Mira Murati said at the highly anticipated launch event in San Francisco.

The new model will be rolled out in OpenAI’s products over the next weeks, the company said.

Murati and engineers from OpenAI demonstrated the new powers of GPT-4o at the virtual event, asking questions and posing challenges to the beefed-up version of the ChatGPT chatbot.

“We know that these models get more and more complex, but we want the experience of interaction to actually become more natural, easy,” Murati said before the demo.

This included asking questions of a human-sounding ChatGPT in Italian and asking the bot to interpret facial expressions or solve complex math equations.

The event is just the latest episode in the AI arms race that has seen OpenAI-backer Microsoft propelled past Apple as the world’s biggest company by market capitalisation.

OpenAI and Microsoft are in a heated rivalry with Google to be generative AI’s major player, but Facebook-owner Meta and upstart Anthropic are also making big moves to compete.

All the companies are scrambling to come up with ways to cover generative AI’s exorbitant costs, much of which goes to chip giant Nvidia and its powerful GPU semiconductors.

For now, lower-performing versions of OpenAI’s and Google’s chatbots are available to customers for free, with questions still lingering over whether the public at large is ready to pay a subscription to maintain access to the technology.

Published in Dawn, May 14th, 2024


Sunday, May 12, 2024

OUTSOURCING OUTSOURCED
Tech Giants Start to Treat Southeast Asia Like Next Big Thing

Olivia Poh and Suvashree Ghosh
Fri, May 10, 2024 

(Bloomberg) -- Long considered a tech hinterland, Southeast Asia is fast emerging as a center of gravity for the industry.

The CEOs of Apple Inc., Microsoft Corp. and Nvidia Corp. are among the industry chieftains who’ve swung through the region in past months, committing billions of dollars in investment and holding forth with heads of state from Indonesia to Malaysia. Amazon.com Inc. just this week took over a giant conference hall in downtown Singapore to unfurl a $9 billion investment plan before a thousands-strong audience cheering and waving glow sticks.

After decades of playing second fiddle to China and Japan, the region of about 675 million people is drawing more tech investment than ever. For data centers alone, the world’s biggest companies are set to splurge up to $60 billion over the next few years as Southeast Asia’s young populations embrace video streaming, online shopping and generative AI.

Traditionally welcoming to Western investment, the region has seen its moment arrive as China turns more hostile to US firms and India remains tougher to navigate politically. Silicon Valley is setting its sights on its business-friendly regimes, fast-growing talent pool and rising incomes. The advent of AI is spurring tech leaders to pursue new sources of growth, laying the digital infrastructure of the region’s future.

“Countries like Singapore and Malaysia are largely neutral to the geopolitical tensions happening with China, US, Ukraine and Russia,” said Sean Lim, a managing partner at Singapore-based NWD Holdings, which invests in AI-based projects and other areas. “Especially with the ongoing wars, this region has become more attractive.”

Take Tim Cook and Satya Nadella, who last month embarked on their biggest tours across Southeast Asia in years. The investments they pledged are set to help turn the region into a major battleground between the likes of Amazon, Microsoft and Google in future frontiers such as artificial intelligence and the cloud.

The region’s growing workforce is making it a viable alternative to China as a center of talent to support companies’ global operations. As its governments pushed for improvements in education and infrastructure, it’s become an attractive base for everything from manufacturing and data centers to research and design.

“The governments are pro cross-border investments and there’s a deep talent pool,” said NWD’s Lim.

Southeast Asia has also become a sizeable market for gadgets and online services. About 65% of Southeast Asia will be middle class by 2030, with rising purchasing power, according to Singapore government estimates. That’ll help more than double the region’s market for internet-based services to $600 billion, according to estimates by Google, Temasek Holdings Pte and Bain & Co.

Apple, whose pricey gadgets for long remained out of reach for the vast majority in the region, is now adding stores. Chief Executive Officer Cook toured Vietnam, Indonesia and Singapore in late April, meeting prime ministers and announcing fresh investments as the company seeks new growth regions beyond China, where sales have sputtered.

In Jakarta, besides pow-wows with the country’s leadership, Cook met a local influencer with almost 800,000 Instagram followers over chicken satay, and learned enough of the local language to say “How are you” in a video circulated on social media. On his X account, local customers asked Cook for an Apple Store and better servicing of Apple products in the country. Following the trip, Apple reported its revenue in Indonesia had reached a record, even as total global sales declined.

“These are markets where our market share is low,” Cook said on a conference call last week. “The populations are large and growing. And our products are really making a lot of progress.”

Microsoft CEO Nadella also received an enthusiastic welcome after meeting with the leaders of Malaysia, Indonesia and Thailand last week. In Bangkok, under a ballroom’s shimmering chandeliers, he was seen shaking hands and conversing with high-ranking government officials and the country’s top business elites.

Southeast Asia’s draw becomes apparent once you consider slowing toplines in Silicon Valley, which is struggling now to lay the foundations of AI — anticipated to become an industry-defining technology. Within the next few weeks, two major AI-themed events in Singapore are set to feature top leaders from OpenAI, Anthropic, Microsoft and others to further tout the technology’s promise for Southeast Asia.

A specific catalyst for the tech companies is generative AI, with services like ChatGPT rapidly gaining users. Southeast Asia’s accelerating AI adoption has the potential to add about $1 trillion to the region’s economy by 2030, according to a report by consulting firm Kearney.

That means more data centers are needed to store and process the massive amounts of information traversing between content creators, companies and customers. Data center demand in Southeast Asia and North Asia is expected to expand about 25% a year through 2028, according to Cushman & Wakefield data. That compares with 14% a year in the US. By 2028, Southeast Asia will become the second largest non-US source of data center revenue in the world.

Hotspots include Malaysia’s southern Johor Bahru region, where Nvidia last year teamed up with a local utility for a plan to build a $4.3 billion AI data center park. Nvidia is also targeting Vietnam, which CEO Jensen Huang sees as a potential second home for the company, local media reported during his visit in December. Huang was spotted enjoying street food and egg coffee, a Vietnam specialty, as he hung out with local tech contacts in a black T-shirt and jeans.

The company has since reviewed Hanoi, Ho Chi Minh City and Da Nang as potential locations for investments, with Keith Strier, its vice president of worldwide AI initiatives, touring the cities last month.

A region consisting of about a dozen politically, culturally and geographically disparate countries, Southeast Asia isn’t the easiest market for global companies to operate in. Risks include difficulties navigating local working cultures, as well as the volatility of the various currencies, said NWD’s Lim.

But for now, the tech majors are embracing the region’s advantages such as its relatively low-cost yet highly skilled workforce — helpful for building expensive technologies such as large language models that require not just a lot of cash but also skilled engineers. Most of the US firms announced training programs with local governments, with Microsoft promising to train a total of 2.5 million people in AI skills in Southeast Asia by 2025.

“This shift is influenced by both external and internal drivers,” said Nicholas Lee, associate director in political consultancy firm Global Counsel’s Singapore office. “Besides the intensifying US-China rivalry and policy divergence across major jurisdictions, subdued revenue growth and rising costs also underline the need for companies to manage expenses prudently.”

--With assistance from Chandra Asmara, Norman Harsono, Nguyen Xuan Quynh and Patpicha Tanakasempipat.

 Bloomberg Businessweek

South Korea prepares support package worth over $7 billion for chip industry

Reuters
Sat, May 11, 2024 



SEOUL (Reuters) - South Korea is readying plans for a support package for chip investments and research worth more than 10 trillion won ($7.30 billion), the finance minister said on Sunday, after setting its sights on winning a "war" in the semiconductor industry.

Finance Minister Choi Sang-mok said the government would soon announce details of the package, which targets chip materials, equipment makers, and fabless companies throughout the semiconductor supply chain.

The program could include offers of policy loans and the setting-up of a new fund financed by state and private financial institutions, Choi told executives of domestic chip equipment makers at a meeting, the finance ministry said in a statement.

South Korea is also building a mega chip cluster in Yongin, south of its capital, Seoul, which it touts as the world's largest such high-tech complex.


President Yoon Suk Yeol has vowed to pour all possible resources into winning the "war" in chips, promising tax benefits for investments.

($1=1,369.6500 won)

(Reporting by Ju-min Park; Editing by Clarence Fernandez)

Thursday, May 09, 2024

TikTok Asks Court To Declare Ban Unconstitutional

Congress is "silencing the 170 million Americans who use the platform to communicate," the company argues.


ELIZABETH NOLAN BROWN
5.8.2024 
REASON

(Tom Williams/CQ Roll Call/Newscom)


A new law banning TikTok if it doesn't divorce its parent company is "obviously unconstitutional," TikTok Inc. and ByteDance argue in a new federal court filing.

The Protecting Americans From Foreign Adversary Controlled Applications Act, passed and signed into law late last month, singles out ByteDance and its subsidiary TikTok Inc., requiring the former to divest itself of the latter within 270 days. If ByteDance doesn't, the TikTok app will be banned in the U.S.

Congress is "silencing the 170 million Americans who use [TikTok] to communicate," and "crafted a two-tiered speech regime" that is unconstitutional, TikTok argues.

The new law allows a similar ultimatum to be applied to other social media platforms with ties to "foreign adversaries" if the president deems them a threat. But this process requires at least some nominal checks and balances that don't apply in TikTok's case. And no other app or company is explicitly named in the new legislation.

"For the first time in history, Congress has enacted a law that subjects a single, named speech platform to a permanent, nationwide ban, and bars every American from participating in a unique online community with more than 1 billion people worldwide," states TikTok's petition to the U.S. Court of Appeals for the District of Columbia.

The company is asking the court to review the constitutionality of the law, which it argues is both a violation of the First Amendment and an unconstitutional bill of attainder. Bills of attainder, which regulate or punish a particular entity (without the benefit of due process), are barred by the Constitution.

TikTok also argues that the law violates its "rights under the equal protection component of the Fifth Amendment's Due Process Clause because it singles Petitioners out for adverse treatment without any reason for doing so."
An American Company With American Rights

Opponents of TikTok often argue that as a Chinese company, TikTok is afforded no free speech protections and the First Amendment doesn't apply here.

This is wrong in two ways. First, because American TikTok users have First Amendment rights which are not in question here.

Second, because TikTok Inc. is a U.S. company. It's incorporated in California and has its main office there, with additional offices in New York, San Jose, Chicago, and Miami.

TikTok Inc. is a subsidiary of ByteDance, which is incorporated in the Cayman Islands (not China) and its leadership is based in Singapore and the U.S. (not China).

ByteDance was founded in China back in 2012. But today, ByteDance's founder—a Chinese national based in Singapore—only has a 21 percent ownership stake in the company. Another 21 percent is owned by employees of the company (including around 7,000 Americans, per the petition) and 58 percent is owned by institutional investors, including BlackRock (an American company), General Atlantic (an American company), and Susquehanna International Group (headquartered in Pennsylvania).

It's hard to pin down TikTok (the platform, not the American company) as belonging to any particular nation. But the idea that it's purely a "Chinese app" is demonstrably false.
A Ban By Any Other Name

TikTok rejects the idea—often cited by politicians in support of the law—that this isn't a ban and therefore isn't actually censorship.

"Banning TikTok is so obviously unconstitutional, in fact, that even the Act's sponsors recognized that reality, and therefore have tried mightily to depict the law not as a ban at all, but merely a regulation of TikTok's ownership," notes the petition. "They claim that the Act is not a ban because it offers ByteDance a choice: divest TikTok's U.S. business or be shut down."

"But in reality, there is no choice," the company argues. "The 'qualified divestiture' demanded by the Act to allow TikTok to continue operating in the United States is simply not possible: not commercially, not technologically, not legally. And certainly not on the 270-day timeline required by the Act."

The petition lays out multiple reasons why divestiture isn't feasible, including the fact that the source code is massive and complicated, making "moving all TikTok source code development from ByteDance to a new TikTok owner…impossible as a technological matter."

"It would take years for an entirely new set of engineers to gain sufficient familiarity with the source code to perform the ongoing, necessary maintenance and development activities for the platform," states TikTok's petition. "Moreover, to keep the platform functioning, these engineers would need access to ByteDance software tools, which the Act prohibits." The petition also notes that "the Chinese government has made clear that it would not permit a divestment of the recommendation engine that is a key to the success of TikTok in the United States."

"Like the United States, China regulates the export of certain technologies originating there," notes the petition. "China's official news agency has reported that under these rules, any sale of recommendation algorithms developed by engineers employed by ByteDance subsidiaries in China, including for TikTok, would require a government license." The petition notes that "China adopted these enhanced export control restrictions between August and October 2020, shortly after President [Donald] Trump's August 6, 2020 and August 14, 2020 executive orders targeting TikTok."

No Due Process

Even if divesture could happen, the act "would still be an extraordinary and unconstitutional assertion of power," TikTok argues. It opens the door to the government simply declaring that companies they don't like must divest of particular products—including platforms for speech—or else those products will be banned. "If Congress can do this, it can circumvent the First Amendment by invoking national security and ordering the publisher of any individual newspaper or website to sell to avoid being shut down."

"By banning all online platforms and software applications offered by 'TikTok' and all ByteDance subsidiaries, Congress has made a law curtailing massive amounts of protected speech," it concludes. But "the government cannot, consistent with the First Amendment, dictate the ownership of newspapers, websites, online platforms, and other privately created speech forums."

In this case, the lawmakers' ploy to ban TikTok has been undertaken without a single non-hypothetical finding of danger by Congress, nor any consideration of less restrictive means of allaying any concerns, the company argues.

TikTok Inc. "worked with the government for four years on a voluntary basis to develop a framework to address the government's concerns," it points out. As part of this engagement, the company "voluntarily invested more than $2 billion to build a system of technological and governance protections—sometimes referred to as 'Project Texas'—to help safeguard U.S. user data and the integrity of the U.S. TikTok platform against foreign government influence."

The company also committed to a draft National Security Agreement developed with the Committee on Foreign Investment in the United States. "Congress tossed this tailored agreement aside, in favor of the politically expedient and punitive approach of targeting for disfavor one publisher and speaker (TikTok Inc.), one speech forum (TikTok), and that forum's ultimate owner (ByteDance Ltd.)," the petition states.

TikTok Inc. and ByteDance are now asking the court to "issue a declaratory judgment that the Act violates the U.S. Constitution" and an order stopping the U.S. Attorney General from enforcing the act.



More Sex & Tech News

• Check out Reason's new Artificial Intelligence issue.

• The fight over an Idaho "abortion trafficking" law continues in a federal appeals court.

• Alabama's Attorney General "cannot constitutionally prosecute people for acts taken within the State meant to facilitate lawful out-of-state conduct, including obtaining an abortion," writes U.S. District Court Judge Myron Thompson, declining to dismiss a case against Attorney General Steve Marshall's pledge to prosecute people who help Alabama residents obtain out-of-state abortions. Reason's Emma Camp has more.

• Microsoft is building an AI tool to compete with OpenAI's ChatGPT and Google's Gemini.

• Minnesota "spends $100 million a year to detain about 750 individuals who are deemed 'likely' to commit sex offenses," notes Jacob Sullum.

Today's Image

Chinatown, NYC | 2013 (ENB/Reason)

Tuesday, May 07, 2024

 

Study: humans rate artificial intelligence as more ‘moral’ than other people



AI responses to questions of morality are getting better, and it raises some questions for the future


GEORGIA STATE UNIVERSITY





ATLANTA  —  A new study has found that when people are presented with two answers to an ethical question, most will think the answer from artificial intelligence (AI) is better than the response from another person.

“Attributions Toward Artificial Agents in a Modified Moral Turing Test,” a study conducted by Eyal Aharoni, an associate professor in Georgia State’s Psychology Department, was inspired by the explosion of ChatGPT and similar AI large language models (LLMs) which came onto the scene last March.

“I was already interested in moral decision-making in the legal system, but I wondered if ChatGPT and other LLMs could have something to say about that,” Aharoni said. “People will interact with these tools in ways that have moral implications, like the environmental implications of asking for a list of recommendations for a new car. Some lawyers have already begun consulting these technologies for their cases, for better or for worse. So, if we want to use these tools, we should understand how they operate, their limitations and that they’re not necessarily operating in the way we think when we’re interacting with them.”

To test how AI handles issues of morality, Aharoni designed a form of a Turing test.

“Alan Turing, one of the creators of the computer, predicted that by the year 2000 computers might pass a test where you present an ordinary human with two interactants, one human and the other a computer, but they’re both hidden and their only way of communicating is through text. Then the human is free to ask whatever questions they want to in order to try to get the information they need to decide which of the two interactants is human and which is the computer,” Aharoni said. “If the human can’t tell the difference, then, by all intents and purposes, the computer should be called intelligent, in Turing’s view.”

For his Turing test, Aharoni asked undergraduate students and AI the same ethical questions and then presented their written answers to participants in the study. They were then asked to rate the answers for various traits, including virtuousness, intelligence and trustworthiness.

“Instead of asking the participants to guess if the source was human or AI, we just presented the two sets of evaluations side by side, and we just let people assume that they were both from people,” Aharoni said. “Under that false assumption, they judged the answers’ attributes like ‘How much do you agree with this response, which response is more virtuous?’”
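
To make the blinded comparison concrete, here is a toy sketch, with invented numbers rather than the study's data, of how side-by-side trait ratings for two anonymized sources might be tallied before the reveal.

```python
# Toy sketch with invented numbers, not the study's data: tally blinded,
# side-by-side trait ratings for two anonymized answer sources ("A" and "B")
# the way participants rated them before learning which one was ChatGPT.

from statistics import mean

ratings = [
    # (source, trait, score on a hypothetical 1-7 agreement scale)
    ("A", "virtuousness", 6), ("B", "virtuousness", 4),
    ("A", "intelligence", 6), ("B", "intelligence", 5),
    ("A", "trustworthiness", 5), ("B", "trustworthiness", 4),
]

for trait in sorted({t for _, t, _ in ratings}):
    a = mean(s for src, t, s in ratings if src == "A" and t == trait)
    b = mean(s for src, t, s in ratings if src == "B" and t == trait)
    print(f"{trait:15s}  source A: {a:.1f}   source B: {b:.1f}")

# In the study, participants were told only afterwards that one source was a
# computer; consistently higher ratings for one source are what gave it away.
```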

Overwhelmingly, the ChatGPT-generated responses were rated more highly than the human-generated ones.

“After we got those results, we did the big reveal and told the participants that one of the answers was generated by a human and the other by a computer, and asked them to guess which was which,” Aharoni said.

For an AI to pass the Turing test, humans must not be able to tell the difference between AI responses and human ones. In this case, people could tell the difference, but not for an obvious reason.

“The twist is that the reason people could tell the difference appears to be because they rated ChatGPT’s responses as superior,” Aharoni said. “If we had done this study five to 10 years ago, then we might have predicted that people could identify the AI because of how inferior its responses were. But we found the opposite — that the AI, in a sense, performed too well.”

According to Aharoni, this finding has interesting implications for the future of humans and AI.

“Our findings lead us to believe that a computer could technically pass a moral Turing test — that it could fool us in its moral reasoning. Because of this, we need to try to understand its role in our society because there will be times when people don’t know that they’re interacting with a computer and there will be times when they do know and they will consult the computer for information because they trust it more than other people,” Aharoni said. “People are going to rely on this technology more and more, and the more we rely on it, the greater the risk becomes over time.”

—By Katherine Duplessis

Saturday, May 04, 2024

 

Random robots are more reliable


New AI algorithm for robots consistently outperforms state-of-the-art systems



NORTHWESTERN UNIVERSITY

Video: Researchers tested the new AI algorithm’s performance with simulated robots, such as NoodleBot. Credit: Northwestern University





Northwestern University engineers have developed a new artificial intelligence (AI) algorithm designed specifically for smart robotics. By helping robots rapidly and reliably learn complex skills, the new method could significantly improve the practicality — and safety — of robots for a range of applications, including self-driving cars, delivery drones, household assistants and automation.

Called Maximum Diffusion Reinforcement Learning (MaxDiff RL), the algorithm’s success lies in its ability to encourage robots to explore their environments as randomly as possible in order to gain a diverse set of experiences. This “designed randomness” improves the quality of data that robots collect regarding their own surroundings. And, by using higher-quality data, simulated robots demonstrated faster and more efficient learning, improving their overall reliability and performance.
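
The paper's method is not reproduced here, but the general idea, rewarding an agent for gathering diverse experience rather than only for task reward, can be illustrated with a toy count-based novelty bonus in a tiny corridor world. Everything below (the environment, the constants, the bonus formula) is an invented stand-in for illustration, not MaxDiff RL itself.

```python
# Conceptual toy only: this is NOT the MaxDiff RL algorithm. A count-based
# novelty bonus stands in for the paper's idea that rewarding diverse
# exploration yields richer data. The corridor environment, constants and
# bonus formula below are all invented for illustration.

import random
from collections import defaultdict

N_STATES = 10            # a 1-D corridor; the goal is the right-most state
GOAL = N_STATES - 1
ALPHA, GAMMA = 0.5, 0.95
BONUS_WEIGHT = 1.0       # strength of the novelty (diversity) bonus

def run_episode(q, visit_counts, use_bonus, max_steps=50):
    """One episode of tabular Q-learning with an optional novelty bonus."""
    state, env_return = 0, 0.0
    for _ in range(max_steps):
        # Mostly greedy action selection with a little random exploration.
        if random.random() < 0.1:
            action = random.choice((-1, 1))
        else:
            action = max((-1, 1), key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        visit_counts[nxt] += 1
        env_reward = 1.0 if nxt == GOAL else 0.0
        # Rarely visited states earn a larger bonus, nudging the agent to
        # spread its experience across the environment.
        bonus = BONUS_WEIGHT / visit_counts[nxt] if use_bonus else 0.0
        target = env_reward + bonus + GAMMA * max(q[(nxt, -1)], q[(nxt, 1)])
        q[(state, action)] += ALPHA * (target - q[(state, action)])
        env_return += env_reward
        state = nxt
        if nxt == GOAL:
            break
    return env_return

for use_bonus in (False, True):
    q, visits = defaultdict(float), defaultdict(int)
    returns = [run_episode(q, visits, use_bonus) for _ in range(200)]
    label = "with diversity bonus" if use_bonus else "plain epsilon-greedy"
    print(f"{label}: distinct states visited = {len(visits):2d}, "
          f"success rate over last 50 episodes = {sum(returns[-50:]) / 50:.2f}")
```

In this toy setting the bonus-driven learner typically covers far more of the corridor and reaches the goal more reliably, which loosely mirrors the article's point that broad, self-collected experience produces better data for learning.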

When tested against other AI platforms, simulated robots using Northwestern’s new algorithm consistently outperformed state-of-the-art models. The new algorithm works so well, in fact, that robots learned new tasks and then successfully performed them within a single attempt — getting it right the first time. This starkly contrasts with current AI models, which enable slower learning through trial and error.

The research will be published on Thursday (May 2) in the journal Nature Machine Intelligence.

“Other AI frameworks can be somewhat unreliable,” said Northwestern’s Thomas Berrueta, who led the study. “Sometimes they will totally nail a task, but, other times, they will fail completely. With our framework, as long as the robot is capable of solving the task at all, every time you turn on your robot you can expect it to do exactly what it’s been asked to do. This makes it easier to interpret robot successes and failures, which is crucial in a world increasingly dependent on AI.”

Berrueta is a Presidential Fellow at Northwestern and a Ph.D. candidate in mechanical engineering at the McCormick School of Engineering. Robotics expert Todd Murphey, a professor of mechanical engineering at McCormick and Berrueta’s adviser, is the paper’s senior author. Berrueta and Murphey co-authored the paper with Allison Pinosky, also a Ph.D. candidate in Murphey’s lab.

The disembodied disconnect

To train machine-learning algorithms, researchers and developers use large quantities of big data, which humans carefully filter and curate. AI learns from this training data, using trial and error until it reaches optimal results. While this process works well for disembodied systems, like ChatGPT and Google Gemini (formerly Bard), it does not work for embodied AI systems like robots. Robots, instead, collect data by themselves — without the luxury of human curators.

“Traditional algorithms are not compatible with robotics in two distinct ways,” Murphey said. “First, disembodied systems can take advantage of a world where physical laws do not apply. Second, individual failures have no consequences. For computer science applications, the only thing that matters is that it succeeds most of the time. In robotics, one failure could be catastrophic.”

To solve this disconnect, Berrueta, Murphey and Pinosky aimed to develop a novel algorithm that ensures robots will collect high-quality data on-the-go. At its core, MaxDiff RL commands robots to move more randomly in order to collect thorough, diverse data about their environments. By learning through self-curated random experiences, robots acquire necessary skills to accomplish useful tasks.

Getting it right the first time

To test the new algorithm, the researchers compared it against current, state-of-the-art models. Using computer simulations, the researchers asked simulated robots to perform a series of standard tasks. Across the board, robots using MaxDiff RL learned faster than the other models. They also correctly performed tasks much more consistently and reliably than others. 

Perhaps even more impressive: Robots using the MaxDiff RL method often succeeded at correctly performing a task in a single attempt. And that’s even when they started with no knowledge.

“Our robots were faster and more agile — capable of effectively generalizing what they learned and applying it to new situations,” Berrueta said. “For real-world applications where robots can’t afford endless time for trial and error, this is a huge benefit.”

Because MaxDiff RL is a general algorithm, it can be used for a variety of applications. The researchers hope it addresses foundational issues holding back the field, ultimately paving the way for reliable decision-making in smart robotics.

“This doesn’t have to be used only for robotic vehicles that move around,” Pinosky said. “It also could be used for stationary robots — such as a robotic arm in a kitchen that learns how to load the dishwasher. As tasks and physical environments become more complicated, the role of embodiment becomes even more crucial to consider during the learning process. This is an important step toward real systems that do more complicated, more interesting tasks.”

The study, “Maximum diffusion reinforcement learning,” was supported by the U.S. Army Research Office (grant number W911NF-19-1-0233) and the U.S. Office of Naval Research (grant number N00014-21-1-2706).


Video: NoodleBot, a snake-like robot developed for future real-world testing of the algorithm.

Video: Simulated robots learn in a single attempt, illustrating the single-shot learning capabilities of MaxDiff RL.


Although the current study tested the AI algorithm only on simulated robots, the researchers have developed NoodleBot for future testing of the algorithm in the real world.

Credit: Northwestern University