ChatGPT has read almost the whole internet. That hasn't solved its diversity issues
AI language models are booming. The current frontrunner is ChatGPT, which can do everything from taking a bar exam to creating an HR policy to writing a movie script.
But it and other models still can't reason like a human. In this Q&A, Dr. Vered Shwartz (she/her), assistant professor in the UBC department of computer science, and master's student Mehar Bhatia (she/her) explain why reasoning could be the next step for AI, and why it's important to train these models on diverse datasets from different cultures.
What is ‘reasoning’ for AI?
Shwartz: Large language models like ChatGPT learn by reading millions of documents, essentially the entire internet, and recognizing patterns to produce information. This means they can only provide information about things that are documented on the internet. Humans, on the other hand, are able to use reasoning. We use logic and common sense to work out meaning beyond what is explicitly said.
Bhatia: We learn reasoning abilities from birth. For instance, we know not to switch on the blender at 2 a.m. because it will wake everyone up. We're not taught this; it's something we understand based on the situation and our surroundings. In the near future, AI models will handle many of our tasks. We can't hard-code every single common-sense rule into these robots, so we want them to understand the right thing to do in a specific context.
Shwartz: Adding common-sense reasoning to current models like ChatGPT would help them provide more accurate answers and so create more powerful tools for people to use. Current AI models have already displayed some form of common-sense reasoning. For example, if you ask the latest version of ChatGPT about a child's mud pie versus an adult's, it can correctly differentiate between a face full of dirt and a dessert based on context.
Where do AI language models fail?
Shwartz: Common-sense reasoning in AI models is far from perfect. We’ll only get so far by training on massive amounts of data. Humans will still need to intervene and train the models, including by providing the right data.
For instance, we know that English text on the web comes largely from North America, so English-language models, which are the most commonly used, tend to have a North American bias and risk either not knowing about concepts from other cultures or perpetuating stereotypes. In a recent paper, we found that training a common-sense reasoning model on data from different cultures, including India, Nigeria and South Korea, produced more accurate, culturally informed responses.
Bhatia: One example included showing the model an image of a woman in Somalia receiving a henna tattoo and asking why she might want this. When trained with culturally diverse data, the model correctly suggested she was about to get married, whereas previously it had said she wanted to buy henna.
Shwartz: We also found examples of ChatGPT lacking cultural awareness. Given a hypothetical situation in which a couple tipped four per cent at a restaurant in Spain, the model suggested they may have been unhappy with the service. That assumes North American tipping culture applies in Spain when, in fact, tipping is not common there and a four per cent tip likely signals exceptional service.
Why do we need to ensure that AI is more inclusive?
Shwartz: Language models are ubiquitous. If these models assume the values and norms of western or North American culture, the information they produce for and about people from other cultures may be inaccurate and discriminatory. Another concern is that people from diverse backgrounds using products powered by English models would have to adapt their inputs to North American norms or risk suboptimal performance.
Bhatia: We want these tools to be usable by everyone, not just one group of people. Canada is a culturally diverse country, and we need to ensure the AI tools that power our lives don't reflect just one culture and its norms. Our ongoing research aims to foster inclusivity, diversity and cultural sensitivity in the development and deployment of AI technologies.
DOI: 10.18653/v1/2023.emnlp-main.496
Researchers developing AI to make the internet more accessible
‘Web agent’ navigates complex websites using language commands
COLUMBUS, Ohio – In an effort to make the internet more accessible for people with disabilities, researchers at The Ohio State University have begun developing an artificial intelligence agent that could complete complex tasks on any website using simple language commands.
In the three decades since it was first released into the public domain, the world wide web has become an incredibly intricate, dynamic system. Yet as the internet becomes ever more integral to society's well-being, that complexity also makes it considerably harder for many people to navigate.
Today there are billions of websites available to help access information or communicate with others, and many tasks on the internet can take more than a dozen steps to complete. That’s why Yu Su, co-author of the study and an assistant professor of computer science and engineering at Ohio State, said their work, which uses information taken from live sites to create web agents — online AI helpers — is a step toward making the digital world a less confusing place.
“For some people, especially those with disabilities, it’s not easy for them to browse the internet,” said Su. “We rely more and more on the computing world in our daily life and work, but there are increasingly a lot of barriers to that access, which, to some degree, widens the disparity.”
The study was presented in December at the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS), a flagship conference for AI and machine learning research.
By taking advantage of the power of large language models, the agent works similarly to how humans behave when browsing the web, said Su. The Ohio State team showed that their model was able to understand the layout and functionality of different websites using only its ability to process and predict language.
Researchers started by creating Mind2Web, the first dataset for generalist web agents. While previous efforts to build web agents focused on simplified, simulated websites, Mind2Web embraces the complex and dynamic nature of real-world websites and emphasizes an agent's ability to generalize to entirely new websites it has never seen before. Su said much of their success is due to the agent's ability to keep pace with the web's constant evolution. The team collected over 2,000 open-ended tasks from 137 different real-world websites, which they then used to train the agent.
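To make that concrete, here is a simplified, hypothetical sketch of what one such task record might look like: a natural-language goal paired with the sequence of grounded actions needed to complete it. The website, field names and values below are illustrative assumptions, not the dataset's exact schema.

# Illustrative only: a hypothetical, simplified Mind2Web-style task record.
example_task = {
    "website": "example-airline.com",  # assumed site, not taken from the dataset
    "task": "Book a one-way flight from Columbus to Vancouver on May 3",
    "actions": [
        {"operation": "CLICK",  "target": "<button id='trip-oneway'>One way</button>"},
        {"operation": "TYPE",   "target": "<input id='origin'>",      "value": "Columbus"},
        {"operation": "TYPE",   "target": "<input id='destination'>", "value": "Vancouver"},
        {"operation": "SELECT", "target": "<select id='depart-date'>", "value": "May 3"},
        {"operation": "CLICK",  "target": "<button id='search'>Find flights</button>"},
    ],
}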
Some of the tasks included booking one-way and round-trip international flights, following celebrity accounts on Twitter, browsing comedy films from 1992 to 2017 streaming on Netflix, and even scheduling car knowledge tests at the DMV. Many were quite complex: booking one of the international flights in the dataset, for example, took 14 actions. This versatility gives the dataset diverse coverage across many websites and opens up a new landscape for future models to explore and learn from autonomously, said Su.
“It’s only become possible to do something like this because of the recent development of large language models like ChatGPT,” said Su. Since the chatbot became public in November 2022, millions of users have used it to automatically generate content, from poetry and jokes to cooking advice and medical diagnoses.
Still, because a single website can contain thousands of raw HTML elements, it would be too costly to feed all of that information to one large language model. To address this challenge, the study also introduces a framework called MindAct, a two-stage agent that pairs a small language model with a large one: the small model first filters a page down to a handful of promising candidate elements, and the large model then chooses the next action among them. The team found that this strategy significantly outperforms other common modeling approaches while handling a wide range of websites and tasks reasonably well.
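As a rough illustration of that two-stage idea (a minimal sketch under assumptions, not the authors' implementation), the logic might look like this in Python, where score_fn stands in for the small ranking model and generate_fn for the large language model; both callables and the prompt format are hypothetical:

from typing import Callable

def choose_next_action(
    task: str,
    elements: list[str],                    # raw HTML snippets from the page
    score_fn: Callable[[str, str], float],  # small model: (task, element) -> relevance
    generate_fn: Callable[[str], str],      # large model: prompt -> predicted action
    k: int = 5,
) -> str:
    # Stage 1: the small model cheaply scores every element, so the large
    # model never has to read thousands of raw HTML nodes.
    shortlist = sorted(elements, key=lambda el: score_fn(task, el), reverse=True)[:k]

    # Stage 2: the large model picks the next action from a short
    # multiple-choice prompt built from only the top-k candidates.
    options = "\n".join(f"{i}. {el}" for i, el in enumerate(shortlist))
    prompt = (
        f"Task: {task}\n"
        f"Candidate elements:\n{options}\n"
        "Choose one element and an operation (CLICK / TYPE / SELECT):"
    )
    return generate_fn(prompt)

Capping the expensive model's input at a handful of candidates is what keeps this kind of approach affordable on pages with thousands of elements.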
With more fine-tuning, the study points out, the model could likely be used in tandem with both open- and closed-source large language models such as Flan-T5 or GPT-4. However, the work also highlights an increasingly relevant ethical problem in creating flexible artificial intelligence, said Su. While the model could certainly serve as a helpful agent for humans surfing the web, it could also be used to enhance systems like ChatGPT and turn the entire internet into an unprecedentedly powerful tool.
“On the one hand, we have great potential to improve our efficiency and to allow us to focus on the most creative part of our work,” he said. “But on the other hand, there’s tremendous potential for harm.” For instance, autonomous agents able to translate online steps into the real world could influence society by taking potentially dangerous actions, such as misusing financial information or spreading misinformation.
“We should be extremely cautious about these factors and make a concerted effort to try to mitigate them,” said Su. But as AI research continues to evolve, he notes that it’s likely society will experience major growth in the commercial use and performance of generalist web agents in the years to come, especially as the technology has already gained so much popularity in the public eye.
“Throughout my career, my goal has always been trying to bridge the gap between human users and the computing world,” said Su. “That said, the real value of this tool is that it will really save people time and make the impossible possible.”
The research was supported by the National Science Foundation, the U.S. Army Research Lab and the Ohio Supercomputer Center. Other co-authors were Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang and Huan Sun, all of Ohio State.
Contact: Yu Su, Su.809@osu.edu
Written by: Tatyana Woodall, Woodall.52@osu.edu