National Science Foundation supports Hoda Eldardiry's research to enhance AI ethics education
Eldardiry and her team will develop practical competencies that enable students to translate ethical principles into concrete decision-making in artificial intelligence system design.
Virginia Tech
As artificial intelligence (AI) increasingly affects people's everyday lives, Hoda Eldardiry, associate professor in the Department of Computer Science and core faculty at the Sanghani Center for Artificial Intelligence and Data Analytics, is conducting research in engineering and computing education that will help students in majors such as computer science, computer engineering, and data science bridge the gap between the classroom and the job site.
Recently, she received a $349,360 grant from the National Science Foundation’s (NSF) Engineering Education program to support her work.
“We want to ensure that every student is adequately prepared to not only confront but act on the challenges that new AI technologies pose to humans and society,” said Eldardiry.
Her team for the estimated three-year project includes co-principal investigators Qin Zhu, associate professor, and Dayoung Kim, assistant professor, both in the Department of Engineering Education; James Weichert, a master’s degree student in computer science advised by Eldardiry; and two Ph.D. students in engineering education, Yixiang Sun advised by Zhu and Emad Ali advised by Kim.
Eldardiry said their research — which includes AI ethics issues related to autonomous vehicles, privacy, and bias — differs from theoretical AI ethics research because their approach is to improve AI ethics education from the perspective of industry professionals currently working in AI and AI policy.
They have already interviewed a group of these professionals to get a better sense of how they view the AI policy landscape and, more crucially, what skills they need to apply their technical backgrounds to real-world problems involving the ethical use of AI. With this project, they aim to engage practicing AI engineers to better understand how they translate AI ethics principles into practical applications when designing AI systems.
“We call these skills ‘translational competencies,’ and this is really the heart of our research,” Eldardiry said. “A curriculum shaped by this research can help cultivate the competencies needed for students to apply often vague ethical principles to concrete decision-making in the development and use of AI systems.”
In reviewing current curricula, Eldardiry said, one ethical concern that arises with increasingly powerful AI tools and vast amounts of data is the privacy of user data. This is especially important when AI technologies can leverage that user data to find connections or identify users in ways that humans cannot. The social media platform TikTok is a good example: because it collects so much data about which videos a user watches, it is very good at triangulating that user's interests and perhaps more personal attributes such as political ideology or sexual orientation.
“When we talk about privacy in a computer science ethics class, it is brought up as a fundamental ethical principle, but then the conversation normally stops there. Our current curriculum does not go further into depth about what specific kind of privacy we want to guarantee or the technical details required to build a system that does actually preserve user privacy,” she said. “This is seen as an ‘advanced topic’ that is outside the scope of an undergraduate or even graduate ethics course, but the reality is that these details might be key when a student graduates and is in charge of using or developing an AI system.”
Another example is self-driving cars and how they should be programmed to prioritize human life. While it is easy to say that the car should avoid any harm to humans all the time, there are inevitably situations where that is not possible and the car must make a split-second decision. So what should the car be programmed to do in that case? Perhaps there is no single “correct” answer, but this is a realistic scenario worth discussing in an ethics class, Eldardiry said.
“Ultimately, we would like to see a paradigm shift in AI ethics education away from a hands-off approach where students are not engaging with the course material to a very hands-on approach where students are taught and expected to apply the ethical principles they learn or develop to their engineering work,” said Eldardiry. “These translational skills are something that future AI engineers will undoubtedly need in their toolkit and will form a growing part of their job expectations as even the development of AI programs becomes more automated.”
Despite its impressive output, generative AI doesn’t have a coherent understanding of the world
Researchers show that even the best-performing large language models don’t form a true model of the world and its rules, and can thus fail unexpectedly on similar tasks.
Massachusetts Institute of Technology
CAMBRIDGE, MA – Large language models can do impressive things, like write poetry or generate viable computer programs, even though these models are trained to predict words that come next in a piece of text.
Such surprising capabilities can make it seem like the models are implicitly learning some general truths about the world.
But that isn’t necessarily the case, according to a new study. The researchers found that a popular type of generative AI model can provide turn-by-turn driving directions in New York City with near-perfect accuracy — without having formed an accurate internal map of the city.
Despite the model’s uncanny ability to navigate effectively, when the researchers closed some streets and added detours, its performance plummeted.
When they dug deeper, the researchers found that the New York maps the model implicitly generated had many nonexistent streets curving between the grid and connecting faraway intersections.
This could have serious implications for generative AI models deployed in the real world, since a model that seems to be performing well in one context might break down if the task or environment slightly changes.
“One hope is that, because LLMs can accomplish all these amazing things in language, maybe we could use these same tools in other parts of science, as well. But the question of whether LLMs are learning coherent world models is very important if we want to use these techniques to make new discoveries,” says senior author Ashesh Rambachan, assistant professor of economics and a principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).
Rambachan is joined on a paper about the work by lead author Keyon Vafa, a postdoc at Harvard University; Justin Y. Chen, an electrical engineering and computer science (EECS) graduate student at MIT; Jon Kleinberg, Tisch University Professor of Computer Science and Information Science at Cornell University; and Sendhil Mullainathan, an MIT professor in the departments of EECS and of Economics, and a member of LIDS. The research will be presented at the Conference on Neural Information Processing Systems.
New metrics
The researchers focused on a type of generative AI model known as a transformer, which forms the backbone of LLMs like GPT-4. Transformers are trained on a massive amount of language-based data to predict the next token in a sequence, such as the next word in a sentence.
But if scientists want to determine whether an LLM has formed an accurate model of the world, measuring the accuracy of its predictions doesn’t go far enough, the researchers say.
For example, they found that a transformer can predict valid moves in a game of Connect 4 nearly every time without understanding any of the rules.
So, the team developed two new metrics that can test a transformer’s world model. The researchers focused their evaluations on a class of problems called deterministic finite automata, or DFAs.
A DFA is a problem with a sequence of states, like intersections one must traverse to reach a destination, and a concrete way of describing the rules one must follow along the way.
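To make the idea concrete, here is a minimal sketch of a DFA in Python: a set of states, a transition table mapping (state, action) pairs to next states, and a way to check whether a sequence of actions is valid. The toy "street grid" below is illustrative only; it is not the navigation or Othello DFA the researchers actually used.

```python
class DFA:
    """A deterministic finite automaton: states plus a transition rule."""

    def __init__(self, transitions, start, accepting):
        self.transitions = transitions  # dict: (state, action) -> next state
        self.start = start
        self.accepting = accepting      # set of goal states

    def run(self, actions):
        """Follow the actions from the start state; None if a move is illegal."""
        state = self.start
        for action in actions:
            key = (state, action)
            if key not in self.transitions:
                return None  # no such transition: the rules forbid this move
            state = self.transitions[key]
        return state

    def accepts(self, actions):
        """True if the action sequence ends in an accepting (goal) state."""
        return self.run(actions) in self.accepting


# Toy grid: intersections A and B, where only certain streets exist.
grid = DFA(
    transitions={("A", "east"): "B", ("B", "west"): "A", ("B", "east"): "B"},
    start="A",
    accepting={"B"},
)
print(grid.accepts(["east", "east"]))  # True: both moves are legal, ends at B
print(grid.accepts(["west"]))          # False: no street runs west from A
```

In this framing, intersections are the states, turns are the actions, and the transition table encodes exactly which streets exist, which is what lets the researchers compare a model's behavior against known ground-truth rules.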
They chose two problems to formulate as DFAs: navigating on streets in New York City and playing the board game Othello.
“We needed test beds where we know what the world model is. Now, we can rigorously think about what it means to recover that world model,” Vafa explains.
The first metric they developed, called sequence distinction, says a model has formed a coherent world model if it sees two different states, like two different Othello boards, and recognizes how they are different. Sequences, that is, ordered lists of data points, are what transformers use to generate outputs.
The second metric, called sequence compression, says a transformer with a coherent world model should know that two identical states, like two identical Othello boards, have the same sequence of possible next steps.
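The two metrics can be sketched as paired consistency checks. In the toy version below, `valid_next(seq)` stands in for whatever set of next moves a model deems legal after a sequence, and `true_state(seq)` is the ground-truth DFA state; these names and the tiny two-symbol DFA are assumptions for illustration, not the authors' implementation.

```python
# Toy ground-truth DFA over symbols 'a' and 'b'.
TRANSITIONS = {
    ("s0", "a"): "s1", ("s0", "b"): "s2",
    ("s1", "a"): "s1", ("s2", "b"): "s2",
}

def true_state(seq, start="s0"):
    """Ground-truth state reached by a sequence, or None if illegal."""
    state = start
    for sym in seq:
        state = TRANSITIONS.get((state, sym))
        if state is None:
            return None
    return state

def true_valid_next(seq):
    """Ground-truth set of symbols that are legal after this sequence."""
    state = true_state(seq)
    return {sym for (s, sym) in TRANSITIONS if s == state}

def distinction(valid_next, seq_a, seq_b):
    """Sequence distinction: sequences reaching DIFFERENT true states
    should get different valid-next sets from the model."""
    if true_state(seq_a) == true_state(seq_b):
        return True  # the check only applies to distinct states
    return valid_next(seq_a) != valid_next(seq_b)

def compression(valid_next, seq_a, seq_b):
    """Sequence compression: sequences reaching the SAME true state
    should get identical valid-next sets from the model."""
    if true_state(seq_a) != true_state(seq_b):
        return True  # the check only applies to identical states
    return valid_next(seq_a) == valid_next(seq_b)


# A model that has recovered the true rules passes both checks.
print(distinction(true_valid_next, ["a"], ["b"]))       # True
print(compression(true_valid_next, ["a"], ["a", "a"]))  # True
```

A model with an incoherent world model would fail one of these checks somewhere: either treating two genuinely different states as interchangeable, or treating two copies of the same state as if they allowed different moves.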
They used these metrics to test two common classes of transformers: one trained on data generated from randomly produced sequences, the other on data generated by following strategies.
Incoherent world models
Surprisingly, the researchers found that transformers which made choices randomly formed more accurate world models, perhaps because they saw a wider variety of potential next steps during training.
“In Othello, if you see two random computers playing rather than championship players, in theory you’d see the full set of possible moves, even the bad moves championship players wouldn’t make,” Vafa explains.
Even though the transformers generated accurate directions and valid Othello moves in nearly every instance, the two metrics revealed that only one generated a coherent world model for Othello moves, and none performed well at forming coherent world models in the wayfinding example.
The researchers demonstrated the implications of this by adding detours to the map of New York City, which caused all the navigation models to fail.
“I was surprised by how quickly the performance deteriorated as soon as we added a detour. If we close just 1 percent of the possible streets, accuracy immediately plummets from nearly 100 percent to just 67 percent,” Vafa says.
When they recovered the city maps the models generated, they looked like an imagined New York City with hundreds of streets crisscrossing overlaid on top of the grid. The maps often contained random flyovers above other streets or multiple streets with impossible orientations.
These results show that transformers can perform surprisingly well at certain tasks without understanding the rules. If scientists want to build LLMs that can capture accurate world models, they need to take a different approach, the researchers say.
“Often, we see these models do impressive things and think they must have understood something about the world. I hope we can convince people that this is a question to think very carefully about, and we don’t have to rely on our own intuitions to answer it,” says Rambachan.
In the future, the researchers want to tackle a more diverse set of problems, such as those where some rules are only partially known. They also want to apply their evaluation metrics to real-world, scientific problems.
###
This work is funded, in part, by the Harvard Data Science Initiative, a National Science Foundation Graduate Research Fellowship, a Vannevar Bush Faculty Fellowship, a Simons Collaboration grant, and a grant from the MacArthur Foundation.