Friday, October 21, 2022

Interview With a Genius: Macarthur Fellow Yejin Choi Talks Teaching Common Sense to Artificial Intelligence

Yejin Choi declined the call that would transform her life—several times.


Photo: Yejin Choi © MacArthur Foundation


A lot of AI challenges are human challenges, just to summarize.


Blake Montgomery - Gizmodo

The MacArthur Foundation announced last week that University of Washington computer science professor Yejin Choi, 45, was one of 25 recipients of its eponymous fellowship, commonly known as the “Genius Grant.” Choi thought the foundation’s attempts to contact her were spam calls, and then, when the organization was finally in touch, that the calls concerned consulting work. She’s not alone. Multiple fellows told the Washington Post that they ignored the foundation’s attempts to reach them. One blocked its calls entirely.

The Genius Grant comes with a no-strings-attached prize of $800,000, given over five years. In its citation, the foundation’s board wrote, “Choi’s research brings us closer to computers and artificial intelligence systems that can grasp more fully the complexities of language and communicate accurately with humans.” Her work concerns teaching AI to understand the concepts that ripple beneath the surface of language, or, in her words, “very trivial knowledge that you and I share about the world that machines don’t.”

Gizmodo spoke to Choi shortly after the foundation’s announcement of the 2022 fellows. She said the award will give her both the resources and the encouragement to keep pursuing a line of research that many consider too risky. She also accidentally activated her Amazon Alexa.

This interview has been edited for length and clarity.

Gizmodo: Where were you when you got the call from the MacArthur Foundation?

Yejin Choi: I was just at home working and doing Zoom meetings, and I ignored all the calls, thinking that they must be spam.

How many calls did you end up getting?

I didn’t even realize until the announcement that they had actually called me on the phone, and I ignored those calls. I remember—I just didn’t think that it was them. When they did get in touch, I thought it was about consulting work they wanted me to do.

How do you describe your work?

I build AI systems primarily for natural language understanding. It’s a subfield of AI; building AI systems for human language understanding is, broadly, the field that I belong to. More concretely, what I do has to do with reading between the lines of text so that we can infer the implied messages, the intent of people. Recently, I started focusing more on common sense knowledge and reasoning, because that’s important background knowledge that people rely on when they interpret language.

I’ve heard what you do called natural language processing. Do you intentionally use the word ‘understanding’ instead of ‘processing’?

NLP, or natural language processing, is the name of the field. It’s about either understanding natural language or generating it. Right now, neural networks are almost like a parrot, or a mouth without a brain, in that they are able to speak, but they may or may not actually make sense, and they generally don’t really understand the underlying concepts of the text. It’s very easy for them to make mistakes, and if you try to do question-and-answer with neural language models, sometimes they say very silly things.

So you’re trying to help them say fewer silly things?

Yeah. But it’s actually very difficult to fix the fundamental challenge, which is that AI systems lack knowledge about how the world works, whereas humans have a more conceptual understanding of how the world works: how the physical world works, and how the social world works. It’s very trivial knowledge that you and I share about the world that machines don’t.

Why do you think that’s an important field of study? What will it do for humanity when machines understand that?


Because that’s how humans communicate with each other. It’s all about the subtext and the messages and understanding each other’s intent behind what they say. You know, when you ask someone, “Can you pass the salt?”, you’re never really asking literally whether they’re capable of passing the salt or not. You’re just asking them to give it to you, right? Human language is like that: There’s always this figurative or implied meaning, and that is what matters, and that is what’s hard for machines to correctly understand. Part of that requires common sense reasoning.

What do you think machines and AI will be able to do if they achieve that understanding?

AI systems today are more capable of language processing than before, so now we can ask questions in natural language. We all know that there’s some limitation to it, though, so right now it’s not very reliable. You can’t speak to it in complex language yet. But it’s going to really improve how we interact with AI systems. It’s also going to enhance the robustness of AI systems. So you may have heard about the Amazon Alexa system making a mistake.

Choi paused because her Alexa activated.

I will not say the name again. But that system recommended a child touch an electrical socket with a penny. Apparently, that happened because of an internet meme. Fortunately, the child was with her mom, and she knew enough not to do that. It’s a very bad idea. But currently, because the AI system doesn’t truly understand what it means to touch the electric socket with a penny, it’s just going to repeat what people said on the internet without filtering. That’s one example where, in terms of the robustness of AI systems as well as the safety of AI systems when they interact with humans, we need to teach them what language actually means and what it implies.

There’s been quite a lot of high-profile AI in the news recently: DALL-E won an art contest, a Google engineer was fired because he hired a lawyer to represent an AI he claimed was sentient. I was wondering what you made of those particular stories.

They reflect that AI is advancing fast in some capacities. It’s likely that it’s going to be more and more integrated with human lives in the coming years, but people also talk about how DALL-E makes silly mistakes when you ask a very simple compositional question, like putting something on top of something else.

We’ve talked a little bit about the abstract nature and the ultimate goals of your work. Can you tell me a little bit about the experiments you’re working on and what you’re researching now?

Maybe I can tell you a little bit about the common sense route. Common sense was the lofty goal of the AI field in the early days, the ’70s and ’80s. That was the number one goal back then. People quickly realized that, although it was super easy for humans, it was strangely difficult to write programs that encode common sense and then build a machine that can do the trivial things that humans can do. So AI researchers then decided that it was a stupid idea to work on it because it’s just too hard. Even saying the word was supposed to be bad. You were not going to get taken seriously if you said the word; it was such a taboo for the decades that followed that initial AI period. So when I started working on common sense a few years ago, that was exactly the reaction that I got from other people, who thought I was too naive.

We had a hunch that it could work much better than before, because things have changed a lot since the ’70s and ’80s. Today we have deep learning neural networks. Now we have a lot of data. Now we have a lot of computing power. And we also have crowdsourcing platforms that can support scientific research. Collectively, we studied making neural common sense models that can learn simple common sense knowledge and reason about the causes and effects of everyday events, like what a person might do as a reaction to a particular event. We built neural models, and that worked much better than people anticipated. The recipe behind that work is a symbolic knowledge graph that is used as a textbook to teach neural networks.
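To make that last idea concrete, here is a minimal, hypothetical sketch of what “using a symbolic knowledge graph as a textbook” can look like in practice: a handful of made-up (event, relation, inference) triples are used to fine-tune a small off-the-shelf sequence-to-sequence model, which is then asked about an event it never saw. This is not Choi’s group’s actual code or data; the model name (t5-small), the relation labels, and the example triples are illustrative assumptions, loosely in the spirit of her group’s published work on commonsense knowledge graphs and neural knowledge models.

```python
# Hypothetical sketch of the "knowledge graph as textbook" recipe:
# fine-tune a small sequence-to-sequence model on (event, relation, inference)
# triples so it learns to generate commonsense inferences as text.
# Model name, relation labels, and triples are illustrative assumptions.
from torch.optim import AdamW
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Toy knowledge-graph edges: (head event, relation, tail inference).
triples = [
    ("PersonX spills coffee on the laptop", "xEffect", "the laptop stops working"),
    ("PersonX spills coffee on the laptop", "xReact", "PersonX feels upset"),
    ("PersonX asks for the salt", "xIntent", "to season their food"),
]

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
optimizer = AdamW(model.parameters(), lr=3e-4)

# "Read the textbook": train the model to produce the tail given head + relation.
model.train()
for epoch in range(3):
    for head, relation, tail in triples:
        inputs = tokenizer(f"{head} <{relation}>", return_tensors="pt")
        labels = tokenizer(tail, return_tensors="pt").input_ids
        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Ask about an event that is not in the graph; with real training data the hope
# is that the model generalizes beyond the explicitly written edges.
model.eval()
query = tokenizer(
    "PersonX touches an electrical socket with a penny <xEffect>",
    return_tensors="pt",
)
output = model.generate(**query, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The point of the sketch is the shape of the recipe rather than the specifics: symbolic knowledge written down by people supplies the supervision, and the neural model’s job is to generalize that knowledge to events it has never seen.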

What do you think the MacArthur award will allow you to do that you weren’t doing before?

We’ve made some exciting progress toward common sense, but it’s still very far from making a real world impact, so there’s a lot more to be done. Pursuing a research direction that’s potentially seen as overly ambitious and therefore risky can be hard in terms of gaining resources or in terms of gaining community support.

I really didn’t imagine that I would ever get this kind of recognition, especially by doing research such as common sense AI models. It just seemed too adventurous for me to get this kind of recognition from the field. I did it primarily because I was excited about it. I was curious about it. I thought that somebody had to try, and even if I failed, the insights coming from the failure should be useful to the community. I was willing to take that risk because I didn’t care too much about following a safe route to success. I just wanted to have fun and adventure with a life that I live only once. I wanted to do what I am excited about, as opposed to living my life chasing success. I’m still having a hard time every morning convincing myself that this is all for real, that this happened, despite all these challenges that I had to go through working on this line of research over the past few years. I had very long-term impostor syndrome. Altogether, this was a big surprise.

This award has two meanings. One is resources that can be so helpful for me to pursue this road less taken. It’s wonderful to have the financial support toward that. But it’s also spiritual support and mental support, so that the failures along the way don’t feel like too much. When you take roads not taken, there are so many obstacles. It’s only romantic at the beginning. The whole route can be a lot of loneliness and struggle. Being in that mode for a prolonged time can be hard, so I really appreciate the encouragement that this award will give me and my collaborators.

I mentioned to a friend of mine, who’s a computer science researcher as well, that I was interviewing you. He said he was really interested in a paper you were involved with, called “Can Machines Learn Morality? The Delphi Experiment.” It was a bit controversial, highly discussed within the community. Were there any lessons you drew from working on that paper and the subsequent reaction to it?

It was a total surprise how much attention it drew. If I had known in advance how much attention it would draw, I would have approached it a little bit differently, especially the initial version of the online demo. We had a disclaimer, but people don’t pay attention to disclaimers and then take screenshots in ways that are very misleading. But here’s what I learned: First of all, I do think that it’s very important that we think about how to teach AI some sort of ethical, moral, and cultural norms so that it can interact with humans more safely and respectfully.

There are so many challenges, though: It needs to support diverse viewpoints and understand where the ethical norms might be ambiguous or where views diverge. There are cases where everyone might agree. For example, killing a person is one of the worst things you could do, versus, say, cutting in a long line. If somebody’s really sick and needs to cut the line, probably everybody’s okay with that. Then there are many other contexts in which it becomes a little bit harder to decide whether cutting the line is okay or not. Even the awareness that the decision varies a lot depending on the context is important. These are ambiguous cases. I think this is all an important aspect of the knowledge that we need to teach AI to better understand.

Perhaps one misunderstanding is the idea that AI always has to make binary decisions, as if there’s only one correct answer, and that answer might be different from my moral framework. Then it can be a big problem. But I don’t think AI should be doing that. It should understand where the ambiguous cases are, but minimally, it should learn not to violate human norms that are important, for example, by asking a child to touch an electrical socket.

AI systems are already making decisions that have moral implications, so not addressing it is not a solution. They’re already doing it.

Those are all the questions I have. Is there anything you think I haven’t asked about that I should?

Maybe I can add a bit to the moral disagreement piece. I’m quite interested in reducing biases in language, such as racism and sexism. A very interesting phenomenon I’ve found is that two people may not agree on whether something is sexist or not. Even among left-leaning people, depending on how far left you are, you may not agree on whether something is a microaggression or not. That makes it so difficult to build an AI that would make everyone happy. The AI might disagree with your level of understanding of sexism, for example. That’s one challenge. Another challenge is that this would be a step toward building an AI system that’s perfectly good, which in itself would be really hard. And then there’s this other challenge, which is that the “good” label can be subjective or controversial in itself. With all this in mind, when you build a system, you try to reduce the bias. You can imagine that, from some people’s perspective, an AI that’s presumably reducing or detecting sexism isn’t adequate because it’s not catching the examples they want to catch. The problem actually comes from humans, because it is humans who had all these issues from which the AI learned.


A lot of AI challenges are human challenges, just to summarize.
