[Photo courtesy of the University of Chicago. Maroon Staff / The Chicago Maroon]
By Isaac Krakowka
“[Is artificial intelligence] going to make us evil and worse versions of ourselves? I don’t know,” James Evans, a professor of sociology, said during a discussion titled “The Ethics of Artificial Intelligence.” The event, hosted by philosophy professor Agnes Callard as part of the Night Owls series, was held in University Church last Thursday night.
“I don’t have stances. I have questions,” Callard said when asked about her position on the ethics of artificial intelligence (AI). “I feel like my students are in the same position. We are not sure how to think about the fact that some pretty large percentage of our lives are lived online, and we’re also not sure about where it’s going.”
The discussion began with Callard grappling with two distinct viewpoints on the agency of the Internet in modern society.
“I have a kind of basic question about whether or not we’re living in a digital utopia in which the free exchange of ideas has never been easier or better, or a kind of privacy-encroaching social media hellscape in which our identities are being stolen by these online interactions,” Callard said.
“If everybody is connecting and collaborating and chit-chatting, this global collapse of differential culture...facilitates more combinations in the short term, but it deprives humanity of the possibility of diversity in the long term,” Evans said in response to Callard’s questions about privacy encroachment.
Theo Knights, a first-year master’s student, and Evan Zhao, a fourth-year undergraduate student, were among the many attendees at the talk. Zhao said he enjoys the lecture series because of the inquisitive approach that Callard and her colleagues bring to the topics under debate. “A lot of faculty don’t really make efforts to have this more public-facing, student-oriented event, so it’s interesting to come here and be able to see these faculty discussing ideas,” Zhao said.
Knights attended in part because he wanted to see how a non-sensationalized discussion of artificial intelligence would play out, noting that the issue prompts a great deal of debate but produces few agreed-upon facts.
“I think there is a lot of alarmism, so it’ll be interesting to see how that can be engaged, maybe discussed, from a place of knowledge rather than the usual speculation that a lot of people have about jobs disappearing because of AI,” Knights said.
Turning to the sociological implications of artificial intelligence, Evans and Callard discussed its potential social effects, including the increased homogeneity of ideas and the use of AI to predict human outcomes. AI is “predicting what scientific results are going to come up in the future, what scientific papers will be written next year,” Callard said. “It’s amazing that that can be done.”
Evans cited a reduction in society’s freedom of choice as a potential danger of AI technology.
“Any choice that I make online is the result of the placement of an opportunity which was generated as a prediction of the choice I would like to make,” Evans said.
Students also raised concerns about the accessibility of personal information and data targeted by AI. One student questioned whether predicting the disease dynamics of coronavirus is an ethical application of the technology. Another asked about artificial intelligence being used to recognize faces and track movement, diminishing an individual’s privacy.
“I don’t even know what privacy looks like or feels like anymore,” Evans said.
As the talk drew to a close, Evans and Callard agreed that much of AI’s impact on daily life remains unknown and that there is still a great deal to learn. “There are enormous challenges, and the biggest challenges are the challenges of actually figuring out what’s good and bad,” Evans said.
“The insights of these two are very interesting,” third-year Ian Ross said. “They sound very confident about a lot of answers, not confident about others. Just hearing that kind of informs me a little bit on what’s known and what’s thought about, at least in the philosophical community, about AI.”