If art is how we express our humanity, where does AI fit in?
Media Lab researcher Ziv Epstein discusses issues arising from the use of generative AI to make art and other media
Peer-Reviewed Publication

The rapid advance of artificial intelligence has generated a lot of buzz, with some predicting it will lead to an idyllic utopia and others warning it will bring about the end of humanity. But speculation about where AI technology is going, while important, can also drown out pressing conversations about how we should be handling the AI technologies available today.
One such technology is generative AI, which can create content including text, images, audio, and video. Popular generative AIs like the chatbot ChatGPT generate conversational text based on training data taken from the internet.
Today a group of 14 researchers from a number of organizations including MIT published a commentary article in Science that helps set the stage for discussions about generative AI’s immediate impact on creative work and society more broadly. The paper’s MIT-affiliated co-authors include Media Lab postdoctoral researcher Ziv Epstein SM ’19, PhD ’23; recent graduate Matt Groh SM ’19, PhD ’23; MIT PhD candidate Rob Mahari ’17; and Media Lab research assistant Hope Schroeder.
MIT News spoke with Epstein, the lead author of the paper.
Q: Why did you write this paper?
A: Generative AI tools are doing things that even a few years ago we never thought would be possible. This raises a lot of fundamental questions about the creative process and the human’s role in creative production. Are we going to get automated out of jobs? How are we going to preserve the human aspect of creativity with all of these new technologies?
The complexity of black-box AI systems can make it hard for researchers and the broader public to understand what’s happening under the hood, and what the impacts of these tools on society will be. Many discussions about AI anthropomorphize the technology, implicitly suggesting these systems exhibit human-like intent, agency, or self-awareness. Even the term “artificial intelligence” reinforces these beliefs: ChatGPT uses first-person pronouns, and we say AIs “hallucinate.” These agentic roles we give AIs can undermine the credit owed to the creators whose labor underlies the systems’ outputs, and can deflect responsibility from the developers and decision makers when the systems cause harm.
We’re trying to build coalitions across academia and beyond to help think about the interdisciplinary connections and research areas necessary to grapple with the immediate dangers to humans coming from the deployment of these tools, such as disinformation, job displacement, and changes to legal structures and culture.
Q: What do you see as the gaps in research around generative AI and art today?
A: The way we talk about AI is broken in many ways. We need to understand how perceptions of the generative process affect attitudes toward outputs and authors, and also design the interfaces and systems in a way that is really transparent about the generative process and avoids some of these misleading interpretations. How do we talk about AI and how do these narratives cut along lines of power? As we outline in the article, there are these themes around AI’s impact that are important to consider: aesthetics and culture; legal aspects of ownership and credit; labor; and the impacts to the media ecosystem. For each of those we highlight the big open questions.
With aesthetics and culture, we’re considering how past art technologies can inform how we think about AI. For example, when photography was invented, some painters said it was “the end of art.” But instead it ended up being its own medium and eventually liberated painting from realism, giving rise to Impressionism and the modern art movement. We’re saying generative AI is a medium with its own affordances. The nature of art will evolve with that. How will artists and creators express their intent and style through this new medium?
Issues around ownership and credit are tricky because we need copyright law that benefits creators, users, and society at large. Today’s copyright laws might not adequately apportion rights to artists when these systems are training on their styles. When it comes to training data, what does it mean to copy? That’s a legal question, but also a technical question. We’re trying to understand if these systems are copying, and when.
For labor economics and creative work, the idea is these generative AI systems can accelerate the creative process in many ways, but they can also remove the ideation process that starts with a blank slate. Sometimes, there’s actually good that comes from starting with a blank page. We don’t know how it’s going to influence creativity, and we need a better understanding of how AI will affect the different stages of the creative process. We need to think carefully about how we use these tools to complement people’s work instead of replacing it.
In terms of generative AI’s effect on the media ecosystem, with the ability to produce synthetic media at scale, the risk of AI-generated misinformation must be considered. We need to safeguard the media ecosystem against the possibility of massive fraud on one hand, and people losing trust in real media on the other.
Q: How do you hope this paper is received — and by whom?
A: The conversation about AI has been very fragmented and frustrating. Because the technologies are moving so fast, it’s been hard to think deeply about these ideas. To ensure the beneficial use of these technologies, we need to build shared language and start to understand where to focus our attention. We’re hoping this paper can be a step in that direction. We’re trying to start a conversation that can help us build a roadmap toward understanding this fast-moving situation.
Artists are often at the vanguard of new technologies. They’re playing with the technology long before there are commercial applications. They’re exploring how it works, and they’re wrestling with the ethics of it. AI art has been around for over a decade, and for just as long these artists have been grappling with the questions we now face as a society. I think it is critical to uplift the voices of the artists and other creative laborers whose jobs will be impacted by these tools. Art is how we express our humanity. It’s a core human, emotional part of life. In that way we believe it’s at the center of broader questions about AI’s impact on society, and hopefully this paper can help ground that discussion.
###
Written by Zach Winn, MIT News Office
JOURNAL
Science
ARTICLE TITLE
Art and the science of generative AI
ARTICLE PUBLICATION DATE
16-Jun-2023
AI could replace humans in social science research
Researchers from the Universities of Waterloo and Toronto, Yale University, and the University of Pennsylvania discuss AI and its application to their work
Peer-Reviewed Publication

In an article published yesterday in the prestigious journal Science, leading researchers from the University of Waterloo, University of Toronto, Yale University and the University of Pennsylvania look at how AI (large language models or LLMs in particular) could change the nature of their work.
“What we wanted to explore in this article is how social science research practices can be adapted, even reinvented, to harness the power of AI,” said Igor Grossmann, professor of psychology at Waterloo.
Grossmann and colleagues note that large language models trained on vast amounts of text data are increasingly capable of simulating human-like responses and behaviours. This offers novel opportunities for testing theories and hypotheses about human behaviour at great scale and speed.
Traditionally, social sciences rely on a range of methods, including questionnaires, behavioral tests, observational studies, and experiments. A common goal in social science research is to obtain a generalized representation of characteristics of individuals, groups, cultures, and their dynamics. With the advent of advanced AI systems, the landscape of data collection in social sciences may shift.
“AI models can represent a vast array of human experiences and perspectives, possibly giving them a higher degree of freedom to generate diverse responses than conventional human participant methods, which can help to reduce generalizability concerns in research,” said Grossmann.
“LLMs might supplant human participants for data collection,” said UPenn psychology professor Philip Tetlock. “In fact, LLMs have already demonstrated their ability to generate realistic survey responses concerning consumer behaviour. Large language models will revolutionize human-based forecasting in the next 3 years. It won’t make sense for humans unassisted by AIs to venture probabilistic judgments in serious policy debates. I put a 90% chance on that. Of course, how humans react to all of that is another matter.”
While opinions on the feasibility of this application of advanced AI systems vary, studies using simulated participants could generate novel hypotheses that could then be confirmed in human populations.
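To make the idea of simulated participants concrete, here is a minimal Python sketch. It is not from the Science article: the personas, the prompt wording, and the query_llm stub are purely illustrative, and a real study would swap the stub for an actual chat-completion call and a validated survey instrument.

```python
import random
from dataclasses import dataclass


@dataclass
class SimulatedParticipant:
    """A hypothetical persona the language model is asked to role-play."""
    age: int
    occupation: str
    country: str


def build_prompt(persona: SimulatedParticipant, question: str) -> str:
    # Combine the persona description and the survey item into one instruction
    # for a chat-style language model.
    return (
        f"You are a {persona.age}-year-old {persona.occupation} from {persona.country}. "
        "Answer the survey question on a 1-7 scale "
        "(1 = strongly disagree, 7 = strongly agree). Reply with the number only.\n\n"
        f"Question: {question}"
    )


def query_llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call (any hosted or local LLM);
    # it returns a random rating so the sketch runs without credentials.
    return str(random.randint(1, 7))


personas = [
    SimulatedParticipant(34, "teacher", "Canada"),
    SimulatedParticipant(58, "farmer", "Brazil"),
    SimulatedParticipant(22, "software developer", "Germany"),
]
question = "I trust most people I meet for the first time."

responses = [int(query_llm(build_prompt(p, question))) for p in personas]
print(f"Simulated mean agreement: {sum(responses) / len(responses):.2f}")
```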
But the researchers warn of some of the possible pitfalls in this approach – including the fact that LLMs are often trained to exclude socio-cultural biases that exist for real-life humans. This means that sociologists using AI in this way couldn’t study those biases.
Professor Dawn Parker, a co-author on the article from the University of Waterloo, notes that researchers will need to establish guidelines for the governance of LLMs in research.
“Pragmatic concerns with data quality, fairness, and equity of access to the powerful AI systems will be substantial,” Parker said. “So, we must ensure that social science LLMs, like all scientific models, are open-source, meaning that their algorithms and ideally data are available to all to scrutinize, test, and modify. Only by maintaining transparency and replicability can we ensure that AI-assisted social science research truly contributes to our understanding of human experience.”
JOURNAL
Science
Researchers test AI-powered chatbot's medical diagnostic ability
Generative artificial intelligence could serve as a promising tool to assist medical professionals
Peer-Reviewed Publication

BOSTON – In a recent experiment published in JAMA, physician-researchers at Beth Israel Deaconess Medical Center (BIDMC) tested one well-known publicly available chatbot’s ability to make accurate diagnoses in challenging medical cases. The team found that the generative AI, ChatGPT-4, selected the correct diagnosis as its top diagnosis nearly 40 percent of the time and provided the correct diagnosis in its list of potential diagnoses in two-thirds of challenging cases.
Generative AI refers to a type of artificial intelligence that uses patterns and information it has been trained on to create new content, rather than simply processing and analyzing existing data. Some of the most well-known examples of generative AI are so-called chatbots, which use a branch of artificial intelligence called natural language processing (NLP) that allows computers to understand, interpret, and generate human-like language. Generative AI chatbots are powerful tools poised to revolutionize creative industries, education, customer service, and more. However, little is known about how they might perform in clinical settings, such as in complex diagnostic reasoning.
“Recent advances in artificial intelligence have led to generative AI models that are capable of detailed text-based responses that score highly in standardized medical examinations,” said Adam Rodman, MD, MPH, co-director of the Innovations in Media and Education Delivery (iMED) Initiative at BIDMC and an instructor in medicine at Harvard Medical School. “We wanted to know if such a generative model could ‘think’ like a doctor, so we asked one to solve standardized complex diagnostic cases used for educational purposes. It did really, really well.”
To assess the chatbot’s diagnostic skills, Rodman and colleagues used clinicopathological case conferences (CPCs), a series of complex and challenging patient cases including relevant clinical and laboratory data, imaging studies, and histopathological findings published in the New England Journal of Medicine for educational purposes.
Across the 70 CPC cases evaluated, the artificial intelligence exactly matched the final CPC diagnosis in 27 cases (39 percent). In 64 percent of the cases, the final CPC diagnosis was included in the AI’s differential, a list of possible conditions that could account for a patient’s symptoms, medical history, clinical findings, and laboratory or imaging results.
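These two headline numbers correspond to a top-diagnosis accuracy and an in-differential rate. The short Python sketch below, using invented case data rather than the study’s, illustrates how such metrics are typically tallied from a model’s ranked differentials.

```python
from dataclasses import dataclass


@dataclass
class CaseResult:
    """One CPC case: the reference diagnosis and the model's ranked differential."""
    final_diagnosis: str
    model_differential: list[str]  # ordered, most likely diagnosis first


def score(cases: list[CaseResult]) -> tuple[float, float]:
    # Fraction of cases where the model's top-ranked diagnosis matches the
    # reference, and fraction where the reference appears anywhere in the
    # differential. Assumes every differential has at least one entry.
    top_match = sum(c.model_differential[0] == c.final_diagnosis for c in cases)
    in_differential = sum(c.final_diagnosis in c.model_differential for c in cases)
    n = len(cases)
    return top_match / n, in_differential / n


# Invented toy data for illustration only; the study itself scored 70 NEJM CPC cases.
cases = [
    CaseResult("sarcoidosis", ["sarcoidosis", "tuberculosis", "lymphoma"]),
    CaseResult("giant cell arteritis", ["polymyalgia rheumatica", "giant cell arteritis"]),
    CaseResult("amyloidosis", ["multiple myeloma", "chronic kidney disease"]),
]

top1, anywhere = score(cases)
print(f"Top-diagnosis accuracy: {top1:.0%}; in-differential rate: {anywhere:.0%}")
```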
“While chatbots cannot replace the expertise and knowledge of a trained medical professional, generative AI is a promising potential adjunct to human cognition in diagnosis,” said first author Zahir Kanjee, MD, MPH, a hospitalist at BIDMC and assistant professor of medicine at Harvard Medical School. “It has the potential to help physicians make sense of complex medical data and broaden or refine our diagnostic thinking. We need more research on the optimal uses, benefits and limits of this technology, and a lot of privacy issues need sorting out, but these are exciting findings for the future of diagnosis and patient care.”
“Our study adds to a growing body of literature demonstrating the promising capabilities of AI technology,” said co-author Byron Crowe, MD, an internal medicine physician at BIDMC and an instructor in medicine at Harvard Medical School. “Further investigation will help us better understand how these new AI models might transform health care delivery.”
This work did not receive separate funding or sponsorship. Kanjee reports royalties for books edited and membership of a paid advisory board for medical education products not related to artificial intelligence from Wolters Kluwer, as well as honoraria for CME delivered from Oakstone Publishing. Crowe reports employment by Solera Health outside the submitted work. Rodman reports no conflicts of interest.
About Beth Israel Deaconess Medical Center
Beth Israel Deaconess Medical Center is a leading academic medical center, where extraordinary care is supported by high-quality education and research. BIDMC is a teaching affiliate of Harvard Medical School, and consistently ranks as a national leader among independent hospitals in National Institutes of Health funding. BIDMC is the official hospital of the Boston Red Sox.
Beth Israel Deaconess Medical Center is a part of Beth Israel Lahey Health, a health care system that brings together academic medical centers and teaching hospitals, community and specialty hospitals, more than 4,800 physicians and 36,000 employees in a shared mission to expand access to great care and advance the science and practice of medicine through groundbreaking research and education.
JOURNAL
JAMA