AI can take over key management roles in scientific research
ESMT BERLIN
Researchers Maximilian Koehler, PhD candidate at ESMT, and Henry Sauermann, professor of strategy at ESMT, explore the role of AI not as a “worker” performing specific research tasks such as data collection and analysis, but as a “manager” of the human workers performing those tasks. Algorithmic management (AM) represents a significant shift in how research projects are conducted and can enable projects to operate at larger scale and with greater efficiency.
With the complexity and scope of scientific research rapidly increasing, the study illustrates that AI can not only replicate but also potentially surpass human managers by leveraging its instantaneous, comprehensive, and interactive capabilities. Investigating algorithmic management in crowd and citizen science, Koehler and Sauermann discuss examples of how AI effectively performs five important managerial functions: task division and allocation, direction, coordination, motivation, and supporting learning.
The researchers investigated projects through online documents; by interviewing organizers, AI developers, and project participants; and by joining some projects as participants. This allowed the researchers to identify projects that use algorithmic management, to understand how AI performs management functions, and to explore when AM might be more effective.
The growing number of use cases suggests that the adoption of AM could be a critical factor in improving research productivity. “The capabilities of artificial intelligence have reached a point where AI can now significantly enhance the scope and efficiency of scientific research by managing complex, large-scale projects,” states Koehler.
In a quantitative comparison with a broader sample of projects, the study also reveals that AM-enabled projects are often larger than projects that do not use AM and are associated with platforms that provide access to shared AI tools. This suggests that AM may enable projects to scale but also requires technical infrastructures that stand-alone projects may find difficult to develop. These patterns point towards changing sources of competitive advantage in research and may have important implications for research funders, digital research platforms, and larger research organizations such as universities or corporate R&D labs.
Although AI can take over important management functions, this does not mean that principal investigators or human managers will become obsolete. Sauermann notes, “If AI can take over some of the more algorithmic and mundane functions of management, human leaders could shift their attention to more strategic and social tasks such as identifying high-value research targets, raising funding, or building an effective organizational culture.”
For more information on this research, please contact Maximilian Koehler. The study “Algorithmic Management in Scientific Research” is published in the journal Research Policy.
--
About ESMT Berlin
ESMT Berlin is a leading global business school with its campus in the heart of Berlin. Founded by 25 global companies, ESMT offers master’s, MBA, and PhD programs, as well as executive education on its campus in Berlin, in locations around the world, online, and in online blended format. Focusing on leadership, innovation, and analytics, its diverse faculty publishes outstanding research in top academic journals. Additionally, the international business school provides an interdisciplinary platform for discourse between politics, business, and academia. ESMT is a non-profit private institution of higher education with the right to grant PhDs and is accredited by AACSB, AMBA, EQUIS, and ZEvA. It is committed to diversity, equity, and inclusion across all its activities and communities. esmt.berlin
JOURNAL
Research Policy
Trust your doctor: Study shows human medical professionals are more reliable than artificial intelligence tools
New research in the American Journal of Preventive Medicine puts the accuracy of advice given by large language models to the test
ELSEVIER
Ann Arbor, April 2, 2024 – When looking for medical information, people can use web search engines or large language models (LLMs) like ChatGPT-4 or Google Bard. However, these artificial intelligence (AI) tools have their limitations and can sometimes generate incorrect advice or instructions. A new study in the American Journal of Preventive Medicine, published by Elsevier, assesses the accuracy and reliability of AI-generated advice against established medical standards and finds that LLMs are not trustworthy enough to replace human medical professionals just yet.
Andrei Brateanu, MD, Department of Internal Medicine, Cleveland Clinic Foundation, says, "Web search engines can provide access to reputable sources of information, offering accurate details on a variety of topics such as preventive measures and general medical questions. Similarly, LLMs can offer medical information that may look very accurate and convincing, when in fact it may be occasionally inaccurate. Therefore, we thought it would be important to compare the answers from LLMs with data obtained from recognized medical organizations. This comparison helps validate the reliability of the medical information by cross-referencing it with trusted healthcare data."
In the study, 56 questions were posed to ChatGPT-4 and Bard, and their responses were evaluated for accuracy by two physicians, with a third resolving any disagreements. Final assessments found 28.6% of ChatGPT-4's answers accurate, 28.6% inaccurate, and 42.8% partially accurate but incomplete. Bard performed better, with 53.6% of answers accurate, 17.8% inaccurate, and 28.6% partially accurate.
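The rating scheme described above, two independent physician ratings with a third physician breaking ties, can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual code, and the sample ratings below are made-up examples:

```python
from collections import Counter

def final_rating(r1, r2, r3):
    """Two physicians rate an answer; a third resolves any disagreement."""
    return r1 if r1 == r2 else r3

def tally(ratings):
    """Return the share of each final rating category as a percentage."""
    counts = Counter(ratings)
    total = len(ratings)
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

# Hypothetical ratings for four answers: (physician 1, physician 2, tie-breaker)
raw = [
    ("accurate", "accurate", "accurate"),
    ("inaccurate", "partially accurate", "inaccurate"),
    ("partially accurate", "partially accurate", "accurate"),
    ("accurate", "inaccurate", "accurate"),
]
finals = [final_rating(*r) for r in raw]
shares = tally(finals)  # e.g. {"accurate": 50.0, "inaccurate": 25.0, ...}
```

With the real data, the same tally over 56 final ratings per model yields the percentages reported in the study.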
Dr. Brateanu explains, "All LLMs, including ChatGPT-4 and Bard, operate using complex mathematical algorithms. The fact that both models produced responses with inaccuracies or omitted crucial information highlights the ongoing challenge of developing AI tools that can provide dependable medical advice. This might come as a surprise, considering the advanced technology behind these models and their anticipated role in healthcare environments."
This research underscores the importance of being cautious and critical of medical information obtained from AI sources, reinforcing the need to consult healthcare professionals for accurate medical advice. For healthcare professionals, it points to the potential and limitations of using AI as a supplementary tool in providing patient care and emphasizes the ongoing need for oversight and verification of AI-generated information.
Dr. Brateanu concludes, "AI tools should not be seen as substitutes for medical professionals. Instead, they can be considered as additional resources that, when combined with human expertise, can enhance the overall quality of information provided. As we incorporate AI technology into healthcare, it's crucial to ensure that the essence of healthcare continues to be fundamentally human."
JOURNAL
American Journal of Preventive Medicine
METHOD OF RESEARCH
Content analysis
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Accuracy of Online Artificial Intelligence Models in Primary Care Settings
Study: AI writing, illustration emits hundreds of times less carbon than humans
While energy use is much lower, tech should not replace humans, authors argue
LAWRENCE — With the evolution of artificial intelligence comes discussion of the technology's environmental impact. A new study has found that for the tasks of writing and illustrating, AI emits hundreds of times less carbon than humans performing the same tasks. That does not mean, however, that AI can or should replace human writers and illustrators, the study’s authors argue.
Andrew Torrance, Paul E. Wilson Distinguished Professor of Law at KU, is co-author of a study that compared the emissions of established AI systems such as ChatGPT, Bloom AI, and DALL-E2 completing writing and illustration tasks with those of humans performing the same tasks.
Like cryptocurrency, AI has been subject to debate about the amount of energy it uses and its contributions to climate change. Human emissions and environmental impact have long been studied, but comparisons between the two have been scant. The authors conducted such a comparison and found that AI systems emit between 130 and 1,500 times less CO2e (carbon dioxide equivalent) per page of text generated than human writers, and that AI illustration systems emit between 310 and 2,900 times less CO2e per image than humans.
“I like to think of myself as driven by data, not just what I feel is true. We’ve had discussions about something that appears to be true in terms of AI emissions, but we wanted to look at the data and see if it truly is more efficient,” Torrance said. “When we did it, the results were kind of astonishing. Even by conservative estimates, AI is extremely less wasteful.”
The study, co-written with Bill Tomlinson, Rebecca Black and Donald Patterson of the University of California-Irvine, was published in the journal Nature.
To calculate the carbon footprint of a person writing, the researchers consulted the Energy Budget, a measure that considers the amount of energy used in certain tasks for a set period of time. For example, it is well established how much energy a computer with word processing software uses per hour. Multiplying that figure by the average time it takes a person to write a page of text (on average, 250 words) yields an estimate for human writing. Applying the same approach to the energy consumed by the processors that run AI systems such as ChatGPT, which can produce text much faster, yields the corresponding estimate for AI.
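The energy-budget arithmetic above amounts to a back-of-envelope calculation: power draw times task duration gives energy, and energy times grid carbon intensity gives emissions. The sketch below illustrates the method only; every figure in it (power draw, writing and generation times, grid intensity) is an assumed placeholder, not one of the study's actual inputs, so the resulting ratio is illustrative rather than a reproduction of the published numbers:

```python
# Assumed placeholder values -- not the study's inputs.
LAPTOP_WATTS = 75            # power draw of a computer running a word processor
HUMAN_HOURS_PER_PAGE = 0.8   # time for a person to write ~250 words
AI_WATTS = 300               # power draw of the processors serving an AI model
AI_SECONDS_PER_PAGE = 10     # time for the model to generate one page of text
GRID_KG_CO2E_PER_KWH = 0.4   # grid carbon intensity

def kg_co2e(watts, hours):
    """Energy used (kWh) times grid intensity gives kg of CO2 equivalent."""
    return (watts / 1000) * hours * GRID_KG_CO2E_PER_KWH

human_kg = kg_co2e(LAPTOP_WATTS, HUMAN_HOURS_PER_PAGE)
ai_kg = kg_co2e(AI_WATTS, AI_SECONDS_PER_PAGE / 3600)
ratio = human_kg / ai_kg  # how many times less CO2e the AI emits per page
```

The study additionally folds in per capita emissions (the human's share of overall lifestyle emissions during the writing time), which is what drives its much larger reported ratios.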
Researchers also considered per capita emissions of individuals in the United States and India. Residents of the former have approximate annual emissions of 15 metric tons CO2e, while the latter average 1.9 metric tons. The two nations were chosen because they have, respectively, the highest and lowest per capita environmental impact among countries with populations above 300 million, and to provide a look at how different levels of emissions in different parts of the world compare with AI.
Results showed that Bloom is 1,400 times less impactful than a U.S. resident writing a page of text and 180 times less impactful than a resident of India.
In terms of illustration, results showed that DALL-E2 emits approximately 2,500 times less CO2e per image than a U.S.-based artist and 310 times less than an India-based artist. Figures for Midjourney were 2,900 times less for the former and 370 times less for the latter.
As technologies improve and societies evolve, those figures are almost certain to change as well, Torrance said.
The authors wrote that carbon emissions are only one factor to consider when comparing AI production to human output. As the technologies exist now, they are often not capable of producing the quality of writing or art that a human can. As they improve, they hold the potential to both eliminate existing jobs and create new ones. Loss of employment has potential for substantial economic, societal and other forms of destabilization. For those and other reasons, the authors wrote, the best path forward is likely a collaboration between AI and human efforts, or a system in which people can use AI to be more efficient in their work and retain control of final products.
Legal issues such as the use of copyrighted material in training sets for AI must be considered, the authors wrote, as does the potential for an increase in artificially produced material to result in an increase in the energy it uses and resulting emissions. Collaboration between the two is the most beneficial use of both AI and human labor, the authors wrote.
“We don’t say AI is inherently good or that it is empirically better, just that when we looked at it in these instances, it was less energy consumptive,” Torrance said.
The research was conducted to improve understanding of AI and its environmental impact and to address the United Nations Sustainable Development Goals of ensuring sustainable consumption and production patterns and taking urgent action to combat climate change and its impacts, the researchers wrote.
For their part, the authors have begun to use AI as an aid in producing drafts for some of their writing, but they also agree on the necessity of carefully editing, and adding to, such drafts manually.
“This is not a curse, it’s a boon,” Torrance said of AI. “I think this will help make good writers great, mediocre writers good and democratize writing. It can make people more productive and can be an empowerment of human potential. I’m hugely optimistic that technology is getting better in most respects and lightening the effects we have on the Earth. We hope this is just the beginning and that people continue to dig into this issue further.”
JOURNAL
Nature
METHOD OF RESEARCH
Data/statistical analysis
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
The carbon emissions of writing and illustrating are lower for AI than for humans