MSU expert: How AI can help people understand research and increase trust in science
Michigan State University
EAST LANSING, Mich. – Have you ever read about a scientific discovery and felt like it was written in a foreign language? If you’re like most Americans, new scientific information can prove challenging to understand — especially if you try to tackle a science article in a research journal.
In an era when scientific literacy is crucial for informed decision-making, the abilities to communicate and comprehend complex content are more important than ever. Trust in science has been declining for years, and one contributing factor may be the challenge of understanding scientific jargon.
New research from David Markowitz, associate professor of communication at Michigan State University, points to a potential solution: using artificial intelligence, or AI, to simplify science communication. His work demonstrates that AI-generated summaries may help restore trust in scientists and, in turn, encourage greater public engagement with scientific issues — just by making scientific content more approachable. The question of trust is particularly important, as people often rely on science to inform decisions in their daily lives, from choosing what foods to eat to making critical health care choices.
Responses are excerpts from an article originally published in The Conversation.
How did simpler, AI-generated summaries affect the general public’s comprehension of scientific studies?
Artificial intelligence can generate summaries of scientific papers that make complex information more understandable for the public compared with human-written summaries, according to Markowitz’s recent study, which was published in PNAS Nexus. AI-generated summaries not only improved public comprehension of science but also enhanced how people perceived scientists.
Markowitz used a popular large language model, GPT-4 by OpenAI, to create simple summaries of scientific papers; this kind of text is often called a significance statement. The AI-generated summaries used simpler language — they were easier to read according to a readability index and used more common words, like “job” instead of “occupation” — than summaries written by the researchers who had done the work.
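As a rough illustration of the kind of readability comparison described above (not the study's actual analysis code), the following sketch scores two invented summaries with the open-source textstat Python package; the example texts and the choice of metrics are assumptions made for the sake of the example.

```python
# Minimal sketch: comparing the readability of two summaries.
# Assumes the `textstat` package (pip install textstat); the example texts
# are invented placeholders, not the summaries used in the study.
import textstat

human_summary = (
    "Occupational attainment is associated with longitudinal variation "
    "in subjective well-being across the adult lifespan."
)
ai_summary = (
    "People's jobs are linked to how their sense of well-being changes as they age."
)

for label, text in [("human-written", human_summary), ("AI-generated", ai_summary)]:
    ease = textstat.flesch_reading_ease(text)    # higher score = easier to read
    grade = textstat.flesch_kincaid_grade(text)  # approximate U.S. grade level
    print(f"{label}: reading ease = {ease:.1f}, grade level = {grade:.1f}")
```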
In one experiment, he found that readers of the AI-generated statements had a better understanding of the science, and they provided more detailed, accurate summaries of the content than readers of the human-written statements.
How did simpler, AI-generated summaries affect the general public’s perception of scientists?
In another experiment, participants rated the scientists whose work was described in simple terms as more credible and trustworthy than the scientists whose work was described in more complex terms.
In both experiments, participants did not know who wrote each summary; the simpler texts were always AI-generated and the more complex texts were always human-written. Ironically, when Markowitz asked participants who they believed wrote each summary, they assumed the more complex ones were written by AI and the simpler ones by humans.
What do we still need to learn about AI and science communication?
As AI continues to evolve, its role in science communication may expand, especially if using generative AI becomes more commonplace or sanctioned by journals. Indeed, the academic publishing field is still establishing norms regarding the use of AI. By simplifying scientific writing, AI could contribute to more engagement with complex issues.
While the benefits of AI-generated science communication are perhaps clear, ethical issues must also be weighed. Relying on AI to simplify scientific content risks stripping away nuance, which could lead to misunderstandings or oversimplification. There is also always the chance of errors if no one pays close attention. Additionally, transparency is critical: readers should be informed when AI is used to generate summaries, to avoid potential biases.
Simple science descriptions are preferable to and more beneficial than complex ones, and AI tools can help. But scientists could also achieve the same goals by working harder to minimize jargon and communicate clearly — no AI necessary.
###
Michigan State University has been advancing the common good with uncommon will for more than 165 years. One of the world’s leading public research universities, MSU pushes the boundaries of discovery to make a better, safer, healthier world for all while providing life-changing opportunities to a diverse and inclusive academic community through more than 400 programs of study in 17 degree-granting colleges.
For MSU news on the web, go to MSUToday or x.com/MSUnews.
Journal
PNAS Nexus
Method of Research
Content analysis
Subject of Research
Not applicable
Article Title
From complexity to clarity: How AI enhances perceptions of scientists and the public's understanding of science
Q&A: Promises and perils of AI in medicine, according to UW experts in public health and AI
University of Washington
In most doctors’ offices these days, you’ll find a pattern: Everybody’s Googling, all the time. Physicians search for clues to a diagnosis, or for reminders on the best treatment plans. Patients scour WebMD, tapping in their symptoms and doomscrolling a long list of possible problems.
But those constant searches leave something to be desired. Doctors don’t have the time to sift through pages of results, and patients don’t have the knowledge to digest medical research. Everybody has trouble finding the most reliable information.
Optimists believe artificial intelligence could help solve those problems, but the bots might not be ready for prime time. In a recent paper, Dr. Gary Franklin, a University of Washington research professor of environmental & occupational health sciences and of neurology in the UW School of Medicine, described a troubling experience with Google’s Gemini chatbot. When Franklin asked Gemini for information on the outcomes of a specific procedure – a decompressive brachial plexus surgery – the bot gave a detailed answer that cited two medical studies, neither of which existed.
Franklin wrote that it’s “buyer beware when it comes to using AI Chatbots for the purposes of extracting accurate scientific information or evidence-based guidance.” He recommended that AI experts develop specialized chatbots that pull information only from verified sources.
One expert working toward a solution is Lucy Lu Wang, a UW assistant professor in the Information School who focuses on making AI better at understanding and relaying scientific information. Wang has developed tools to extract important information from medical research papers, verify scientific claims, and make scientific images accessible to blind and low-vision readers.
UW News sat down with Franklin and Wang to discuss how AI could enhance health care, what’s standing in the way, and whether there’s a downside to democratizing medical research.
Each of you has studied the possibilities and perils of AI in health care, including the experiences of patients who ask chatbots for medical information. In a best-case scenario, how do you envision AI being used in health and medicine?
Gary Franklin: Doctors use Google a lot, but they also rely on services like UpToDate, which provide really great summaries of medical information and research. Most doctors have zero time and just want to be able to read something very quickly that is well documented. So from a physician’s perspective trying to find truthful answers, trying to make my practice more efficient, trying to coordinate things better — if this technology could meaningfully contribute to any of those things, then it would be unbelievably great.
I’m not sure how much doctors will use AI, but for many years, patients have been coming in with questions about what they found on the internet, like on WebMD. AI is just the next step of patients doing this, getting some guidance about what to do with the advice they’re getting. As an example, if a patient sees a surgeon who’s overly aggressive and says they need a big procedure, the patient could ask an AI tool what the broader literature might recommend. And I have concerns about that.
Lucy Lu Wang: I’ll take this question from the clinician’s perspective, and then from the patient’s perspective.
From the clinician’s perspective, I agree with what Gary said. Clinicians want to look up information very quickly because they’re so taxed and there’s limited time to treat patients. And you can imagine if the tools that we have, these chatbots, were actually very good at searching for information and very good at citing accurately, that they could become a better replacement for a type of tool like UpToDate, right? Because UpToDate is good, it’s human-curated, but it doesn’t always contain the most fine-grained information you might be looking for.
These tools could also potentially help clinicians with patient communication, because there’s not always enough time to follow up or explain things in a way that patients can understand. It’s an add-on part of the job for clinicians, and that’s where I think language models and these tools, in an ideal world, could be really beneficial.
Lastly, on the patient’s side, it would be really amazing to develop these tools that help with patient education and help increase the overall health literacy of the population, beyond what WebMD or Google does. These tools could engage patients with their own health and health care more than before.
Zooming out from the individual to the systemic, do you see any ways AI could make health systems as a whole function more smoothly?
GF: One thing I’m curious about is whether these tools can be used to help with coordination across the health care system and between physicians. It’s horrible. There was a book called “Crossing the Quality Chasm” that argued the main problem in American medicine is poor coordination across specialties, or between primary care and anybody else. It’s still horrible, because there’s no function in the medical field that actually does that. So that’s another question: Is there a role here for this kind of technology in coordinating health care?
LLW: There’s been a lot of work on tools that can summarize a patient’s medical history in their clinical notes, and that could be one way to perform this kind of communication between specialties. There’s another component, too: If patients can directly interact with the system, we can construct a better timeline of the patient’s experiences and how that relates to their clinical medical care.
We’ve done qualitative research with health care seekers that suggests there are lots of types of questions that people are less willing to ask their clinical provider, but much more willing to put into one of these models. So the models themselves are potentially addressing unmet needs that patients aren’t willing to directly share with their doctors.
What’s standing in the way of these best-case scenarios?
LLW: I think there are both technical challenges and socio-technical challenges. In terms of technical challenges, a lot of these models’ training doesn’t currently make them effective for tasks like scientific search and summarization.
First, these current chatbots are mostly trained to be general-purpose tools, so they’re meant to be OK at everything, but not great at anything. And I think there will be more targeted development towards these more specific tasks, things like scientific search with citations that Gary mentioned before. The current training methods tend to produce models that are instruction-following, and have a very large positive response bias in their outputs. That can lead to things like generating answers with citations that support the answer, even if those citations don’t exist in the real world. These models are also trained to be overconfident in their responses. If the way the model communicates is positive and overconfident, then it’s going to lead to lots of problems in a domain like health care.
And then, of course, there’s socio-technical problems, like, maybe these models should be developed with the specific goal of supporting scientific search. People are, in fact, working toward these things and have demonstrated good preliminary results.
GF: So are the folks in your field pretty confident that that can be overcome in a fairly short time?
LLW: I think the citation problem has already been overcome in research demonstration cases. If we, for example, hook up an LLM to PubMed search and allow it only to cite conclusions based on articles that are indexed in PubMed, then actually the models are very faithful to citations that are retrieved from that search engine. But if you use Gemini and ChatGPT, those are not always hooked up to those research databases.
GF: The problem is that a person trying to search using those tools doesn’t know that.
LLW: Right, that’s a problem. People tend to trust these things because, as an example, we now have AI-generated answers at the top of Google search, and people have historically trusted Google search to only index documents that people have written, maybe putting the ones that are more trustworthy at the top. But that AI-generated response can be full of misinformation. What’s happening is that some people are losing trust in traditional search as a consequence. It’s going to be hard to build back that trust, even if we improve the technology.
We’re really at the beginning of this technology. It took a long time for us to develop meaningful resources on the internet — things like Wikipedia or PubMed. Right now, these chatbots are general-purpose tools, but mixtures of models are already starting to appear underneath. And in the future, they’re going to get better at routing people’s queries to the correct expert models, whether that’s to the model hooked up to PubMed or to trusted documents published by various associations related to health care. And I think that’s likely where we’re headed in the next couple of years.
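As a rough illustration of the retrieval-grounded setup Wang describes, in which a model may cite only articles actually returned from a PubMed search, the sketch below uses the public NCBI E-utilities endpoints; the ask_llm function is a placeholder for whatever chat model is being grounded, and the prompt wording is an assumption rather than any product's actual implementation.

```python
# Minimal sketch of retrieval-grounded citation: the model may only cite
# articles actually returned by a PubMed search. Uses the public NCBI
# E-utilities API; `ask_llm` is a placeholder for any chat-model call.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def search_pubmed(query: str, max_results: int = 5) -> list[dict]:
    """Return PMIDs and titles for the top PubMed hits on `query`."""
    ids = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={"db": "pubmed", "term": query,
                "retmax": max_results, "retmode": "json"},
        timeout=30,
    ).json()["esearchresult"]["idlist"]
    if not ids:
        return []
    summaries = requests.get(
        f"{EUTILS}/esummary.fcgi",
        params={"db": "pubmed", "id": ",".join(ids), "retmode": "json"},
        timeout=30,
    ).json()["result"]
    return [{"pmid": pmid, "title": summaries[pmid]["title"]} for pmid in ids]

def answer_with_citations(question: str, ask_llm) -> str:
    """Constrain the model to cite only retrieved PMIDs, or admit it cannot answer."""
    articles = search_pubmed(question)
    sources = "\n".join(f"[PMID {a['pmid']}] {a['title']}" for a in articles)
    prompt = (
        "Answer the question using ONLY the sources listed below, citing them "
        "by PMID. If the sources are insufficient, say so explicitly.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```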
Trust and reliability issues aside, are there any potential downsides to deploying these tools widely? I can see a potential problem with people using chatbots to self-diagnose when it might be preferable to see a provider.
LLW: You think of a resource like WebMD: Was that a net positive or net negative? Before its existence, patients really did have a hard time finding any information at all. And of course, there’s limited face time with clinicians where people actually get to ask those questions. So for every patient who wrongly self-diagnoses on WebMD, there are probably also hundreds of patients who found a quick answer to a question. I think that with these models, it’s going to be similar. They’re going to help address some of the gaps in clinical care where we don’t currently have enough resources.
Journal
PLOS Digital Health
Method of Research
Commentary/editorial
Subject of Research
Not applicable
Article Title
Google’s new AI Chatbot produces fake health-related evidence – then self-corrects
GenAI4ED: The project set to transform secondary education with Generative Artificial Intelligence
IMDEA Networks participates in this innovative project funded by the European Commission under the Horizon Europe program
IMDEA Networks Institute
In October 2024, the GenAI4ED project officially began—a groundbreaking international initiative funded by Horizon Europe in which IMDEA Networks is actively involved. The project aims to explore how generative artificial intelligence (GenAI) tools can revolutionize secondary education. Scheduled to run until September 2027, GenAI4ED focuses on developing a digital platform to assess and select GenAI-based educational software, promoting its effective integration into classrooms.
The project’s goal is to provide personalized recommendations based on a set of predefined criteria. This will allow teachers and students to identify the tools best suited to their specific needs, taking into account not only the technical effectiveness of the tools but also their impact on the overall educational experience. “GenAI4ED proposes a framework to systematically assess the impact of generative AI tools on educators, focusing on their efficiency, working conditions, job satisfaction, and overall well-being,” explains Nikolaos Laoutaris, Research Professor and Principal Investigator of the project at IMDEA Networks.
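As a purely hypothetical sketch of how such criteria-based recommendations might be computed (the GenAI4ED platform is still under development, and the criteria names, scores, and weights below are invented for illustration):

```python
# Hypothetical sketch only: ranking GenAI education tools against predefined
# criteria with user-chosen weights. Criteria names, scores, and weights are
# invented for illustration; they are not the GenAI4ED platform's actual design.
CRITERIA = ["technical_effectiveness", "workload_reduction", "well_being_impact"]

def recommend(tools: list[dict], weights: dict[str, float], top_n: int = 3) -> list[str]:
    """Return the names of the top-scoring tools for the given weights."""
    def score(tool: dict) -> float:
        return sum(weights[c] * tool["scores"][c] for c in CRITERIA)
    return [t["name"] for t in sorted(tools, key=score, reverse=True)[:top_n]]

tools = [
    {"name": "ToolA", "scores": {"technical_effectiveness": 0.9,
                                 "workload_reduction": 0.4,
                                 "well_being_impact": 0.6}},
    {"name": "ToolB", "scores": {"technical_effectiveness": 0.7,
                                 "workload_reduction": 0.8,
                                 "well_being_impact": 0.8}},
]
# A teacher who cares most about reducing workload might weight it highest.
print(recommend(tools, {"technical_effectiveness": 0.3,
                        "workload_reduction": 0.5,
                        "well_being_impact": 0.2}))
```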
Synergy between humans and AI: the key to GenAI4ED
GenAI4ED champions an innovative approach that combines the potential of generative AI with human expertise. Generative AI is impacting society in multiple ways, but its application in education is particularly crucial, with “long-term consequences” that demand a thorough and carefully monitored analysis. “We need to closely monitor and make the best use of AI in educational contexts. We will employ AI, but not in isolation—rather, in a synergistic way with humans involved, including students, educators, AI specialists, and psychologists, among others,” emphasizes Laoutaris.
Three educational pilots
The digital platform will also be tested through several educational pilots conducted across different countries, assessing the impact of GenAI tools in real teaching environments.
- In Wonderful Education (Italy): 100 students and 30 teachers from public schools will participate in controlled testing scenarios.
- In the UK, at The Grammar School: 180 students aged 12–18 will explore the platform in subjects like STEM, languages, and arts.
- In Greece, at Ellinogermaniki Agogi: 15 teachers and 200 students will take part in a pilot project, with a school psychologist monitoring the psychological impact of daily use of the GenAI platform.
The project goes beyond simple technological evaluation. “One of the key objectives is to investigate the complementarity between generative AI and educators’ skills,” Laoutaris notes. Additionally, the project will explore the ethical and psychological implications of AI use in classrooms, seeking to address issues like technostress that can arise from the adoption of new technologies.
Ensuring ethical and legal compliance
IMDEA Networks plays a crucial role in ensuring that the developed tools meet regulatory and ethical standards. “IMDEA Networks will develop innovative techniques using large language models (LLMs) to audit the compliance of the platform and the educational tools with GDPR and education-specific regulations,” explains Laoutaris. The IMDEA Networks team will also focus on algorithmic design and data analysis, collaborating with other partners to implement these tools both as standalone applications and as part of integrated educational platforms.
Transforming education and society
The expected impact of GenAI4ED is significant for both education and society at large. As Laoutaris points out, “education is the foundation of our future as a society, and generative AI must play an important role in it.” The project has the potential to transform secondary education by optimizing educational resources while ensuring the ethical and responsible integration of technology that prioritizes the well-being of educators and students.
Ultimately, GenAI4ED aims not only to explore how generative AI tools can enhance secondary education but also to establish a solid foundation for their large-scale implementation. The project ensures that adoption is effective, ethical, and beneficial for all stakeholders involved.
New framework champions equity in AI for health care
(Toronto, November 18, 2024) A recent study published in the Journal of Medical Internet Research introduced the EDAI framework, a comprehensive guideline designed to embed equity, diversity, and inclusion (EDI) principles throughout the artificial intelligence (AI) lifecycle. Led by Dr Samira Abbasgholizadeh-Rahimi, PhD, the Canada Research Chair (Tier II) in AI and Advanced Digital Primary Health Care, the research addresses a significant gap in current AI development and implementation practices in health and oral health care, which often overlook critical EDI factors. With EDAI, AI developers, policymakers, and health care providers now have a roadmap to ensure AI systems are not only technologically sound but also socially responsible and accessible to all.
Through a 3-phase research approach, including a systematic literature review and two international workshops with over 60 experts and community representatives, the research team identified essential EDI indicators to weave into each stage of the AI lifecycle, from data collection to deployment. Co-designed with input from diverse voices, this framework puts inclusion at the forefront, ensuring that AI in health and oral health care reflects a range of perspectives and serves everyone more equitably and responsibly.
“The AI systems of today are often mirrors reflecting our societal biases rather than windows to a more equitable future. To use AI's power for societal good, we must ensure we use frameworks like EDAI to integrate EDI into its lifecycle. Only then can we transform these powerful tools into bridges that connect and uplift everyone, not just the privileged few,” said Dr Rahimi.
The study, funded by the Canadian Institutes of Health Research (CIHR) and a Fonds de recherche du Québec (FRQ) network, the Oral and Bone Health Research Network (RSBO), shows that embedding EDI principles into AI is about much more than checking a box: it is about tackling deeper biases within systems and organizations that can prevent AI from truly working for everyone. For example, the EDAI framework can be used by AI developers to design diagnostic tools that account for demographic and cultural diversity. Developers can ensure that datasets include diverse populations, enabling AI to provide accurate diagnoses across various demographics and preventing biases that have traditionally affected certain groups.
Similarly, when designing AI for health care management (such as scheduling or resource allocation), applying the EDAI framework during design could help ensure equitable care by optimizing these systems to prioritize underrepresented or underserved communities. For instance, an AI-based patient scheduling system developed and implemented with EDI principles in mind could identify underserved communities and marginalized groups facing accessibility challenges and facilitate their access to care.
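As one concrete, hypothetical illustration of the kind of check the framework encourages (this is not code from the study), a developer might audit how well each demographic group is represented in a dataset and how accurately a model performs for each group before deployment:

```python
# Hypothetical sketch (not from the study): auditing subgroup representation
# and per-group accuracy before deploying a diagnostic or scheduling model.
from collections import Counter

def representation_report(records: list[dict], group_key: str) -> dict[str, float]:
    """Share of the dataset belonging to each demographic group."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def per_group_accuracy(records: list[dict], predictions: list[int],
                       group_key: str = "group") -> dict[str, float]:
    """Accuracy computed separately for each demographic group."""
    correct, totals = Counter(), Counter()
    for record, predicted in zip(records, predictions):
        g = record[group_key]
        totals[g] += 1
        correct[g] += int(predicted == record["label"])
    return {g: correct[g] / totals[g] for g in totals}

# Toy data: a large gap between groups is a flag to revisit data collection
# or model design before deployment.
records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 1},
]
predictions = [1, 0, 0, 1]
print(representation_report(records, "group"))   # {'A': 0.5, 'B': 0.5}
print(per_group_accuracy(records, predictions))  # {'A': 1.0, 'B': 0.5}
```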
Along with offering practical steps and guidance, the EDAI framework sheds light on both the roadblocks and facilitators that can affect how EDI principles are incorporated, giving developers and policymakers the insight to tackle challenges and boost the framework’s impact. This initiative is setting the stage for a new standard in AI development and implementation, redefining how AI can enhance health and oral health care for everyone, regardless of background or circumstances.
###
About JMIR Publications:
JMIR Publications, celebrating its 25th anniversary in 2024, is a leading open access digital health research publisher. As a pioneer in open access publishing, JMIR Publications is committed to driving innovation in scholarly communications, advancing digital health research, and promoting open science principles. Our portfolio features 35 open access, peer-reviewed journals dedicated to the dissemination of high-quality research in the field of digital health, including the Journal of Medical Internet Research, as well as cross-disciplinary journals such as JMIR Research Protocols and the new title JMIR XR & Spatial Computing.
To learn more about JMIR Publications, please visit jmirpublications.com or connect with us via Twitter, LinkedIn, YouTube, Facebook, and Instagram.
Head office: 130 Queens Quay East, Unit 1100, Toronto, ON, M5A 0P6 Canada
Media contact: communications@jmir.org
The content of this communication is licensed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, published by JMIR Publications, is properly cited.
Journal
Journal of Medical Internet Research
Method of Research
Systematic review
Subject of Research
People
Article Title
EDAI Framework for Integrating Equity, Diversity, and Inclusion Throughout the Lifecycle of AI to Improve Health and Oral Health Care: Qualitative Study
Article Publication Date
15-Nov-2024
COI Statement
none declared
NTU Singapore start-up BrookieKids launches AI-powered interactive storytelling to help young children practice their mother tongue
Nanyang Technological University
Local education tech start-up BrookieKids has launched a groundbreaking digital library featuring over 50 Mandarin voice-interactive stories, which will grow with monthly releases of new stories.
Through these engaging, animated tales, children can converse verbally with the stories, boosting their conversational skills and Mandarin language proficiency in a fun and interactive way.
Questions are posed in every story, and the speech AI captures children's verbal responses, guiding the story's direction based on their choices.
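A minimal sketch of how such speech-driven branching might work is shown below; BrookieKids' actual algorithms are proprietary, so the story graph, keyword matching, and the placeholder transcribe function are all assumptions made for illustration.

```python
# Hypothetical sketch only: BrookieKids' actual algorithms are proprietary.
# A story is a small graph of nodes; the child's transcribed answer picks the
# next node. `transcribe()` stands in for a speech-to-text service.

STORY = {
    "start": {
        "text": "小兔子走到路口。它应该去公园还是回家？",  # park or home?
        "choices": {"公园": "park", "回家": "home"},
    },
    "park": {"text": "小兔子在公园里玩得很开心！", "choices": {}},
    "home": {"text": "小兔子回家吃晚饭了。", "choices": {}},
}

def transcribe(audio: bytes) -> str:
    raise NotImplementedError("placeholder for a speech-recognition call")

def next_node(node_id: str, spoken: str) -> str:
    """Pick the branch whose keyword appears in the child's transcribed answer."""
    for keyword, target in STORY[node_id]["choices"].items():
        if keyword in spoken:
            return target
    return node_id  # no match: stay on the same node and repeat the question

# Example: next_node("start", "我想去公园") returns "park".
```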
Accessible via the mobile application BrookieKids, short for Bilingual Rookies, the stories are designed to create more opportunities for preschoolers and lower primary students aged 3 to 8 to engage in playful and meaningful conversations in Mandarin within the comfort of their homes.
Founder of BrookieKids, Ms Melissa Ng, a former banker and a mother of one, said, “Many Singaporean parents recognise the importance of raising bilingual children from a young age, as this is when language capabilities develop most rapidly.
“As a parent myself, I know how difficult it is to balance the demands of daily life, so I thought it would be good to have an easy-to-use resource that parents can tap on for building language skills. I know from feedback that what parents appreciate most about BrookieKids is how it enables their children to speak and interact in Mandarin—something that can be challenging to provide at home.”
The new Speech AI-powered digital library in BrookieKids was developed in collaboration with the Confucius Institute at NTU. It is adapted from the Institute's preschool Mandarin curriculum, which has been purchased by over 500 preschools across Singapore.
Developed together with the experienced curriculum team at NTU, the interactive content in BrookieKids gives parents easy access to high-quality Mandarin material that is well aligned with local learning standards.
Director of Confucius Institute at NTU Dr Neo Peng Fu, said, “We are excited to partner with BrookieKids on this pilot project, where we’re bringing stories from our library to life in a creative and interactive way. By adapting these stories for a new generation, we’re helping children learn their mother tongue through fun and interactive storytelling, which makes language learning more enjoyable.”
How BrookieKids was developed
As an NTU accountancy and business alumna who graduated from the National Institute of Education (NIE) with a Master of Education in Developmental Psychology in 2020, Melissa decided to tap into her knowledge and experience to kick-start her venture together with two other co-founders.
They started BrookieKids in 2021, and Melissa received further support from her alma mater through the NTU Innovation and Entrepreneurship initiative, including mentorship in business development and access to networking opportunities.
NTU’s Vice President (Innovation and Entrepreneurship) Professor Louis Phee, said the University is committed to empowering start-ups like BrookieKids to succeed and create lasting impact.
“The NTU Innovation & Entrepreneurship (I&E) initiative supports start-ups from across the NTU community — whether they originate from students, faculty, or alumni — by helping to accelerate their innovations into the marketplace. We provide entrepreneurs with the tools and mentorship needed to scale their ventures responsibly, from business development to networking opportunities, including collaborations within NTU. We invite all alumni to engage us, to seek mentorship and join our various I&E programmes,” Prof Phee said.
Currently, BrookieKids can be downloaded from both the Apple App Store and Google Play Store, where users can enjoy a handful of free voice-interactive stories. Parents can access the full library via a monthly or annual subscription.
“Physical interaction between adult caregivers and children remains essential and is certainly the ideal way to support learning. However, technology can also be thoughtfully integrated to enrich the quality of interactions and enhance language exposure at home,” explained Melissa.
“By blending Speech AI with captivating, animated stories, we aim to create a fun and engaging environment where children can listen more (多听) and speak more (多说) in Mandarin. These are both critical skills for building a strong foundation in language proficiency during the early years.”
Her sentiments are echoed by many young parents, including Ms Amelia Tan, who highlights the importance of her 5-year-old son, Jonathan, enjoying the process of learning Mandarin and building his confidence for the future.
“The cute illustrations and engaging stories make a big difference in reinforcing his learning,” says Amelia, a marketer in the banking sector. “They help him understand vocabulary and context more effectively compared to rote memorisation of Mandarin phrases. He’s learning without feeling pressured, and that makes all the difference.”
Scalable to other mother tongues
Since BrookieKids is powered by proprietary algorithms that work well with existing speech recognition technologies, it is scalable to include other languages as well.
As part of its expansion plan, the team is in discussion with a local publisher to adapt Malay bilingual books onto its platform, supporting children learning the Malay Mother Tongue.
"Our mission is to empower parents, teachers, and the community with innovative solutions that nurture joyful childhoods and inspire children to become lifelong learners and active contributors. We also look forward to partnering with preschools to explore how our suite of solutions can best support their unique needs,” adds Melissa.
Limit hospital emissions by using short AI prompts - study
University of Reading
Hospitals must use artificial intelligence responsibly to avoid huge carbon emissions, new research has shown.
Released before Technology Day (Saturday, 16 November) at the COP29 climate conference in Baku, Azerbaijan, a study investigating the impact of artificial intelligence in healthcare has shown that using large language models to process thousands of patient records daily across multiple hospitals could lead to substantial resource consumption.
Published today (Friday, 15 November) in Internal Medicine Journal, researchers from the University of Adelaide and the University of Reading highlight ways in which hospitals can use AI responsibly - including using shorter prompts to summarise patient data.
Oliver Kleinig, who led the research from the University of Adelaide, said: “Every day you are in hospital, doctors, nurses, and other hospital professionals are documenting pages and pages about your health. By the end of a hospital stay, it is possible to accumulate tens of thousands of words to your name. Unlike busy healthcare staff, private large language models similar to ChatGPT have time to read through and process this information.
“However, with great processing power comes great responsibility. A single AI query uses enough electricity to charge a smartphone 11 times and consumes 20 millilitres of freshwater in Australian data centres. ChatGPT is estimated to use 15 times as much energy as Google.
“Implementing large language models across healthcare could have very significant environmental consequences. Hospital bosses need to think carefully about where and when artificial intelligence should be used in their organisations.”
Questions to consider
ChatGPT's daily carbon emissions already equal those of 400-800 US households. Healthcare AI systems would likely have an even larger footprint, as they require more powerful models to handle complex medical information and must be run locally for patient privacy.
Beyond energy consumption, the hardware needed for these AI systems requires extensive rare earth metal mining, potentially causing habitat destruction. The manufacturing process alone can double the carbon footprint of AI operations.
To reduce the impact of hospitals and medical centres on the environment, the researchers propose five key questions healthcare providers should consider before implementing AI systems, including:
- Does my organisation need a large language model? Could existing technology be sufficient?
- What LLM should I choose? Use the smallest possible model to decrease resource consumption - smaller, fine-tuned LLMs can outperform much larger general-purpose models.
- How can I optimise my LLM? Use shorter, more specific prompts to reduce the carbon impact of applications; succinct prompts with refined information are more energy efficient (see the sketch after this list).
- What hardware should I run my LLM from? Using hardware that runs on renewable energy is preferable.
- What data should I share? Maximise LLM efficiency by sharing data where appropriate.
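As a back-of-the-envelope sketch of the prompt-trimming point above, the snippet below compares a verbose and a succinct prompt; the tokens-per-word ratio and the energy-per-token constant are illustrative assumptions, not figures from the study.

```python
# Back-of-the-envelope sketch: shorter prompts mean fewer tokens processed and,
# roughly, proportionally less energy per query. Both constants below are
# illustrative assumptions, not figures from the study.
ASSUMED_WH_PER_1K_TOKENS = 0.3   # invented energy cost per 1,000 prompt tokens
TOKENS_PER_WORD = 1.33           # common rule of thumb for English text

def rough_tokens(text: str) -> int:
    return int(len(text.split()) * TOKENS_PER_WORD)

def estimated_wh_per_day(prompt: str, queries_per_day: int) -> float:
    return rough_tokens(prompt) / 1000 * ASSUMED_WH_PER_1K_TOKENS * queries_per_day

verbose = ("Please read the following complete hospital admission record and "
           "produce an exhaustive narrative covering every documented detail, "
           "including administrative notes and routine observations.")
succinct = "Summarise the key diagnoses, active medications and allergies from this record."

for label, prompt in [("verbose", verbose), ("succinct", succinct)]:
    print(label, round(estimated_wh_per_day(prompt, queries_per_day=10_000), 1), "Wh/day")
```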
The study suggests AI could potentially reduce healthcare's environmental impact in other ways, such as improving patient flow and reducing paper use.
Journal
Internal Medicine Journal
Method of Research
Meta-analysis
Subject of Research
Not applicable
Article Title
The environmental impact of large language models in medicine
Article Publication Date
15-Nov-2024