Monday, November 25, 2024

  

MSU expert: How AI can help people understand research and increase trust in science



Michigan State University




EAST LANSING, Mich. – Have you ever read about a scientific discovery and felt like it was written in a foreign language? If you’re like most Americans, new scientific information can prove challenging to understand — especially if you try to tackle a science article in a research journal.

In an era when scientific literacy is crucial for informed decision-making, the abilities to communicate and comprehend complex content are more important than ever. Trust in science has been declining for years, and one contributing factor may be the challenge of understanding scientific jargon.

New research from David Markowitz, associate professor of communication at Michigan State University, points to a potential solution: using artificial intelligence, or AI, to simplify science communication. His work demonstrates that AI-generated summaries may help restore trust in scientists and, in turn, encourage greater public engagement with scientific issues — just by making scientific content more approachable. The question of trust is particularly important, as people often rely on science to inform decisions in their daily lives, from choosing what foods to eat to making critical health care choices.

Responses are excerpts from an article originally published in The Conversation.

How did simpler, AI-generated summaries affect the general public’s comprehension of scientific studies?

Artificial intelligence can generate summaries of scientific papers that make complex information more understandable for the public compared with human-written summaries, according to Markowitz’s recent study, which was published in PNAS Nexus. AI-generated summaries not only improved public comprehension of science but also enhanced how people perceived scientists.

Markowitz used a popular large language model, GPT-4 by OpenAI, to create simple summaries of scientific papers; this kind of text is often called a significance statement. The AI-generated summaries used simpler language — they were easier to read according to a readability index and used more common words, like “job” instead of “occupation” — than summaries written by the researchers who had done the work.
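The readability comparison can be illustrated with a minimal sketch. This is not Markowitz's analysis pipeline; it is a toy Flesch Reading Ease calculator with a crude syllable heuristic, and the two example sentences are invented, just to show how such an index rewards shorter sentences and more common words:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels; at least one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    # Higher scores indicate easier reading.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)

# Invented example sentences: plain wording vs. jargon-heavy wording.
simple = "The job market is changing. New tools help workers learn skills."
jargon = "Occupational restructuring necessitates continuous competency acquisition."
print(flesch_reading_ease(simple) > flesch_reading_ease(jargon))  # True: plain text scores higher
```

The same intuition drives word choices like "job" over "occupation": fewer syllables per word and fewer words per sentence both push the score up.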

In one experiment, he found that readers of the AI-generated statements had a better understanding of the science, and they provided more detailed, accurate summaries of the content than readers of the human-written statements.

How did simpler, AI-generated summaries affect the general public’s perception of scientists?

In another experiment, participants rated the scientists whose work was described in simple terms as more credible and trustworthy than the scientists whose work was described in more complex terms.

In both experiments, participants did not know who wrote each summary. The simpler texts were always AI-generated, and the complex texts were always human-generated. When asked who they believed wrote each summary, participants ironically thought the more complex ones were written by AI and the simpler ones were written by humans.

What do we still need to learn about AI and science communication?

As AI continues to evolve, its role in science communication may expand, especially if using generative AI becomes more commonplace or sanctioned by journals. Indeed, the academic publishing field is still establishing norms regarding the use of AI. By simplifying scientific writing, AI could contribute to more engagement with complex issues.

While the benefits of AI-generated science communication are perhaps clear, ethical considerations must also be weighed. There is some risk that relying on AI to simplify scientific content may remove nuance, potentially leading to misunderstandings or oversimplifications. There’s always the chance of errors, too, if no one pays close attention. Additionally, transparency is critical. Readers should be informed when AI is used to generate summaries to avoid potential biases.

Simple science descriptions are clearer and more beneficial than complex ones, and AI tools can help produce them. But scientists could also achieve the same goals by working harder to minimize jargon and communicate clearly — no AI necessary.

###

Michigan State University has been advancing the common good with uncommon will for more than 165 years. One of the world’s leading public research universities, MSU pushes the boundaries of discovery to make a better, safer, healthier world for all while providing life-changing opportunities to a diverse and inclusive academic community through more than 400 programs of study in 17 degree-granting colleges.

For MSU news on the web, go to MSUToday or x.com/MSUnews.

Q&A: Promises and perils of AI in medicine, according to UW experts in public health and AI



University of Washington




In most doctors’ offices these days, you’ll find a pattern: Everybody’s Googling, all the time. Physicians search for clues to a diagnosis, or for reminders on the best treatment plans. Patients scour WebMD, tapping in their symptoms and doomscrolling a long list of possible problems.  

But those constant searches leave something to be desired. Doctors don’t have the time to sift through pages of results, and patients don’t have the knowledge to digest medical research. Everybody has trouble finding the most reliable information.  

Optimists believe artificial intelligence could help solve those problems, but the bots might not be ready for prime time. In a recent paper, Dr. Gary Franklin, a University of Washington research professor of environmental & occupational health sciences and of neurology in the UW School of Medicine, described a troubling experience with Google’s Gemini chatbot. When Franklin asked Gemini for information on the outcomes of a specific procedure – a decompressive brachial plexus surgery – the bot gave a detailed answer that cited two medical studies, neither of which existed.

Franklin wrote that it’s “buyer beware when it comes to using AI Chatbots for the purposes of extracting accurate scientific information or evidence-based guidance.” He recommended that AI experts develop specialized chatbots that pull information only from verified sources.  

One expert working toward a solution is Lucy Lu Wang, a UW assistant professor in the Information School who focuses on making AI better at understanding and relaying scientific information. Wang has developed tools to extract important information from medical research papers, verify scientific claims, and make scientific images accessible to blind and low-vision readers.

UW News sat down with Franklin and Wang to discuss how AI could enhance health care, what’s standing in the way, and whether there’s a downside to democratizing medical research.  

Each of you has studied the possibilities and perils of AI in health care, including the experiences of patients who ask chatbots for medical information. In a best-case scenario, how do you envision AI being used in health and medicine? 

Gary Franklin: Doctors use Google a lot, but they also rely on services like UpToDate, which provide really great summaries of medical information and research. Most doctors have zero time and just want to be able to read something very quickly that is well documented. So from a physician’s perspective trying to find truthful answers, trying to make my practice more efficient, trying to coordinate things better — if this technology could meaningfully contribute to any of those things, then it would be unbelievably great. 

I’m not sure how much doctors will use AI, but for many years, patients have been coming in with questions about what they found on the internet, like on WebMD. AI is just the next step of patients doing this, getting some guidance about what to do with the advice they’re getting. As an example, if a patient sees a surgeon who’s overly aggressive and says they need a big procedure, the patient could ask an AI tool what the broader literature might recommend. And I have concerns about that. 

Lucy Lu Wang: I’ll take this question from the clinician’s perspective, and then from the patient’s perspective.  

From the clinician’s perspective, I agree with what Gary said. Clinicians want to look up information very quickly because they’re so taxed and there’s limited time to treat patients. And you can imagine if the tools that we have, these chatbots, were actually very good at searching for information and very good at citing accurately, that they could become a better replacement for a type of tool like UpToDate, right? Because UpToDate is good, it’s human-curated, but it doesn’t always contain the most fine-grained information you might be looking for. 

These tools could also potentially help clinicians with patient communication, because there’s not always enough time to follow up or explain things in a way that patients can understand. It’s an add-on part of the job for clinicians, and that’s where I think language models and these tools, in an ideal world, could be really beneficial. 

Lastly, on the patient’s side, it would be really amazing to develop these tools that help with patient education and help increase the overall health literacy of the population, beyond what WebMD or Google does. These tools could engage patients with their own health and health care more than before.  

Zooming out from the individual to the systemic, do you see any ways AI could make health systems as a whole function more smoothly? 

GF: One thing I’m curious about is whether these tools can be used to help with coordination across the health care system and between physicians. It’s horrible. There was a book called “Crossing the Quality Chasm” that argued the main problem in American medicine is poor coordination across specialties, or between primary care and anybody else. It’s still horrible, because there’s no function in the medical field that actually does that. So that’s another question: Is there a role here for this kind of technology in coordinating health care? 

LLW: There’s been a lot of work on tools that can summarize a patient’s medical history in their clinical notes, and that could be one way to perform this kind of communication between specialties. There’s another component, too: If patients can directly interact with the system, we can construct a better timeline of the patient’s experiences and how that relates to their clinical medical care. 

We’ve done qualitative research with health care seekers that suggests there are lots of types of questions that people are less willing to ask their clinical provider, but much more willing to put into one of these models. So the models themselves are potentially addressing unmet needs that patients aren’t willing to directly share with their doctors. 

What’s standing in the way of these best-case scenarios?  

LLW: I think there are both technical challenges and socio-technical challenges. In terms of technical challenges, a lot of these models’ training doesn’t currently make them effective for tasks like scientific search and summarization.  

First, these current chatbots are mostly trained to be general-purpose tools, so they’re meant to be OK at everything, but not great at anything. And I think there will be more targeted development towards these more specific tasks, things like scientific search with citations that Gary mentioned before. The current training methods tend to produce models that are instruction-following, and have a very large positive response bias in their outputs. That can lead to things like generating answers with citations that support the answer, even if those citations don’t exist in the real world. These models are also trained to be overconfident in their responses. If the way the model communicates is positive and overconfident, then it’s going to lead to lots of problems in a domain like health care.  

And then, of course, there’s socio-technical problems, like, maybe these models should be developed with the specific goal of supporting scientific search. People are, in fact, working toward these things and have demonstrated good preliminary results. 

GF: So are the folks in your field pretty confident that that can be overcome in a fairly short time? 

LLW: I think the citation problem has already been overcome in research demonstration cases. If we, for example, hook up an LLM to PubMed search and allow it only to cite conclusions based on articles that are indexed in PubMed, then actually the models are very faithful to citations that are retrieved from that search engine. But if you use Gemini and ChatGPT, those are not always hooked up to those research databases.  
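The retrieval-constrained citation approach Wang describes can be sketched in a few lines. This is purely illustrative, not her actual system: `retrieve_pubmed_ids` stands in for a real PubMed query, and the toy index and PMIDs below are invented. The point is simply that if an answer may only cite what the search engine actually returned, hallucinated references are rejected mechanically:

```python
# Illustrative sketch: restrict a model's citations to articles returned by a
# trusted index, and reject any answer citing something outside that set.
def retrieve_pubmed_ids(query: str, index: dict[str, str]) -> set[str]:
    """Stand-in for a PubMed search: return IDs of indexed articles matching the query."""
    return {pmid for pmid, title in index.items() if query.lower() in title.lower()}

def validate_citations(answer_citations: set[str], retrieved: set[str]) -> bool:
    """Accept an answer only if every citation was actually retrieved."""
    return answer_citations <= retrieved

# Toy index standing in for PubMed (invented entries).
index = {
    "PMID:111": "Outcomes of decompressive brachial plexus surgery",
    "PMID:222": "Brachial plexus injury rehabilitation",
}
retrieved = retrieve_pubmed_ids("brachial plexus", index)
print(validate_citations({"PMID:111"}, retrieved))  # True: cited article was retrieved
print(validate_citations({"PMID:999"}, retrieved))  # False: hallucinated citation rejected
```

A general-purpose chatbot without this constraint can emit plausible-looking citations that pass no such check, which is exactly the failure mode Franklin encountered.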

GF: The problem is that a person trying to search using those tools doesn’t know that. 

LLW: Right, that’s a problem. People tend to trust these things because, as an example, we now have AI-generated answers at the top of Google search, and people have historically trusted Google search to only index documents that people have written, maybe putting the ones that are more trustworthy at the top. But that AI-generated response can be full of misinformation. What’s happening is that some people are losing trust in traditional search as a consequence. It’s going to be hard to build back that trust, even if we improve the technology. 

We’re really at the beginning of this technology. It took a long time for us to develop meaningful resources on the internet — things like Wikipedia or PubMed. Right now, these chatbots are general-purpose tools, but there are already starting to be mixtures of models underneath. And in the future, they’re going to get better at routing people’s queries to the correct expert models, whether that’s to the model hooked up to PubMed or to trusted documents published by various associations related to health care. And I think that’s likely where we’re headed in the next couple of years.  

Trust and reliability issues aside, are there any potential downsides to deploying these tools widely? I can see a potential problem with people using chatbots to self-diagnose when it might be preferable to see a provider. 

LLW: You think of a resource like WebMD: Was that a net positive or net negative? Before its existence, patients really did have a hard time finding any information at all. And of course, there’s limited face time with clinicians where people actually get to ask those questions. So for every patient who wrongly self-diagnoses on WebMD, there are probably also hundreds of patients who found a quick answer to a question. I think that with these models, it’s going to be similar. They’re going to help address some of the gaps in clinical care where we don’t currently have enough resources. 

 

Tiny worm makes for big evolutionary discovery



UC Riverside scientists have described ‘Uncus,’ the oldest ecdysozoan and the first from the Precambrian period



University of California - Riverside

Image: Scott Evans and Ian Hughes excavating a fossil bed at Nilpena National Park. Credit: Mary Droser/UCR



Everyone has a past. That includes the millions of species of insects, arachnids, and nematode worms that make up a major animal group called the Ecdysozoa.

Until recently, details about this group’s most distant past have been elusive. But a UC Riverside-led team has now identified the oldest known ecdysozoan in the fossil record and the only one from the Precambrian period. Their discovery of Uncus dzaugisi, a worm-like creature rarely over a few centimeters in length, is described in a paper published today in Current Biology. 

“Scientists have hypothesized for decades that this group must be older than the Cambrian, but until now its origins have remained enigmatic. This discovery reconciles a major gap between predictions based on molecular data and the lack of described ecdysozoans prior to the rich Cambrian fossil record and adds to our understanding of the evolution of animal life,” said Mary Droser, a distinguished professor of geology at UCR, who led the study.

The ecdysozoans are the largest and most species-rich animal group on Earth, encompassing more than half of all animals. Characterized by their cuticle — a tough external skeleton that is periodically shed — the group comprises three subgroups: nematodes, which are microscopic worms; arthropods, which include insects, spiders, and crustaceans; and scalidophora, an eclectic group of small, scaly marine creatures. 

“Like many modern-day animal groups, ecdysozoans were prevalent in the Cambrian fossil record and we can see evidence of all three subgroups right at the beginning of this period, about 540 million years ago,” said Ian Hughes, a graduate student in marine biology at Harvard University and the paper’s first author. “We know they didn’t just appear out of nowhere, and so the ancestors of all ecdysozoans must have been present during the preceding Ediacaran period.”

DNA-based analyses, used to predict the age of animal groups by comparing them with their closest living relatives, have corroborated this hypothesis. Yet ecdysozoan fossil animals have remained hidden among scores of animal fossils paleontologists have discovered from the Ediacaran Period.

Ediacaran animals, which lived 635-538 million years ago, were ocean dwellers; their remains were preserved as cast-like impressions on the seabed that later hardened to rock. Hughes said uncovering them is a labor-intensive, delicate process that involves peeling back rock layers, flipping them over, dusting them off, and piecing them back together to get “a really nice snapshot of the sea floor.”

This excavation process has only been done at Nilpena Ediacara National Park in South Australia, a site Droser and her team have been working at for 25 years that is known for its beautifully preserved Ediacaran fossils.

“Nilpena is perhaps the best fossil site for understanding early animal evolution in the world because the fossils occur during a period of heightened diversity and we are able to excavate extensive layers of rock that preserve these snapshots,” said Scott Evans, an assistant professor of Earth-Life interactions at Florida State University and co-author of the study. “The layer where we found Uncus is particularly exciting because the sediment grains are so small that we really see all the details of the fossils preserved there.”

While the team didn’t set out to find an early ecdysozoan during their 2018 excavation, they were drawn to a mysterious worm-like impression that they dubbed “fishhook.”

“Sometimes we make dramatic discoveries and sometimes we excavate an entire bed and say ‘hmmm, I’ve been looking at that thing, what do you think?’” Hughes said. “That’s what happened here. We had all sort of noticed this fishhook squiggle on the rock. It was pretty prominent because it was really, really deep.”

After seeing more of the worm-like squiggles, the team paid closer attention, taking note of fishhook’s characteristics.

“Because it was deep, we knew it wasn’t smooshed easily so it must have had a pretty rigid body,” Hughes said. Other defining characteristics include its distinct curvature and the fact that it could move around — seen by trace fossils in the surrounding area. Paul De Ley, an associate professor of nematology at UCR, confirmed its fit as an early nematode and ruled out other worm types.

“At this point we knew this was a new fossil animal and it belonged to the Ecdysozoa,” Hughes said. 

The team called the new animal Uncus, which means “hook” in Latin, noting in the paper its similarities to modern-day nematodes. Hughes said the team was excited to find evidence of what scientists had long predicted: that ecdysozoans existed in the Ediacaran Period.

“It’s also really important for our understanding of what these early animal groups would have looked like and their lifestyle, especially as the ecdysozoans would really come to dominate the marine ecosystem in the Cambrian,” he said.

The paper is titled “An Ediacaran bilaterian with an ecdysozoan affinity from South Australia.” Funding for the research came from NASA.


Image: Uncus fossil from Nilpena Ediacara National Park. The numbers correspond to the coordinates of this fossil on the fossil bed surface. Bottom: 3D laser scans enable the researchers to study the fossils’ shape and curvature. Credit: Droser Lab/UCR

 

Scientists recreate mouse from gene older than animal life


New research sheds light on evolutionary origins of stem cells with groundbreaking experiment to create mouse using ancient genetic tools



Queen Mary University of London

Image: The mouse on the left is a chimera with dark eyes and patches of black fur, a result of stem cells derived from a choanoflagellate Sox gene. The wildtype mouse on the right has red eyes and all white fur. The colour difference is due to genetic markers used to distinguish the stem cells, not a direct effect of the gene itself. Credit: Gao Ya and Alvin Kin Shing Lee, with thanks to the Centre for Comparative Medicine Research (CCMR) for their support.




In a study published in Nature Communications, an international team of researchers has achieved an unprecedented milestone: the creation of mouse stem cells capable of generating a fully developed mouse using genetic tools from a unicellular organism, with which we share a common ancestor that predates animals. This breakthrough reshapes our understanding of the genetic origins of stem cells, offering a new perspective on the evolutionary ties between animals and their ancient single-celled relatives.

In an experiment that sounds like science fiction, Dr Alex de Mendoza of Queen Mary University of London collaborated with researchers from The University of Hong Kong to use a gene found in choanoflagellates, single-celled organisms related to animals, to create stem cells which they then used to give rise to a living, breathing mouse. Choanoflagellates are the closest living relatives of animals, and their genomes contain versions of the genes Sox and POU, known for driving pluripotency — the cellular potential to develop into any cell type — within mammalian stem cells. This unexpected discovery challenges a longstanding belief that these genes evolved exclusively within animals.

“By successfully creating a mouse using molecular tools derived from our single-celled relatives, we’re witnessing an extraordinary continuity of function across nearly a billion years of evolution,” said Dr de Mendoza. "The study implies that key genes involved in stem cell formation might have originated far earlier than the stem cells themselves, perhaps helping pave the way for the multicellular life we see today."

Shinya Yamanaka’s 2012 Nobel Prize-winning work demonstrated that it is possible to obtain stem cells from “differentiated” cells just by expressing four factors, including a Sox (Sox2) and a POU (Oct4) gene. In this new research, through a set of experiments conducted in collaboration with Dr Ralf Jauch’s lab in The University of Hong Kong / Centre for Translational Stem Cell Biology, the team introduced choanoflagellate Sox genes into mouse cells, replacing the native Sox2 gene and achieving reprogramming towards the pluripotent stem cell state. To validate the efficacy of these reprogrammed cells, they were injected into a developing mouse embryo. The resulting chimeric mouse displayed physical traits from both the donor embryo and the lab-induced stem cells, such as black fur patches and dark eyes, confirming that these ancient genes played a crucial role in making stem cells compatible with the animal’s development.

The study traces how early versions of Sox and POU proteins, which bind DNA and regulate other genes, were used by unicellular ancestors for functions that would later become integral to stem cell formation and animal development. "Choanoflagellates don’t have stem cells, they’re single-celled organisms, but they have these genes, likely to control basic cellular processes that multicellular animals probably later repurposed for building complex bodies,” explained Dr de Mendoza.

This novel insight emphasises the evolutionary versatility of genetic tools and offers a glimpse into how early life forms might have harnessed similar mechanisms to drive cellular specialisation long before true multicellular organisms came into being, underscoring the importance of recycling in evolution.

This discovery has implications beyond evolutionary biology, potentially informing new advances in regenerative medicine. By deepening our understanding of how stem cell machinery evolved, scientists may identify new ways to optimise stem cell therapies and improve cell reprogramming techniques for treating diseases or repairing damaged tissue.

"Studying the ancient roots of these genetic tools lets us innovate with a clearer view of how pluripotency mechanisms can be tweaked or optimised," Dr Jauch said, noting that advancements could arise from experimenting with synthetic versions of these genes that might perform even better than native animal genes in certain contexts. 

 

Transforming marine waste and carbonated water into hydrogels via CO2 release behavior



The study investigates how post-gelation CO₂ release rates affect hydrogel properties, informing medical application development



Tokyo University of Science

Image: CO₂-based gelation process: mixing alginate, calcium carbonate, and carbonated water. Carbon dioxide creates the acidic environment necessary for calcium ions to link alginate chains, forming the hydrogel. However, the rapid release of CO₂ after gelation limits calcium ion availability, resulting in reduced crosslinking of the alginate chains. Credit: Reproduced from Teshima et al. with permission from the Royal Society of Chemistry. Image source: https://doi.org/10.1039/D4MA00257A




Hydrogels, which are soft materials made of water-filled, crosslinked polymer networks, have a wide range of uses, from wound dressings to enhancing soil moisture for plant growth. They are formed through a process called gelation, where polymers in a solution are linked together to form a gel. Biopolymers, such as polysaccharides and proteins, often require the addition of acidic agents for this gelation process. However, these agents can remain in the hydrogel, posing risks for biological applications. To address this issue, a new gelation method uses carbon dioxide (CO₂) instead of acidic agents. CO₂ acts as an acidic agent during gelation but escapes into the atmosphere once the hydrogel forms.

A study led by Professor Hidenori Otsuka and Mr. Ryota Teshima from Tokyo University of Science, Japan, investigated the effect of CO₂ release on the properties of hydrogels. Their findings, made available online on June 5, 2024, and published in Issue 16 of the journal Materials Advances on August 21, 2024, provide valuable insights for synthesizing hydrogels suitable for medical uses.

“The degree of crosslinking in hydrogels is typically controlled by 'pre-gelation parameters,' such as polymer and crosslinker concentrations. However, we demonstrate that the crosslinking degree of hydrogels prepared using carbon dioxide as the acidic agent is also influenced by post-gelation conditions,” say Prof. Otsuka and Mr. Teshima.

The researchers synthesized hydrogels called Alg-gels from alginate, a polymer derived from brown seaweed. They mixed alginate with calcium carbonate (CaCO₃) and added carbonated water, resulting in a porous hydrogel where alginate chains were crosslinked by calcium ions. To control the release of CO₂ from the Alg-gels, two samples were prepared: one incubated in a Petri dish, exposing only its top surface to air, and another on a wire mesh, exposing the entire surface. The rate of CO₂ release was monitored using bromothymol blue (BTB), a pH indicator that changes color with acidity (yellow in acidic conditions, green when neutral, and blue in alkaline conditions).

The gel in the Petri dish gradually turned green (neutral) over 60 minutes, indicating slow CO₂ release, and became fully blue after 5 hours. In contrast, the gel on the wire mesh released CO₂ much faster, turning fully blue by 40 minutes and releasing all the CO₂ in just 90 minutes.

The rapid release of CO₂ immediately after gel formation prevented the calcium carbonate from fully dissolving in the solution, leaving fewer calcium ions available to link the polymer chains. “Rapid release of CO2 from the hydrogel after gelation increased the pH of the system and decreased the degree of crosslinking,” explain Prof. Otsuka and Mr. Teshima. When both samples were subjected to a compression test, the stiffness, breaking stress, and energy required to break the disks were higher for those incubated in the Petri dish compared to those on the wire mesh.
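The qualitative trend, faster CO₂ release leading to less crosslinking, can be illustrated with a toy first-order release model. This is not the authors' analysis: the rate constants below are hypothetical, chosen only so that release is nearly complete at roughly the observed times (about 90 minutes on the wire mesh versus about 5 hours in the Petri dish), since the paper reports color-change times rather than kinetic constants:

```python
import math

# Toy first-order CO2 release model: fraction of CO2 remaining f(t) = exp(-k*t).
# Illustrative only; the study reports BTB color-change times, not rate constants.
def fraction_remaining(k_per_min: float, t_min: float) -> float:
    return math.exp(-k_per_min * t_min)

# Hypothetical rate constants, chosen so that under 1% of the CO2 remains near
# the observed completion times: ~90 min on the wire mesh, ~5 h in the Petri dish.
k_mesh = math.log(100) / 90
k_dish = math.log(100) / 300

# At the 40-minute mark the mesh gel has shed far more CO2 than the dish gel,
# consistent with its faster color change and lower final crosslinking.
print(fraction_remaining(k_mesh, 40) < fraction_remaining(k_dish, 40))  # True
```

Under these assumed constants, the mesh gel retains roughly 13% of its CO₂ at 40 minutes versus roughly 54% for the dish gel, mirroring the faster pH rise and reduced calcium-ion availability described above.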

This study enhances our understanding of how CO₂ release after gel formation affects the degree of crosslinking and the mechanical properties of hydrogels, providing insights for creating hydrogels using CO₂. Additionally, the use of alginic acid derived from marine waste can transform waste into high-value hydrogels for medical uses such as tissue engineering, including wound healing and organ regeneration.

 

***

 

Reference                    

DOI: 10.1039/D4MA00257A

 

About The Tokyo University of Science
Tokyo University of Science (TUS) is a well-known and respected university, and the largest science-specialized private research university in Japan, with four campuses in central Tokyo and its suburbs and in Hokkaido. Established in 1881, the university has continually contributed to Japan's development in science through inculcating the love for science in researchers, technicians, and educators.

With a mission of “Creating science and technology for the harmonious development of nature, human beings, and society," TUS has undertaken a wide range of research from basic to applied science. TUS has embraced a multidisciplinary approach to research and undertaken intensive study in some of today's most vital fields. TUS is a meritocracy where the best in science is recognized and nurtured. It is the only private university in Japan that has produced a Nobel Prize winner and the only private university in Asia to produce Nobel Prize winners within the natural sciences field.

Website: https://www.tus.ac.jp/en/mediarelations/

 

About Professor Hidenori Otsuka from Tokyo University of Science
Prof. Hidenori Otsuka completed his Ph.D. from the Department of Chemistry, Tokyo University of Science (TUS) Graduate School, and currently heads his own laboratory at TUS. With more than 100 research publications to his credit, his research focuses mainly on the basics and applications of physical chemistry, especially colloid and surface chemistry.

 

Funding information
This work was supported by Grants-in-Aid for Research Activities from the Masason Foundation [grant numbers: GD14469, GD9675, and GD2825] and Grants-in-Aid for Extracurricular Activities for Students at Tokyo University of Science Parents Associations (Kouyoukai) [grant number: 2019-15].

 

The MIT Press releases workshop report on the future of open access publishing and policy



The MIT Press
Image: Open Access at the MIT Press. Credit: The MIT Press, 2024.




Cambridge, MA (November 18, 2024) – Today, the MIT Press is releasing a comprehensive report that addresses how open access policies shape research and what is needed to maximize their positive impact on the research ecosystem.

The report, entitled "Access to Science & Scholarship 2024: Building an Evidence Base to Support the Future of Open Research Policy," is the outcome of a National Science Foundation-funded workshop held at the D.C. headquarters of the American Association for the Advancement of Science on September 20, 2024.

While open access aims to democratize knowledge, its implementation has been a factor in the consolidation of the academic publishing industry, an explosion in published articles with inconsistent review and quality control, and new costs that may be hard for researchers and universities to bear, with less affluent schools and regions facing the greatest risk. The workshop examined how open access and other open science policies may affect research and researchers in the future, how to measure their impact, and how to address emerging challenges.

The event brought together leading experts to discuss critical issues in open scientific and scholarly publishing. These issues include:

  • The impact of open access policies on the research ecosystem
  • The enduring role of peer review in ensuring research quality
  • The challenges and opportunities of data sharing and curation
  • The evolving landscape of scholarly communications infrastructure

The report identifies key research questions in order to advance open science and scholarship. These include:

  • How can we better model and anticipate the consequences of government policies on public access to science and scholarship?
  • How can research funders support experimentation with new and more equitable business models for scientific publishing?
  • If the dissemination of scholarship is decoupled from peer review and evaluation, who is best suited to perform that evaluation, and how should that process be managed and funded?

“This workshop report is a crucial step in building a data-driven roadmap for the future of open science publishing and policy,” said Dr. Phillip Sharp, Institute Professor and Professor of Biology Emeritus at MIT and faculty lead of the working group behind the workshop and the report. “By identifying key research questions around infrastructure, training, technology, and business models, we aim to ensure that open science practices are sustainable and that they contribute to the highest quality research.”

The full report is available for download here, along with video recordings of the workshop.

About The MIT Press:

The MIT Press is a leading academic publisher committed to advancing knowledge and innovation. It publishes significant books and journals across a wide range of disciplines spanning science, technology, design, humanities, and social science.

For more information, please contact Nick Lindsay at nlindsay@mit.edu.