New tool detects fake, AI-produced scientific articles
Binghamton University researcher develops xFakeSci to root out bogus research
BINGHAMTON, N.Y. -- When ChatGPT and other generative artificial intelligence can produce scientific articles that look real — especially to someone outside that field of research — what’s the best way to figure out which ones are fake?
Ahmed Abdeen Hamed, a visiting research fellow at Binghamton University, State University of New York, has created a machine-learning algorithm he calls xFakeSci that can detect up to 94% of bogus papers — nearly twice as successfully as more common data-mining techniques.
“My main research is biomedical informatics, but because I work with medical publications, clinical trials, online resources and mining social media, I’m always concerned about the authenticity of the knowledge somebody is propagating,” said Hamed, who is part of George J. Klir Professor of Systems Science Luis M. Rocha’s Complex Adaptive Systems and Computational Intelligence Lab. “Biomedical articles in particular were hit badly during the global pandemic because some people were publicizing false research.”
In a new paper published in the journal Scientific Reports, Hamed and collaborator Xindong Wu, a professor at Hefei University of Technology in China, created 50 fake articles for each of three popular medical topics — Alzheimer’s, cancer and depression — and compared them to the same number of real articles on the same topics.
Hamed said when he asked ChatGPT for the AI-generated papers, “I tried to use the exact same keywords that I used to extract the literature from the [National Institutes of Health’s] PubMed database, so we would have a common basis of comparison. My intuition was that there must be a pattern exhibited in the fake world versus the actual world, but I had no idea what this pattern was.”
After some experimentation, he programmed xFakeSci to analyze two major features of how the papers were written. One is the number of bigrams, which are pairs of words that frequently appear together, such as “climate change,” “clinical trials” or “biomedical literature.” The second is how those bigrams are linked to other words and concepts in the text.
“The first striking thing was that the number of bigrams was very low in the fake world, but in the real world, the bigrams were much more rich,” Hamed said. “Also, in the fake world, despite the fact that there were very few bigrams, they were so connected to everything else.”
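The two features described above can be illustrated with a minimal sketch. This is not the published xFakeSci implementation; the tokenization, the bigram count, and the neighbor-set proxy for “connectivity” are all simplified illustrations of the idea:

```python
from collections import Counter

def bigram_features(text):
    """Count distinct bigrams and estimate how widely each bigram's
    words connect to other words in the text (a rough stand-in for
    the 'connectivity' feature the article describes)."""
    words = text.lower().split()
    bigrams = Counter(zip(words, words[1:]))

    # For each word, collect the distinct words appearing next to it.
    neighbors = {}
    for a, b in zip(words, words[1:]):
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)

    # Connectivity of a bigram: how many distinct words its two
    # members appear next to anywhere in the text.
    connectivity = {
        bg: len(neighbors[bg[0]] | neighbors[bg[1]])
        for bg in bigrams
    }
    return bigrams, connectivity

sample = "clinical trials show clinical trials require careful design"
bg, conn = bigram_features(sample)
# bg counts ("clinical", "trials") twice; conn measures how many
# distinct neighbors "clinical" and "trials" have between them.
```

On this picture, a fake paper would show few distinct bigrams but unusually high connectivity scores for the bigrams it does use, while a real paper would show the opposite pattern.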
Hamed and Wu theorize that the writing styles are different because human researchers don’t have the same goals as AIs prompted to produce a piece on a given topic.
“Because ChatGPT is still limited in its knowledge, it tries to convince you by using the most significant words,” Hamed said. “It is not the job of a scientist to make a convincing argument to you. A real research paper reports honestly about what happened during an experiment and the method used. ChatGPT is about depth on a single point, while real science is about breadth.”
To further develop xFakeSci, Hamed plans to expand the range of topics to see if the telltale word patterns hold for other research areas, going beyond medicine to include engineering, other scientific topics and the humanities. He also foresees AIs becoming increasingly sophisticated, so determining what is and isn’t real will get increasingly difficult.
“We are always going to be playing catchup if we don’t design something comprehensive,” he said. “We have a lot of work ahead of us to look for a general pattern or universal algorithm that does not depend on which version of generative AI is used.”
Even though the algorithm catches 94% of AI-generated papers, he added, that means six out of 100 fakes are still getting through: “We need to be humble about what we’ve accomplished. We’ve done something very important by raising awareness.”
Journal
Scientific Reports
Method of Research
Computational simulation/modeling
Article Title
Detection of ChatGPT fake science with the xFakeSci learning algorithm
“Artificial intelligence will play an increasing role in scientific publications”
“May help editors select papers that will garner greater attention”
(Boston)—Artificial intelligence (AI), in various forms, has burst onto the scene in both society and medicine. Its role in medicine is still evolving, but it will undoubtedly assist in the evaluation of images (radiographs, pathology reports, colonoscopy videos) as well as in preparing discharge summaries, consultative evaluations and diagnoses. It may also help achieve the long-awaited goal of precision medicine. In addition, it has played and will continue to play an increasing role in scientific publication in at least two areas: peer review and drafting manuscripts.
According to former editor-in-chief of the Journal of the American Medical Association Howard Bauchner, MD, in the coming years, AI will transform the writing of scientific manuscripts, assist in reviewing them, and help editors select the most impactful papers. “Potentially it may help editors increase the influence of their journals,” says Bauchner, professor of pediatrics at Boston University Chobanian & Avedisian School of Medicine.
In a guest editorial in the European Journal of Emergency Medicine, Bauchner examines how AI could be used by editors. “Given that identifying enough peer reviewers is getting increasingly difficult, editors could use AI to provide an initial ‘score.’ An article determined to have a good score could then be sent for external peer review (with only a cursory review by the editors). For articles with an inadequate score, the editors could still consider them for publication after reviewing them or, depending upon the report, ask the authors to revise the manuscript,” he explains.
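The triage workflow Bauchner outlines can be sketched as a simple routing rule. The scoring scale, threshold value, and routing labels below are illustrative assumptions, not anything proposed in the editorial:

```python
def triage(score, threshold=0.7):
    """Route a manuscript based on a hypothetical AI-assigned quality
    score in [0, 1]. A good score goes out for external peer review
    (after a cursory editor check); an inadequate score stays with
    the editors, who may still publish or request a revision."""
    if score >= threshold:
        return "external peer review"
    return "editor review; possible revision request"

decision = triage(0.85)
```

The value of such a rule, per the editorial, is not replacing reviewers but making the initial sorting of submissions faster and more consistent.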
When AI becomes available to predict citations, which influence a journal’s impact factor, Bauchner questions whether editors should use the information. “First, editors should establish a vision for their journal – what is its mission, and is an individual article consistent with that mission and ‘in scope’? Second, editors need to carefully consider the role of value-added pieces: how do they enhance the value of the journal? Third, editors need to maximize the reach of their journal, particularly on social media. Journals are communication networks. Fourth, editors need to understand the meaning of open science, including open peer review, data sharing, and open access. After an editor has thought through these issues, then yes, having AI assist in determining how much an article would be cited – assuming the results of the study are valid and not simply meant to attract attention – is reasonable.”
Bauchner points out that AI will not replace editors or peer reviewers, but rather will provide additional information about the quality of a manuscript, making triage faster and more objective. “AI will play an increasing role in scientific publication – particularly in peer review and drafting of manuscripts. Given that in both areas there are important challenges, investigators, peer reviewers, editors, and funders should welcome the assistance that AI will provide,” he adds.
Journal
European Journal of Emergency Medicine
Method of Research
Commentary/editorial
Article Title
Artificial intelligence and the future of scientific publication
IOP Publishing partners with Hum to explore data and AI use to enhance content strategy
IOP Publishing
IOP Publishing, the publishing arm of the Institute of Physics, is partnering with Hum, the cutting-edge data and AI company, to explore ways to further enhance its author experience and develop its outstanding publishing services.
This partnership underscores IOP Publishing’s commitment to delivering personalised, relevant, and impactful content experiences across its portfolio of over 100 open access and hybrid journals, ebooks, science news, and other resources.
Hum’s advanced AI platforms analyse vast amounts of audience and content data, uncovering actionable insights that will help IOP Publishing by:
- Targeting outreach and omnichannel campaigns to highly specific audience segments
- Developing marketing and outreach strategies for authors and reviewers
- Identifying emerging topics for future content development
- Surfacing the most relevant content recommendations for individual readers
“At IOP Publishing, deepening our relationship with the scientific community is at the heart of everything we do,” said Miriam Maus, Chief Publishing Officer. “Partnering with Hum provides technology that delivers insights into our content and enhances engagement with authors and readers. We’re looking forward to creating more opportunities to advance science through the global exchange of ideas.”
“We’re thrilled to be able to help IOP Publishing unlock the full potential of its audience and content, driving greater engagement, reach, and impact,” said Tim Barton, Hum CEO. “IOPP has been pushing the boundaries on what researchers, authors, readers, and reviewers expect and is committed to advancing that experience. Hum is eager to help them continue to do so.”