AI
Researchers develop AI model to better predict which drugs may cause birth defects
Data harnessed to identify previously unknown associations between genes, congenital disabilities, and drugs
Peer-Reviewed Publication
New York, NY (July 17, 2023)—Data scientists at the Icahn School of Medicine at Mount Sinai in New York and colleagues have created an artificial intelligence model that may more accurately predict which existing medicines, not currently classified as harmful, may in fact lead to congenital disabilities.
The model, or “knowledge graph,” described in the July 17 issue of the Nature journal Communications Medicine [DOI: 10.1038/s43856-023-00329-2], also has the potential to predict which pre-clinical compounds may harm the developing fetus. The study is the first known of its kind to use knowledge graphs to integrate various data types to investigate the causes of congenital disabilities.
Birth defects are abnormalities that affect about 1 in 33 births in the United States. They can be functional or structural and are believed to result from various factors, including genetics. However, the causes of most of these disabilities remain unknown. Certain substances found in medicines, cosmetics, food, and environmental pollutants can potentially lead to birth defects when exposure occurs during pregnancy.
“We wanted to improve our understanding of reproductive health and fetal development, and importantly, warn about the potential of new drugs to cause birth defects before these drugs are widely marketed and distributed,” says Avi Ma’ayan, PhD, Professor, Pharmacological Sciences, and Director of the Mount Sinai Center for Bioinformatics at Icahn Mount Sinai, and senior author of the paper. “Although identifying the underlying causes is a complicated task, we offer hope that through complex data analysis like this that integrates evidence from multiple sources, we will be able, in some cases, to better predict, regulate, and protect against the significant harm that congenital disabilities could cause.”
The researchers gathered knowledge across several datasets on birth-defect associations noted in published work, including those produced by NIH Common Fund programs, to demonstrate how integrating data from these resources can lead to synergistic discoveries. In particular, the combined data cover the known genetics of reproductive health, the classification of medicines by their risk during pregnancy, and the effects of drugs and pre-clinical compounds on the biological mechanisms inside human cells.
Specifically, the data included studies on genetic associations, drug- and preclinical-compound-induced gene expression changes in cell lines, known drug targets, genetic burden scores for human genes, and placental crossing scores for small molecule drugs.
Importantly, using the resulting knowledge graph, named ReproTox-KG, with semi-supervised learning (SSL), the research team prioritized 30,000 preclinical small-molecule drugs for their potential to cross the placenta and induce birth defects. SSL is a branch of machine learning that uses a small amount of labeled data to guide predictions for a much larger body of unlabeled data. In addition, by analyzing the topology of ReproTox-KG, more than 500 birth-defect/gene/drug cliques were identified that could explain the molecular mechanisms underlying drug-induced birth defects. In graph theory terms, a clique is a subset of a graph's nodes in which every node is directly connected to every other node in the subset.
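In concrete terms, each reported clique is a birth defect, a gene, and a drug that are all pairwise linked in the graph. A minimal Python sketch of finding such three-node cliques is below; the node names and edges are made up for illustration and are not taken from ReproTox-KG or the paper's code.

```python
from itertools import combinations

# Toy knowledge graph. Node names are hypothetical; an edge means an
# association was found between the two entities.
nodes = {
    "neural_tube_defect": "birth_defect",
    "GENE_A": "gene",
    "drug_x": "drug",
    "drug_y": "drug",
}
edges = {
    frozenset({"neural_tube_defect", "GENE_A"}),  # gene linked to defect
    frozenset({"GENE_A", "drug_x"}),              # drug targets gene
    frozenset({"neural_tube_defect", "drug_x"}),  # drug linked to defect
    frozenset({"GENE_A", "drug_y"}),              # drug_y targets gene only
}

def defect_gene_drug_cliques(nodes, edges):
    """Return 3-cliques containing one birth defect, one gene, and one drug.

    A clique requires all pairwise connections, so each candidate triple
    must have all three of its edges present in the graph.
    """
    by_type = {}
    for name, kind in nodes.items():
        by_type.setdefault(kind, []).append(name)
    cliques = []
    for d in by_type.get("birth_defect", []):
        for g in by_type.get("gene", []):
            for x in by_type.get("drug", []):
                if all(frozenset(pair) in edges
                       for pair in combinations((d, g, x), 2)):
                    cliques.append((d, g, x))
    return cliques

print(defect_gene_drug_cliques(nodes, edges))
# → [('neural_tube_defect', 'GENE_A', 'drug_x')]
```

Note that drug_y is excluded: it connects to the gene but not to the defect, so the three nodes are not all pairwise connected and do not form a clique.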
The investigators caution that the study's findings are preliminary and that further experiments are needed for validation.
Next, the investigators plan to use a similar graph-based approach for other projects focusing on the relationship between genes, drugs, and diseases. They also aim to use the processed dataset as training materials for courses and workshops on bioinformatics analysis. In addition, they plan to extend the study to consider more complex data, such as gene expression from specific tissues and cell types collected at multiple stages of development.
“We hope that our collaborative work will lead to a new global framework to assess potential toxicity for new drugs and explain the biological mechanisms by which some drugs, known to cause birth defects, may operate. It’s possible that at some point in the future, regulatory agencies such as the U.S. Food and Drug Administration and the U.S. Environmental Protection Agency may use this approach to evaluate the risk of new drugs or other chemical applications,” says Dr. Ma’ayan.
The paper is titled “Toxicology Knowledge Graph for Structural Birth Defects.”
Additional co-authors are John Erol Evangelista (Icahn Mount Sinai), Daniel J. B. Clarke (Icahn Mount Sinai), Zhuorui Xie (Icahn Mount Sinai), Giacomo B. Marino, (Icahn Mount Sinai), Vivian Utti (Icahn Mount Sinai), Sherry L. Jenkins (Icahn Mount Sinai), Taha Mohseni Ahooyi (Children’s Hospital of Philadelphia), Cristian G. Bologa (University of New Mexico), Jeremy J. Yang (University of New Mexico), Jessica L. Binder (University of New Mexico), Praveen Kumar (University of New Mexico), Christophe G. Lambert (University of New Mexico), Jeffrey S. Grethe (University of California San Diego), Eric Wenger (Children’s Hospital of Philadelphia), Deanne Taylor, (Children’s Hospital of Philadelphia), Tudor I. Oprea (Children’s Hospital of Philadelphia), and Bernard de Bono (University of Auckland, New Zealand).
The project was supported by National Institutes of Health grants OT2OD030160, OT2OD030546, OT2OD032619, and OT2OD030162.
-####-
About the Icahn School of Medicine at Mount Sinai
The Icahn School of Medicine at Mount Sinai is internationally renowned for its outstanding research, educational, and clinical care programs. It is the sole academic partner for the eight member hospitals* of the Mount Sinai Health System, one of the largest academic health systems in the United States, providing care to a large and diverse patient population.
Ranked 14th nationwide in National Institutes of Health (NIH) funding and in the 99th percentile in research dollars per investigator according to the Association of American Medical Colleges, Icahn Mount Sinai has a talented, productive, and successful faculty. More than 3,000 full-time scientists, educators, and clinicians work within and across 44 academic departments and 36 multidisciplinary institutes, a structure that facilitates tremendous collaboration and synergy. Our emphasis on translational research and therapeutics is evident in such diverse areas as genomics/big data, virology, neuroscience, cardiology, geriatrics, as well as gastrointestinal and liver diseases.
Icahn Mount Sinai offers highly competitive MD, PhD, and Master’s degree programs, with current enrollment of approximately 1,300 students. It has the largest graduate medical education program in the country, with more than 2,000 clinical residents and fellows training throughout the Health System. In addition, more than 550 postdoctoral research fellows are in training within the Health System.
A culture of innovation and discovery permeates every Icahn Mount Sinai program. Mount Sinai’s technology transfer office, one of the largest in the country, partners with faculty and trainees to pursue optimal commercialization of intellectual property to ensure that Mount Sinai discoveries and innovations translate into healthcare products and services that benefit the public.
Icahn Mount Sinai’s commitment to breakthrough science and clinical care is enhanced by academic affiliations that supplement and complement the School’s programs.
Through the Mount Sinai Innovation Partners (MSIP), the Health System facilitates the real-world application and commercialization of medical breakthroughs made at Mount Sinai. Additionally, MSIP develops research partnerships with industry leaders such as Merck & Co., AstraZeneca, Novo Nordisk, and others.
The Icahn School of Medicine at Mount Sinai is located in New York City on the border between the Upper East Side and East Harlem, and classroom teaching takes place on a campus facing Central Park. Icahn Mount Sinai’s location offers many opportunities to interact with and care for diverse communities. Learning extends well beyond the borders of our physical campus, to the eight hospitals of the Mount Sinai Health System, our academic affiliates, and globally.
-------------------------------------------------------
* Mount Sinai Health System member hospitals: The Mount Sinai Hospital; Mount Sinai Beth Israel; Mount Sinai Brooklyn; Mount Sinai Morningside; Mount Sinai Queens; Mount Sinai South Nassau; Mount Sinai West; and New York Eye and Ear Infirmary of Mount Sinai.
JOURNAL
Communications Medicine
METHOD OF RESEARCH
Data/statistical analysis
SUBJECT OF RESEARCH
People
ARTICLE TITLE
Toxicology Knowledge Graph for Structural Birth Defects
ARTICLE PUBLICATION DATE
17-Jul-2023
ChatGPT’s responses to people’s healthcare-related queries are nearly indistinguishable from those provided by humans, new study reveals
ChatGPT’s responses to people’s healthcare-related queries are nearly indistinguishable from those provided by humans, a new study from NYU Tandon School of Engineering and Grossman School of Medicine reveals, suggesting the potential for chatbots to be effective allies to healthcare providers’ communications with patients.
An NYU research team presented 392 people aged 18 and above with ten patient questions and responses, with half of the responses generated by a human healthcare provider and the other half by ChatGPT.
Participants were asked to identify the source of each response and rate their trust in the ChatGPT responses using a 5-point scale from completely untrustworthy to completely trustworthy.
The study found people have limited ability to distinguish between chatbot- and human-generated responses. On average, participants correctly identified chatbot responses 65.5% of the time and provider responses 65.1% of the time, with ranges of 49.0% to 85.7% across different questions. Results were consistent regardless of respondents’ demographic characteristics.
The study found participants mildly trust chatbots’ responses overall (3.4 average score), with trust declining as the health-related complexity of the task increased. Logistical questions (e.g., scheduling appointments, insurance questions) had the highest trust rating (3.94 average score), followed by preventative care (e.g., vaccines, cancer screenings; 3.52 average score). Diagnostic and treatment advice had the lowest trust ratings (2.90 and 2.89, respectively).
According to the researchers, the study highlights the possibility that chatbots can assist in patient-provider communication, particularly for administrative tasks and common chronic disease management. Further research is needed, however, on chatbots taking on more clinical roles. Providers should remain cautious and exercise critical judgment when curating chatbot-generated advice, given the limitations and potential biases of AI models.
The study, "Putting ChatGPT’s Medical Advice to the (Turing) Test: Survey Study," is published in JMIR Medical Education. The research team consists of NYU Tandon Professor Oded Nov, NYU Grossman medical student Nina Singh and Grossman Professor Devin M. Mann.
JOURNAL
JMIR Medical Education
Comparison of history of present illness summaries generated by a chatbot and senior internal medicine residents
JAMA Internal Medicine
Peer-Reviewed Publication
About The Study: History of present illness summaries generated by a chatbot or written by senior internal medicine residents were graded similarly by internal medicine attending physicians. These findings underscore the potential of chatbots to aid clinicians with medical documentation.
Authors: Ashwin Nayak, M.D., M.S., of Stanford University in Stanford, California, is the corresponding author.
(doi:10.1001/jamainternmed.2023.2561)
Editor’s Note: Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.
# # #
JOURNAL
JAMA Internal Medicine
Displacement or complement? HKUST researchers reveal mixed-bag responses in human interaction study with AI
Artificial intelligence (AI) is all the rage lately in the public eye. How AI can be incorporated to the benefit of everyday life, despite its rapid development, remains an elusive question that deserves scientists’ attention. While in theory AI can replace, or even displace, human beings from their positions, the challenge remains how different industries and institutions can take advantage of this technological advancement and not drown in it.
Recently, a team of researchers at the Hong Kong University of Science and Technology (HKUST) conducted an ambitious study of AI applications on the education front, examining how AI could enhance grading while observing human participants’ behavior in the presence of a computerized companion. They found that teachers were generally receptive to AI’s input - until the two sides argued over who should have the final say. This closely resembles how human beings interact with one another when a newcomer forays into existing territory.
The research was conducted by HKUST Department of Computer Science and Engineering PhD candidate Chengbo Zheng and four of his teammates under the supervision of Associate Professor Xiaojuan Ma. They developed an AI group member named AESER (Automated Essay ScorER) and separated twenty English teachers into ten groups to investigate AESER’s impact in a group discussion setting, where the AI would contribute to opinion deliberation, ask and answer questions, and even vote on the final decision. In this study, designed akin to the controlled “Wizard of Oz” research method, AESER’s contributions were produced jointly by a deep learning model and a human researcher; AESER then exchanged views and conducted discussions with the other participants in an online meeting room.
While the team expected AESER to promote objectivity and provide novel perspectives that would otherwise be overlooked, potential challenges soon emerged. First, there was the risk of conformity: the AI’s engagement could quickly create a majority that thwarted discussion. Second, views offered by AESER were found to be rigid, even stubborn, which frustrated participants who found that an argument could never be “won”. Many also did not think the AI’s input should be given equal weight, seeing it as better suited to the role of an assistant to actual human work.
"At this stage, AI is deemed somewhat 'stubborn' by human collaborators, for good and bad,” noted Prof. Ma. “On the one hand, AI is stubborn, so it does not fear to express its opinions frankly and openly. On the other, human collaborators feel disengaged when they cannot meaningfully persuade AI to change its view. Humans hold varying attitudes towards AI: some consider it a single intelligent entity, while others regard it as the voice of collective intelligence that emerges from big data. Discussions about issues such as authority and bias thus arise.”
The immediate next step for the team involves expanding its scope to gather more quantitative data, which will provide more measurable and precise insights into how AI impacts group decision-making. They are also looking to incorporate large language models (LLMs) such as ChatGPT into the study, which could bring new insights and perspectives to group discussions.
Their study was presented at the ACM Conference on Human Factors in Computing Systems (CHI) in April 2023.
Xiaojuan MA, Associate Professor of the Department of Computer Science and Engineering (CSE), HKUST.
CREDIT
HKUST
METHOD OF RESEARCH
Experimental study
SUBJECT OF RESEARCH
People
ARTICLE TITLE
Competent but Rigid: Identifying the Gap in Empowering AI to Participate Equally in Group Decision-Making