New computational process could help condense decades of disease biology research into days
Image: Katharine White, the Clare Boothe Luce Assistant Professor. Credit: Photo by Peter Ringenberg/University of Notre Dame
At 10 one-millionths of a meter wide, a single human cell is tiny. But something even smaller exerts an enormous influence on everything a cell does: proton concentration, measured as pH. On the microscopic level, pH-dependent structures regulate cell movement and division. Altered pH response can accelerate the development of cancers and neurodegenerative diseases such as Alzheimer’s and Huntington’s.
Researchers hope that pinpointing pH-sensitive structures in proteins will help them determine how proteins respond to pH changes in normal and diseased cells alike and, ultimately, design drugs to treat these diseases.
Now, in a new study out today in Science Signaling, researchers at the University of Notre Dame present a computational process that can scan hundreds of proteins in a few days, screening for pH-sensitive protein structures.
“Before even picking up a pipette or running a single experiment, we can predict which proteins are sensitive to these pH changes, which proteins actually drive these critical processes like division, migration, cancer development and neurodegenerative disease development,” said Katharine White, the Clare Boothe Luce Assistant Professor in the Department of Chemistry and Biochemistry. “No more searching for the needle in the haystack.”
Determining exactly how pH changes affect these behavior-driving proteins at the molecular level has been a challenge because researchers must laboriously test proteins in a signaling pathway for pH sensitivity one at a time. Across biology, only 70 cytoplasmic proteins have been confirmed as pH-sensitive, though researchers hypothesize that there are many more, and of those, the molecular mechanisms of only 20 are known.
The new study, supported by funding from the National Science Foundation and the National Institutes of Health, developed and validated a modular, computational pipeline that predicts the location of pH-sensitive structures based on existing structural and experimental data.
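The paper describes the pipeline's actual scoring and validation in detail; purely as a hedged illustration of the general idea, a screen of this kind might flag ionizable residues whose predicted side-chain pKa falls near physiological pH and then group spatially close hits into candidate networks. The `Residue` records, pKa window, distance cutoff, and toy data below are assumptions for illustration, not the authors' code.

```python
from dataclasses import dataclass
from itertools import combinations
import math

# Illustrative only: residue pKa values would come from a structure-based
# predictor, and coordinates from a solved or modeled structure.
@dataclass
class Residue:
    name: str     # e.g., "HIS58"
    pka: float    # predicted side-chain pKa
    xyz: tuple    # representative atom coordinates (angstroms)

def is_ph_sensitive(res, lo=6.3, hi=7.8):
    """Flag residues that titrate near the physiological pH range (assumed window)."""
    return lo <= res.pka <= hi

def candidate_networks(residues, cutoff=8.0):
    """Group flagged residues whose side chains sit within `cutoff` angstroms."""
    flagged = [r for r in residues if is_ph_sensitive(r)]
    # Build adjacency by distance, then collect connected components.
    adj = {r.name: set() for r in flagged}
    for a, b in combinations(flagged, 2):
        if math.dist(a.xyz, b.xyz) <= cutoff:
            adj[a.name].add(b.name)
            adj[b.name].add(a.name)
    seen, networks = set(), []
    for r in flagged:
        if r.name in seen:
            continue
        stack, group = [r.name], set()
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            group.add(cur)
            stack.extend(adj[cur] - seen)
        networks.append(sorted(group))
    return networks

# Toy input: two nearby residues form one candidate network; a distant one stands alone.
demo = [
    Residue("HIS58",  6.9, (10.0, 4.2, 3.1)),
    Residue("GLU60",  7.2, (12.5, 5.0, 2.8)),
    Residue("HIS201", 6.5, (40.1, 22.3, 8.7)),
    Residue("ASP12",  3.9, (11.0, 4.8, 3.0)),  # titrates far from pH 7; ignored
]
print(candidate_networks(demo))  # [['GLU60', 'HIS58'], ['HIS201']]
```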
In the process of developing the pipeline, White’s research group predicted and validated the pH sensitivity of a distinctive binding module known as the Src homology 2 (SH2) domain, which appears in proteins crucial for cell signaling, immune response and development, as well as the pH-dependent function of c-Src, an intensively studied enzyme that is activated in many cancers.
“These proteins are central to cell regulation in addition to being mutated in certain cancers, and in addition to showing that they are pH-sensitive, we’ve also found exactly where on the protein the pH regulation is occurring,” explained Papa Kobina Van Dyck, the lead study author and a recent doctoral graduate in biophysics. “We’ve managed to condense 25 years of work into a few weeks.”
“In addition to cancer and neurodegeneration, pH dynamics are associated with diabetes, autoimmune disorders and traumatic brain injury,” White said. “Our pipeline is a powerful tool for understanding and, ultimately, designing treatments for these conditions, with the potential to transform the field.”
To read the complete news story, visit research.nd.edu.
Journal
Science Signaling
Article Title
Ionizable networks mediate pH-dependent allostery in the SH2 domain–containing signaling proteins SHP2 and SRC
Article Publication Date
11-Nov-2025
Omni-modal language models: Paving the way toward artificial general intelligence
ELSP
Image: Omni-modal language models integrate modality alignment, semantic fusion, and joint representation to enable unified perception and reasoning across text, image, and audio modalities. Credit: Zheyun Qin & Lu Chen / Shandong University & Shandong Jianzhu University
The survey “A Survey on Omni-Modal Language Models” offers a systematic overview of the technological evolution, structural design, and performance evaluation of omni-modal language models (OMLMs). The work highlights how OMLMs enable unified perception, reasoning, and generation across modalities, contributing to the ongoing progress toward Artificial General Intelligence (AGI).
Recently, Lu Chen, a master’s student at the School of Computer and Artificial Intelligence, Shandong Jianzhu University, in collaboration with Dr. Zheyun Qin, a postdoctoral researcher at the School of Computer Science and Technology, Shandong University, published a comprehensive review entitled “A Survey on Omni-Modal Language Models” in the journal AI Plus.
The paper provides an in-depth analysis of the core technological evolution, representative architectures, and multi-level evaluation frameworks of OMLMs, a new generation of AI systems that integrate and reason across multiple modalities, including text, image, audio, and video.
Unlike traditional multimodal systems dominated by a single input form, OMLMs achieve modality alignment, semantic fusion, and joint representation learning, enabling dynamic collaboration among modalities within a unified semantic space. This paradigm allows end-to-end task processing—from perception to reasoning and generation—bringing AI one step closer to human-like cognition.
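As a minimal sketch of the shared semantic space described above, and not a reconstruction of any specific architecture from the survey, modality-specific projections can map text, image, and audio features into one embedding dimension so that a single transformer can attend jointly across them. The encoder dimensions, layer counts, and random inputs below are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinyOmniFusion(nn.Module):
    """Toy illustration: project each modality into a shared space, then
    fuse the concatenated token sequence with a small transformer encoder."""
    def __init__(self, d_text=300, d_image=512, d_audio=128, d_shared=256):
        super().__init__()
        # Modality alignment: one linear projection per modality.
        self.text_proj = nn.Linear(d_text, d_shared)
        self.image_proj = nn.Linear(d_image, d_shared)
        self.audio_proj = nn.Linear(d_audio, d_shared)
        # Semantic fusion: joint self-attention over all modality tokens.
        layer = nn.TransformerEncoderLayer(d_model=d_shared, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, text, image, audio):
        tokens = torch.cat([
            self.text_proj(text),    # (batch, n_text, d_shared)
            self.image_proj(image),  # (batch, n_image, d_shared)
            self.audio_proj(audio),  # (batch, n_audio, d_shared)
        ], dim=1)
        return self.fusion(tokens)   # joint representation, (batch, n_total, d_shared)

# Usage with random features standing in for real per-modality encoder outputs.
model = TinyOmniFusion()
out = model(torch.randn(2, 16, 300), torch.randn(2, 49, 512), torch.randn(2, 32, 128))
print(out.shape)  # torch.Size([2, 97, 256])
```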
The study also introduces lightweight adaptation strategies, such as modality pruning and adaptive scheduling, to improve deployment efficiency in real-time medical and industrial scenarios. Furthermore, it explores domain-specific applications of OMLMs in healthcare, education, and industrial quality inspection, demonstrating their versatility and scalability.
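The survey discusses modality pruning and adaptive scheduling only at a high level; one hedged interpretation, reusing the toy model from the sketch above, is a wrapper that simply skips the projections for modalities absent from a given request.

```python
def fuse_available(model, text=None, image=None, audio=None):
    """Hypothetical pruning wrapper: only run projections for supplied modalities."""
    parts = []
    if text is not None:
        parts.append(model.text_proj(text))
    if image is not None:
        parts.append(model.image_proj(image))
    if audio is not None:
        parts.append(model.audio_proj(audio))
    return model.fusion(torch.cat(parts, dim=1))

# e.g., an audio-free request skips the audio branch entirely.
out = fuse_available(model, text=torch.randn(2, 16, 300), image=torch.randn(2, 49, 512))
print(out.shape)  # torch.Size([2, 65, 256])
```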
“Omni-modal models represent a paradigm shift in artificial intelligence,” said Lu Chen, the first author of the paper. “By integrating perception, understanding, and reasoning within a unified framework, they bring AI closer to the characteristics of human cognition.”
Corresponding author Dr. Zheyun Qin added: “Our survey not only summarizes the current progress of omni-modal research but also provides forward-looking insights into structural flexibility and efficient deployment.”
This work offers a comprehensive reference for researchers and practitioners in the field of multimodal intelligence and contributes to the convergence of large language models and multimodal perception technologies.
This paper was published in AI Plus (Chen L., Mu J., Wang J., Kang X., Xi X., Qin Z., A Survey on Omni-Modal Language Models, AI Plus, 2026, 1:0001. DOI: 10.55092/aiplus20260001).
Method of Research
Literature review
Subject of Research
Not applicable
Article Title
A survey on omni-modal language models
Article Publication Date
6-Nov-2025
New study finds generative AI can brainstorm objectives but needs human expertise for decision quality
CATONSVILLE, Md., Nov. 11, 2025 – A new peer-reviewed study in the INFORMS journal Decision Analysis finds that while generative AI (GenAI) can help define viable objectives for organizational and policy decision-making, the overall quality of those objectives falls short unless humans intervene.
In the field of decision analysis, defining objectives is a foundational step. Before you can evaluate options, allocate resources or design policies, you need to identify what you’re trying to achieve.
The research underscores that AI tools are valuable brainstorming partners, but sound decision analysis still requires a “human in the loop.”
The study, “ChatGPT vs. Experts: Can GenAI Develop High-Quality Organizational and Policy Objectives?” was authored by Jay Simon of American University and Johannes Ulrich Siebert of Management Center Innsbruck.
The researchers compared objectives generated by GenAI tools—including GPT-4o, Claude 3.7, Gemini 2.5 and Grok-2—to objectives created by professional decision analysts in six previously published Decision Analysis studies. Each GenAI-generated set was rated across nine key criteria from value-focused thinking (VFT), such as completeness, decomposability and redundancy.
They found that while GenAI frequently produced individually reasonable objectives, the sets as a whole were incomplete, redundant and often included “means objectives” despite explicit instructions to avoid them. “In short, AI can list what might matter, but it cannot yet distinguish what truly matters,” the authors wrote.
“Both lists are better than most individuals could create. However, neither list should be used for a quality decision analysis, as you should only include the fundamental objectives in explicitly evaluating alternatives,” said Ralph Keeney, a pioneer of value-focused thinking, in response to two AI-produced lists of objectives in the study.
To improve GenAI output, the researchers tested several prompting strategies, including chain-of-thought reasoning and expert critique-and-revise methods. When both techniques were combined, the AI’s results significantly improved—producing smaller, more focused and more logically structured sets of objectives.
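The paper reports the exact prompting protocols; the sketch below only illustrates the general shape of combining a chain-of-thought draft with a critique-and-revise pass, using a placeholder `llm()` function rather than any particular vendor's API. The prompt wording and helper names are illustrative assumptions.

```python
def llm(prompt: str) -> str:
    """Placeholder for a call to a chat model (GPT-4o, Claude, Gemini, Grok, etc.)."""
    raise NotImplementedError("wire up your provider's client here")

def generate_objectives(decision_context: str) -> str:
    # Stage 1: chain-of-thought draft of fundamental objectives.
    draft = llm(
        "Think step by step about what fundamentally matters in this decision, "
        "then list fundamental objectives only (no means objectives).\n\n"
        f"Decision context: {decision_context}"
    )
    # Stage 2: expert-style critique of the draft against VFT-style criteria.
    critique = llm(
        "Acting as a decision analyst, critique this list of objectives for "
        "completeness, redundancy, decomposability, and mixed-in means objectives:\n\n"
        + draft
    )
    # Stage 3: revise the draft to address the critique.
    revised = llm(
        "Revise the objective list to address every point in the critique.\n\n"
        f"Original list:\n{draft}\n\nCritique:\n{critique}"
    )
    return revised
```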
“Generative AI performs well on several criteria,” said Simon. “But it still struggles with producing coherent and nonredundant sets of objectives. Human decision analysts are essential to refine and validate what the AI produces.”
Siebert added, “Our findings make clear that GenAI should augment, not replace, expert judgment. When humans and AI work together, they can leverage each other’s strengths for better decision making.”
The study concludes with a four-step hybrid model for decision-makers that integrates GenAI brainstorming with expert refinement to ensure the objectives used in analysis are essential, decomposable and complete.
Read the study here.
About INFORMS and Decision Analysis
INFORMS is the world’s largest association for professionals and students in operations research, AI, analytics and data science and related disciplines. It serves as a global authority advancing cutting-edge practices and fostering an interdisciplinary community of innovation.
Decision Analysis, a leading journal published by INFORMS, features research on modeling and supporting decision-making under uncertainty. INFORMS empowers its community to improve organizational performance and drive data-driven decision making through its journals, conferences and resources.
Learn more at www.informs.org or @informs.
Journal
Decision Analysis
Subject of Research
People
Article Title
ChatGPT vs. Experts: Can GenAI Develop High-Quality Organizational and Policy Objectives?
Article Publication Date
11-Nov-2025
Insilico and Lilly enter a research and licensing collaboration to advance AI-driven drug discovery
Insilico Medicine
Cambridge, MA, Nov. 10, 2025 — Insilico Medicine (“Insilico”), a clinical-stage generative artificial intelligence (AI)-driven drug discovery company, announced a research collaboration with Eli Lilly (“Lilly”) under which the two parties will combine Insilico’s state-of-the-art Pharma.AI platforms with Lilly’s development and disease expertise to jointly discover and advance innovative therapies.
Insilico will utilize its validated Pharma.AI platform and deep drug discovery expertise to generate, design, and optimize candidate compounds against targets defined under the agreement. Insilico is eligible to receive over $100 million, including an upfront payment and milestone payments, as well as tiered royalties on net sales upon commercialization of any resulting drug products.
“We are delighted to collaborate with Lilly, a global leader in the pharmaceutical industry, renowned for its commitment to medical innovation,” said Alex Zhavoronkov, PhD, Founder and Co-CEO of Insilico Medicine. “Lilly has been a valued user of our Pharma.AI software suite, and this expanded collaboration further recognizes Insilico’s AI-driven drug discovery capabilities while strengthening our longstanding partnership. By joining forces, we are accelerating the development of transformative therapies to address urgent patient needs worldwide.”
The collaboration represents a further deepening of the partnership between the two companies, which originated with an AI-based software licensing agreement in 2023.
Harnessing state-of-the-art AI and automation technologies, Insilico has significantly improved the efficiency of preclinical drug development. While traditional early-stage drug discovery typically requires 3 to 6 years, Insilico nominated 20 preclinical candidates from 2021 to 2024, achieving an average turnaround from project initiation to preclinical candidate (PCC) nomination of just 12 to 18 months per program, with only 60 to 200 molecules synthesized and tested in each.
About Insilico Medicine
Insilico Medicine, a leading global AI-driven biotech company, utilizes its proprietary Pharma.AI platform and cutting-edge automated laboratory to accelerate drug discovery and advance innovations in life sciences research. By integrating AI and automation technologies with deep in-house drug discovery capabilities, Insilico is delivering innovative drug solutions for unmet needs including fibrosis, oncology, immunology, pain, and obesity and metabolic disorders. Additionally, Insilico extends the reach of Pharma.AI across diverse industries, such as advanced materials, agriculture, nutritional products and veterinary medicine. For more information, please visit www.insilico.com