It’s possible that I shall make an ass of myself. But in that case one can always get out of it with a little dialectic. I have, of course, so worded my proposition as to be right either way. (K. Marx, Letter to F. Engels on the Indian Mutiny)
Monday, March 02, 2026
Small models, big insights into vision
College of Engineering, Carnegie Mellon University
Understanding how the brain processes what we see is one of the central questions in neuroscience. Our visual system is incredibly powerful, able to recognize faces, objects, and scenes with ease, yet the details of how individual neurons respond to images remain complex and difficult to study. A new study published in Nature shows that it is possible to capture these responses using models that are both highly accurate and far simpler than previous approaches.
The team began with a large computer model designed to predict how neurons in the visual cortex of non-human subjects respond to images. While this model was very precise, it was also enormous, with millions of parameters, making it almost as hard to understand as the brain itself. Using advanced machine learning techniques, the researchers compressed this model, creating smaller versions that were thousands of times simpler while still predicting neural responses with high accuracy. These compact models allowed the team to examine the inner workings of the visual system in a way that was previously impossible.
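The compression step described above can be pictured as a form of knowledge distillation: training a much smaller "student" model to reproduce the predictions of a large "teacher." The sketch below is purely illustrative and is not the paper's actual method; the random teacher network, the linear student, and all sizes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "teacher": a random nonlinear network predicting a neuron's
# response to 100-pixel images (a stand-in for the huge predictive model).
W1 = rng.normal(size=(100, 256)) / 10.0  # scaled so tanh is not saturated
W2 = rng.normal(size=(256, 1))
teacher = lambda imgs: np.tanh(imgs @ W1) @ W2

# "Distillation": fit a far smaller student (here, a single linear filter)
# to reproduce the teacher's predictions on a batch of random images.
imgs = rng.normal(size=(5000, 100))
targets = teacher(imgs)
w, *_ = np.linalg.lstsq(imgs, targets, rcond=None)  # 100 params vs ~26,000

# How well does the tiny student track the big teacher?
pred = imgs @ w
r = np.corrcoef(pred.ravel(), targets.ravel())[0, 1]
print(f"student/teacher correlation: {r:.2f}")
```

Even this toy linear student tracks the nonlinear teacher closely, which conveys the article's point: a drastically smaller model can preserve most of a large model's predictive behavior while being simple enough to inspect.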
“This work shows that we don’t need massive, complicated networks to understand what individual neurons are doing,” explained Matt Smith, professor of biomedical engineering and the Neuroscience Institute at Carnegie Mellon University. “By making the models smaller and interpretable, we can actually gain intuition about how the visual system works and develop hypotheses that can be tested in the lab.”
One surprising finding was that even though the model was dramatically reduced in size, it could still capture subtle differences in how neurons respond to similar images. This suggests that the brain’s visual system relies on specific computational patterns that can be represented in a more straightforward way than previously thought. By studying these simplified models, researchers could see how individual neurons detect important features, such as the eyes in a face or the dots in a pattern, offering insights into how visual information is processed at a fine scale.
Beyond helping scientists understand vision, this research also has implications for technology. Modern computer vision systems, like those that recognize faces on a phone or guide self-driving cars, were inspired by the brain but often fail in subtle ways that humans easily handle. Insights from these compact neural models could help improve artificial intelligence systems, making them more robust and adaptable in real-world situations.
“The study also highlights the collaborative nature of this research, combining experimental neuroscience with computational modeling and machine learning,” Smith added. “By working together across institutions and disciplines, we were able to build models that are not only predictive, but also interpretable and meaningful.”
Looking ahead, the researchers are extending these models to account for time, moving from single images to sequences like videos. This could help explain how the visual system tracks movement, recognizes changing patterns, and focuses on important details in dynamic environments. By continuing to simplify and study these models, the collaborators hope to uncover rules that govern how our brains interpret the world around us.
A Caltech-led team of biochemists has homed in on an underexplored small transporter called MurJ that is a vital part of the pathway bacteria use to build their chain-mail-like cell wall. An essential component of the cell wall, called peptidoglycan, provides the strength that allows bacteria to resist pressure. Using advanced tools, the scientists have determined the common mechanism used by three different bacteria-killing viruses to block MurJ from doing its job. The findings reveal a novel target for designing new antibiotics.
Here, three distinct phage Sgl proteins lock the flippase MurJ in an outward-facing state, providing a template for antibiotic discovery.
A paper about the new work was published online in the journal Nature on February 25. The lead author of the paper is Yancheng Evelyn Li, a graduate student in the lab of Bil Clemons, the Arthur and Marian Hanisch Memorial Professor of Biochemistry at Caltech, who is the corresponding author.
"Evolution is powerful, and in bacteria, resistance to antibiotics develops quickly. This means that we now deal with bacteria that are resistant to all the medicines that we have," Clemons says. "In the USA alone, tens of thousands of people die every year from antibiotic-resistant bacterial infections, and that number is rising rapidly. We need new antibiotics to combat this."
Scientists have long been interested in the cellular pathway that builds peptidoglycan, aptly known as the peptidoglycan biosynthesis pathway, as an antimicrobial target. "Peptidoglycan is a unique feature of bacteria, and that makes it an attractive antibiotic target," Clemons says.
Many details of the peptidoglycan biosynthesis pathway are known and have been leveraged as targets for antibiotics. The first widely used antibiotic, penicillin, was discovered by Alexander Fleming in 1928. It and its derivatives, such as amoxicillin, target a late step in this pathway to kill bacteria.
In bacteria, three key proteins—MraY, MurG, and MurJ—facilitate the transfer and transport of peptidoglycan's building blocks from within the cell across the inner membrane barrier. If any of the three proteins fail, peptidoglycan cannot be made, and bacteria die, making them exciting targets for antibiotic discovery. Scientists know a lot about these proteins, but, as noted by Clemons, many basic mechanistic questions remain unanswered.
While the benefits of inhibiting these proteins are clear, there are currently no medicines that target them. However, Clemons says, "We do know that we can find small molecules, either derived from nature or synthesized in chemical libraries, that will inhibit these proteins. Excitingly, recent discoveries have shown that bacteriophages have figured out how to target this pathway."
The survival of viruses that target bacteria, called bacteriophages, or phages, depends on their ability to enter the bacterial cell, make copies of themselves, and then leave to spread as widely as possible. "Getting back out means that they have to get past the peptidoglycan layer. Because it acts like chainmail, the phages get stuck if they can't break through it," Clemons explains.
The Clemons lab has turned some of its focus to single-stranded DNA and RNA phages, tiny phages whose small genomes force them to rely on simple mechanisms for killing bacteria. In 2023, the lab published a paper in Science about one such phage, φX174, that has a long history at Caltech.
The weapons these small phages use to kill bacteria are protein antibiotics called single-gene lysis proteins, or Sgls (pronounced like “sigils”). Most recently, Li and Clemons have focused on Sgls that target MurJ for antibiotic discovery. MurJ is a flippase, a protein that "flips" peptidoglycan building blocks across the cellular membrane so they can be used to build the peptidoglycan chain. Collaborators had already shown that two Sgls, SglM and SglPP7—which are unrelated and produced by two different phages—both cause bacterial death by inhibiting MurJ.
In the current work, Li used Caltech's Beckman Institute Biological and Cryogenic Transmission Electron Microscopy (Cryo-EM) Resource Center to reveal how these two Sgls inhibit MurJ's flipping activity. Flippases, like MurJ, work by alternating the access of the molecules they transport between the two sides of the membrane without ever making an opening in the membrane. For MurJ, binding of the peptidoglycan precursor within the cell triggers a structural change that effectively moves the molecule outside the cell. Li found that both Sgls bind to a groove in the flippase that prevents the protein from making these structural changes.
"It is clear that both of these Sgls bind to MurJ in an outward-facing conformation, locking it into this position," Li says. That is exciting to researchers because the outward-facing conformation of MurJ is accessible to the surrounding environment. In theory, that makes it easier to target with antibiotics than an internal-facing conformation.
Clemons says the discovery is shocking for another reason. "These peptides, which have no evolutionary links to each other, have both figured out how to target MurJ in a very similar way. These are two examples of convergent evolution, in which different evolutionary paths arrive at the same solution. We were surprised!"
The researchers add that because viruses evolve rapidly, there is likely an endless supply of phages that will all have Sgls. Because phages are easy to find, mining these viral genomes can lead to new biological discoveries and new antibiotic targets. In the Nature paper, the scientists did just that with a new phage. Working with a collaborator, they identified a new Sgl, called SglCJ3 (from a genome sequence that is predicted to be a phage and is called Changjiang3), for cryo-EM analysis. Li resolved the structure of SglCJ3 bound to MurJ and found that it also binds in the same outward-facing conformation of MurJ.
"This is a third genome that evolved a distinct peptide to inhibit the same target in a similar way," Clemons says. "It is the first strong evidence that evolution identifies MurJ as a great target for killing bacteria, which means we should follow evolution's lead and develop therapeutics that target MurJ. This demonstrates the power of basic biology to help us solve problems in medicine. Our path is set on leveraging Sgl discovery, and we hope to continue to be supported to turn these concepts into realities."
The paper is titled "Convergent MurJ flippase inhibition by phage lysis proteins." Along with Clemons and Li, additional authors are Caltech graduate student Grace F. Baron; and Francesca S. Antillon, Karthik Chamakura, and Ry Young of Texas A&M University. The work was supported by the Chan Zuckerberg Initiative, the National Institutes of Health, the G. Harold and Leila Y. Mathers Foundation, and the Center for Phage Technology at Texas A&M, jointly sponsored by Texas A&M AgriLife.
Here, MurJ from E. coli transitions from an inward to an outward-facing state, where it is locked by a Sgl protein from one of these bacteria-killing viruses.
Convergent MurJ flippase inhibition by phage lysis proteins
Article Publication Date
25-Feb-2026
The Frontiers of Knowledge Award goes to Charles Manski for incorporating uncertainty into economic research and its application to public policy analysis
The BBVA Foundation Frontiers of Knowledge Award in Economics, Finance and Management has gone in this eighteenth edition to Charles F. Manski for his pioneering contributions to the measurement of uncertainty in economic research and its application to public policy analysis. The professor at Northwestern University (Evanston, Illinois, United States) is described by the committee as a “foundational figure” in the development of modern methods that have transformed how economists infer conclusions from data, report degrees of uncertainty in their models, and evaluate public policies in the face of incomplete evidence.
His work over the course of five decades “has profoundly influenced empirical research across education, health policy, labor markets, industrial policy, and social programs, by encouraging economists to rely on credible, transparent inference” regarding the assumptions on which they base their research.
“The methods he developed assess the degree of confidence we can have in empirical measurement,” the citation continues, enshrining him as “a critical conscience of measurement in the social sciences.”
Manski’s research “has uncovered some of the erroneous assumptions we economists are prey to, which make our predictions and understanding of behavior quite fragile,” says committee member Sir Richard Blundell, David Ricardo Professor of Political Economy at University College London (United Kingdom). “He has taught us to look carefully at the assumptions underpinning our analysis and to base both our predictions and understanding of behavior on evidence that we can really believe in.”
Manuel Arellano, Professor of Economics in the Center for Monetary and Financial Studies (CEMFI) of Banco de España, and secretary to the committee, hails Manski as “a major innovator in methods of empirical measurement in economics and social sciences, who has also made fundamental contributions to methods for measuring the range of possible outcomes that can be conclusively posited in policy analysis under conditions of uncertainty.”
Prof. Arellano regards Manski as especially influential in the measurement of economic agents’ expectations: “Firms, households, individuals…, we all make decisions or hold off from making them depending on how sure we feel about our future circumstances. If I can be confident of what my income will be next year, I will opt for consumption decisions that I would make differently if I had serious doubts.”
Manski, he continues, has been at the forefront in advocating the quantification of agent uncertainty: “This means, for example, using surveys to measure the probabilities people assign to the price of their home or their income increasing, or to being unemployed at some point in the future. Manski has been an innovator in measuring such expectations in surveys and in how to use them in economic analysis. This activity was initially greeted with skepticism, but now central banks like Banco de España, Banca d’Italia and the New York Federal Reserve conduct regular surveys of agent expectations, relying largely on the ideas launched by Charles Manski.”
Going with “deep uncertainty” over “incredible certitude”
“Most economists,” says Manski, “don’t deal with uncertainty. They would prefer to get firm answers to questions. And this is particularly the case in studying public policy.” The public, he explains, “wants to have answers to know if a policy is good or not, and economists like to provide them.” The result is that conclusions in economics are frequently characterized by what he calls “incredible certitude,” with figures and percentages that lack robust empirical support.
“On those occasions when economists do explicitly deal with uncertainty, they do so by stating that there is a 10% or 20% chance that a policy will have a given effect. But my own work focuses on more difficult situations where you can’t put probabilities on things.”
Manski has pioneered the development of econometric methods to study precisely these situations of “deep uncertainty,” with this uncertainty factored into the analysis: “These are difficult public policy problems, and what you really need to do is to quantify the uncertainty. What that implies is that instead of providing a point estimate of some quantity, like what the tax revenue will be under certain income tax policies, I might give a bound, an interval, to say it will be between this and that level. And the width of that bound is going to express how much uncertainty there is. The bound may be very narrow, meaning we know a lot, or very wide, meaning we know relatively little.”
Reflecting uncertainty through a range of possible outcomes
In the late 1980s, during a term directing the Institute for Research on Poverty at the University of Wisconsin-Madison, Manski identified a methodological gap that would change the course of his academic career and ultimately transform modern econometrics.
A colleague of his was conducting a study into trajectories of homelessness, which had run into a critical problem of missing data. Its design was that of a longitudinal study following the lives of homeless people, but of the original sample only around 60% could be located one year later. The main obstacle to data collection was the absence of a fixed address for as many as 40% of the reporting subjects.
Manski explains: “The standard procedure at the time was to assume that when people leave a sample, they do so at random, which implies that the subjects the team couldn’t locate after a year were just like the subjects that they could. In other words, if the people that you can’t find are essentially the same as those you are able to interview for a second or third time, then you don’t need to worry about the missing data.”
But he was not convinced. His reading was that the homeless individuals who could not subsequently be found were likely to be experiencing systematically worse conditions than those who reappeared, precisely because they had gone off grid. And this would invalidate any conclusion based on random missingness.
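The worst-case ("Manski") bounds that grew out of this insight fit in a few lines of code: without any assumption about the missing subjects, the true proportion is bracketed by the two extreme cases in which all of them do, or none of them do, have the outcome. The 60% response rate below echoes the study described above, but the 50% outcome rate among respondents is a made-up figure for illustration only.

```python
def manski_bounds(p_observed, response_rate):
    """Worst-case bounds on a population proportion when the outcome
    is missing for part of the sample.

    p_observed: proportion with the outcome among respondents
    response_rate: fraction of the sample that was observed
    """
    missing = 1.0 - response_rate
    lower = p_observed * response_rate           # assume no missing subject has the outcome
    upper = p_observed * response_rate + missing  # assume every missing subject has it
    return lower, upper

# 60% re-located, as in the homelessness study; suppose (hypothetically)
# half of the respondents had found stable housing.
lo, hi = manski_bounds(0.5, 0.6)
print(f"bounds: [{lo:.2f}, {hi:.2f}]")  # the interval's width equals the missing share
```

Note that the width of the interval is exactly the 40% attrition rate: the data alone can never do better, and any narrower answer is purchased with assumptions, which is precisely Manski's "law of decreasing credibility."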
This problem, which Manski has explored throughout his career, led him to the concept of partial identification, one of his most notable contributions to the field of econometrics. His proposal – as developed in Identification for Prediction and Decision (Harvard University Press, 2007) – was to replace the misleading certainty of a specific value with the use of intervals or bounds; a fundamental methodological contribution whereby, rather than engineering a point estimate, i.e., a single numerical value, a range of outcomes is offered that takes data uncertainty into account.
“The idea is that you cannot arrive at definitive conclusions or definitive numbers, but must always work with intervals. Ideally, you would like the intervals to be as small as possible, but then there’s a trade-off. You can tighten the interval by making more assumptions, but there is a danger in that,” he warns. For, in effect, “you can draw strong conclusions if you make strong assumptions, but then they won’t be believable. As a kind of a shorthand, I call this process the law of decreasing credibility.”
For Iván Fernández Val, Professor of Economics at Boston University, currently visiting at CEMFI, this contribution of Manski’s has been “foundational and transformative,” because it “has changed the way economists think and how we look at data.” To illustrate the importance of the partial identification concept, Fernández Val gives the example of an electoral poll where only two parties are standing: “Some people are going to respond, but others will decline. So even if you interview the entire electorate, you can never know exactly which percentage will vote each way. All you can say is that there is a range of possible proportions for each of the parties, consistent with the observed responses. This observation fundamentally changes the way data is analyzed. Instead of using methods that seek to estimate a single number, what you get is a range of possible values. This may complicate the exercise, but it offers a more realistic picture of the uncertainty inherent in estimates.”
The challenge of analyzing public policies in an “uncertain world”
Charles Manski’s methodological contributions have found practical expression in public policy design, especially in education. The economist’s interest in the field dates back to his PhD thesis, in which he analyzed decision-making at the pre-university stage, focusing on how pupils in their last year of secondary school came to decide whether or not to go to college and, in the affirmative case, which college they would choose.
“I wanted to understand college-going decisions,” says Manski, adding that his interest was driven by a policy question: “In the early 1970s, the federal government began a grant policy, a scholarship scheme. The idea was to try to get more low-income students to go to college, because tuition was expensive.”
His response was to develop a counterfactual hypothesis – an approach that sets out alternative scenarios and compares them to the real outcome – seeking to understand how students take decisions under uncertainty and to assess the impact of the economic assistance offered. “The problem was that we had data on the numbers attending university at the time, but what we wanted to know was how many more 18- or 19-year-olds would choose to go to college and what kinds of colleges they would choose,” if the new grant policy was implemented.
Manski wrote up his conclusions in a paper co-authored with David Wise of the Kennedy School at Harvard, which would lead on to their 1983 book College Choice in America and culminate in one of the awardee’s landmark works, Public Policy in an Uncertain World: Analysis and Decision (2013).
When designing public policies that involve social dynamics, as in the education field, it is important to know how individuals influence each other. It was to address this need that Manski formalized what is known as the “reflection problem,” mathematically analyzing the complex question of whether an individual is acting under the influence of their immediate peers or whether the influence is reciprocal. Among its other uses, this approach can elucidate how the make-up of a class influences the way students learn and interact with each other.
In this scenario, García Montalvo continues, Manski proposes carrying out observations at various points over a period of time. This may not yield a precise value for the influence at work but will at least give “an uncertainty interval that is not too wide.”
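The identification failure at the heart of the reflection problem can be seen directly in the standard linear-in-means model: at equilibrium, a group's mean outcome is an exact linear function of its mean covariate, so the regressor for "peer outcomes" is collinear with the regressor for "peer characteristics" and the two influences cannot be separated. The simulation below is a hedged illustration with arbitrary parameter values, not Manski's own notation.

```python
import numpy as np

rng = np.random.default_rng(0)
G, n = 50, 20  # 50 hypothetical classrooms of 20 students
alpha, beta, gamma, delta = 1.0, 0.5, 2.0, 0.3

x = rng.normal(size=(G, n))            # each student's own characteristic
xbar = x.mean(axis=1, keepdims=True)   # classroom mean characteristic

# Equilibrium classroom mean outcome implied by the linear-in-means model
#   ybar = alpha + beta*ybar + (gamma + delta)*xbar
ybar = (alpha + (gamma + delta) * xbar) / (1 - beta)
y = alpha + beta * ybar + gamma * x + delta * xbar + rng.normal(scale=0.1, size=(G, n))

# Design matrix: intercept, peer mean outcome, own x, peer mean x.
X = np.column_stack([
    np.ones(G * n),
    np.broadcast_to(ybar, (G, n)).ravel(),
    x.ravel(),
    np.broadcast_to(xbar, (G, n)).ravel(),
])
print("rank:", np.linalg.matrix_rank(X))  # 3, not 4: the columns are collinear
```

Because the matrix has rank 3 rather than 4, a regression of y on these four columns cannot recover β and δ separately: the "endogenous" peer effect and the "contextual" peer effect are observationally equivalent, which is why Manski's resolution requires extra structure such as repeated observations over time.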
Optimizing medical decisions under uncertainty
Another area of research enriched by Manski’s contributions is the measurement of uncertainty in health and medical decision-making, as described in his book Patient Care under Uncertainty (Princeton University Press, 2019).
“You might think that medical treatment and clinical decision-making would be a strange topic for an economist to study,” he observes, “but actually there’s a lot in common with studying public policy. A medical doctor is basically acting as what we call in economics a social planner on behalf of the patient. The doctor is trying to do the best thing possible for the patient, whether they suffer from heart disease or cancer or diabetes or whatever, but there’s uncertainty everywhere in medicine.”
Manski contends that the same methodological tools developed for measuring uncertainty in economic research and public policy analysis can be applied to help physicians, health authorities, and patients themselves make better decisions in the presence of incomplete or ambiguous evidence. His book analyzes the considerable uncertainty that exists regarding issues like a patient’s real state of health, their potential response to a given treatment, or the evidence derived from clinical trials on a drug’s real effectiveness.
All too often, he says, we have cases where the results of clinical trials conducted on a limited group of patients are erroneously extrapolated to the general population, statistical errors occur when opting for one therapy over others, and claims about a drug’s effectiveness do not stand up to analysis of the real-world evidence. For all these reasons, the awardee is a strong advocate of incorporating the case-by-case measurement of real uncertainty into healthcare and medical practice in order to make better evidence-based decisions.
“Doctors do all they can to help their patients,” says Manski, “but it’s difficult for them to deal with uncertainty, and often they have these situations of ambiguity or deep uncertainty where they can’t really put probabilities on competing treatment options. So it turns out that the same methodological research that I do on public policy is also applicable to medical decision-making. Health issues seem to me hugely important, which is why I have a whole set of collaborations with medical researchers and health economists to address decision-making problems in this domain.”
Healthy skepticism to arrive at credible conclusions
Manski declares himself proud to be described by the Frontiers of Knowledge committee as “a critical conscience of measurement in the social sciences,” since he “cares deeply” about the credibility of the assumptions economists and other specialists bring to bear in their work. In this respect, he describes himself as healthily skeptical.
“There’s a lot of hype in research, there’s a lot of marketing that goes on, where economists try to draw very strong conclusions that are actually not that believable. And what econometricians like myself do is to say, wait a minute, you have no real basis to draw that conclusion. This, of course, is particularly important with public policy, because public policy is innately heavily political, and people are going to look at it from different positions. It’s essential therefore that the research is robust and credible so people will believe it, and not think somebody’s just making it up.”
At the age of 77, Manski combines a busy teaching schedule with his continuing efforts to refine measurement of uncertainty, with the aim of optimizing decision-making based on the best possible evidence, even among non-econometricians: “I’m currently working on a practical project to turn all these ideas into a web application that even someone without mathematical knowledge can use to optimize decision-making in healthcare or any other field.”
Laureate bio notes
Charles F. Manski (Boston, Massachusetts, United States, 1948) received his bachelor’s degree (1970) and PhD (1973), both in economics, from the Massachusetts Institute of Technology. He spent the first twenty-five years of his academic career at Carnegie Mellon University (1973-1980), The Hebrew University of Jerusalem (1979-1983) and, on his return to the United States, the University of Wisconsin-Madison (1983-1998), where he held a series of professorships in the Department of Economics, as well as leading its Institute for Research on Poverty. Since 1997, he has been Board of Trustees Professor in Economics at Northwestern University (Evanston, Illinois), where he also chaired the Department of Economics from 2007 to 2010. Manski is the author of numerous research papers and nine books, including Discourse on Social Planning under Uncertainty and Identification for Prediction and Decision. He has served as Chair of the Board of Overseers of the Panel Study of Income Dynamics (1994-1998) and as Chair of the National Research Council Committee on Data and Research for Policy on Illegal Drugs (1998-2001). His editorial service includes terms as editor of the Journal of Human Resources, co-editor of the Econometric Society Monograph Series, and a member of the Editorial Board of the Annual Review of Economics.
Nominators
A total of 82 nominations were received in this edition, comprising 70 candidates. The awardee researcher was nominated by Thierry Magnac, Professor of Economics at the University of Toulouse (France), and Richard J. Smith, Emeritus Professor of Econometric Theory and Economic Statistics at the University of Cambridge (United Kingdom).
Economics, Finance and Management committee and evaluation support panel
The committee in this category was chaired by Eric S. Maskin, Adams University Professor in the Department of Economics at Harvard University (United States) and 2007 Nobel Laureate in Economic Sciences, with Manuel Arellano, Professor of Economics in the Center for Monetary and Financial Studies (CEMFI) of Banco de España acting as secretary.
Remaining members were Sir Richard Blundell, David Ricardo Professor of Political Economy at University College London (United Kingdom) and 2014 BBVA Foundation Frontiers of Knowledge Laureate in Economics, Finance and Management; Antonio Ciccone, Professor of Economics at the University of Mannheim (Germany); Pinelopi Koujianou Goldberg, William Nordhaus Professor of Economics and Global Affairs at Yale University (United States); Andreu Mas-Colell, Professor Emeritus of Economics at Pompeu Fabra University and the Barcelona School of Economics (Spain) and 2009 BBVA Foundation Frontiers of Knowledge Laureate in Economics, Finance and Management; Lucrezia Reichlin, Professor of Economics at the London Business School (United Kingdom); and Fabrizio Zilibotti, Tuntex Professor of International and Development Economics at Yale University (United States).