Tuesday, February 25, 2025

 

Mizzou study reveals key insights on teacher retention post-pandemic



A University of Missouri survey found that 78% of nearly 500 teachers have considered quitting, with more experienced teachers likelier than newer colleagues to consider leaving.




University of Missouri-Columbia




University of Missouri researchers recently surveyed 468 teachers and found that 78% have thought about quitting the profession since the 2020 pandemic.

And seasoned educators — those with more than five years of experience — were more likely to consider leaving than their newer colleagues, according to the study from Mizzou’s College of Education and Human Development.

Teachers cited a range of reasons for wanting to quit, including a lack of administrative support, excessive workloads, inadequate compensation and challenging student behaviors.

“Understanding factors associated with teachers considering leaving the field, such as teacher stress and burnout, is essential for addressing the problems and creating a more supportive environment for educators, which ultimately impacts students and the quality of education they receive,” said Wendy Reinke, a Curators’ Distinguished Professor in Educational, School and Counseling Psychology and associate vice chancellor for research.

When teachers leave, it can hurt students’ learning, emotional well-being and overall success. The findings highlight the need for schools and policymakers to better support teachers, helping them stay in the profession and creating better learning environments for students.

“It’s a ripple effect,” said Reinke, who is also a co-director of the Missouri Prevention Science Institute at Mizzou. “If teachers struggle, instruction suffers, and students don’t learn as they should. When there aren’t enough teachers, schools may hire uncertified staff or increase class sizes, making effective instruction and learning less likely. Disruptive behavior also spreads — kids in disorderly classrooms carry those habits into the next grade.”

This study sets the foundation for future research on developing lasting solutions, including a new project focused on creating and implementing a comprehensive plan to tackle these issues.

“It’s important that we develop ideas for supporting teachers alongside our school partners,” Reinke said. “We want their input to help us figure out what this model would look like to best support teachers, administrators, and ultimately, the students they serve.”

“Teacher stress, coping, burnout and plans to leave the field: A post-pandemic survey” was published in the journal School Mental Health. Co-authors are Keith C. Herman, Melissa Stormont and Farshad Ghasemi from the Missouri Prevention Science Institute.

 

Q&A: How AI is changing the film industry



‘AI is Soulless’: Hollywood Film Workers Strike and Emerging Perceptions of Generative Cinema



University of Washington




In 2023, much of Hollywood went on strike — in part over concerns about artificial intelligence in filmmaking. Now the use of AI has roiled this year’s Academy Awards: Several of the best picture nominees used AI in production. “The Brutalist” showed AI-generated architectural blueprints in one scene, and its editor used a program called Respeecher to hone actors’ Hungarian pronunciation. “Emilia Pérez” used Respeecher to adjust an actor’s singing voice.

Brett Halperin, a University of Washington doctoral student in human centered design and engineering, interviewed picketing film workers about AI during the 2023 strikes. Their concerns ranged from AI’s effects on wages and jobs to the inauthenticity of the resulting art.

Halperin published the findings Feb. 6 in ACM Transactions on Computer-Human Interaction.

UW News spoke with Halperin about how film workers are thinking about AI and the history of technology in filmmaking.

The striking film workers you spoke with raised various concerns about the use of AI in filmmaking. Were you surprised to see some consternation around the Oscars this year?

Brett Halperin: We have seen backlash to AI from workers and the general public manifest in multiple ways over the past few years — from striking to protesting screenings. Many filmmakers have valid concerns about how studio use of AI can undermine their craft and labor. Meanwhile, many writers and artists object to how their materials are scraped and co-opted as training data for machine learning models without their consent or compensation. This makes AI particularly thorny and controversial. But it’s also important to situate this backlash in the broader historical context.

Throughout history, the “death of cinema” trope has resurfaced with each major technological shift. For example, the use of synchronized sound systems starting in 1926 rendered many silent-era acting techniques, production methods and even professions obsolete. While this caused massive disruption, it ultimately created new professions, such as sound specialists, and transformed rather than eradicated cinema. The rise of color, television, digital media and so on followed similar trajectories. AI presents another iteration of this trope, reflecting shifting cultural and industrial anxieties about technological agency. Part of what makes cinema unique relative to other art forms is that it has always depended on complex, evolving technologies. This change is unsettling, but also an opportunity for all of us, including the Academy, to reevaluate what makes film meaningful.

The Academy is reportedly considering making AI disclosure mandatory for the 2026 Oscars. Do you see value in this?

BH: Generally, I think as much transparency as possible is a good thing. But as AI further integrates into production processes and workflows, excessive mandates could become unreasonably cumbersome and difficult to track. So I would start by asking: What do we mean by AI? Computer-generated imagery and its associated algorithms have been in the Hollywood studio system since the 1970s. At what point did CGI and other algorithmic tools become rebranded as AI?

In my view, regulation should focus on where AI use has the potential to undermine workers and manipulate viewers. For example, AI actors and de-aging techniques might further intensify body image issues among the public, as well as take work away from actual actors. Disclosure would help the Academy and spectators understand what they are seeing to not only assess the ethics, but also better judge and criticize films in general.

The uses of AI in “The Brutalist” and “Emilia Pérez” are relatively minor. What were workers’ feelings about AI tools as instruments to assist their work, rather than replace it?

BH: The workers did not oppose AI altogether. They seemed to recognize that technological change is an ongoing part of cinema and expressed degrees of openness to the creative possibilities. They acknowledged that there are potentially useful applications insofar as the decision-making power and control over AI lies with them rather than studio executives forcing its integration.

That said, the workers seemed to find current AI-assisted capabilities to be rather unimaginative and unequipped to augment (or replace) their work. For example, a writer who tried to use it to assist him described the written outputs as “hacky” and “generic.” Many of the workers made compelling cases for why AI cannot take over the tasks that truly define filmmaking, such as fostering authentic human connection on and off screen and telling stories that matter to people.

What were your major takeaways from talking with the film workers? Have those changed at all as the technology has evolved in the last year and a half?

BH: Despite being around for decades in various forms, so-called AI today is exhibiting a “novelty effect” that is currently being exploited but bound to fade. As AI becomes more deeply embedded in cinema, like prior technologies, I suspect the anxiety around it will simmer down.

Rather than fuel the hype cycle, we should remain patient and vigilant in working toward ethical implementations, because AI can cause real harms today that require protections for workers and viewers. While Hollywood unions have won protections through collective bargaining agreements, those agreements will need to be continuously updated as the technology develops, as well as extended to non-unionized workers and workers in other media industries through state and federal policies. I would especially like to see policies that establish informed consent and compensation for artists whose materials are used as AI training data.

What should the public know and consider about AI in filmmaking?

BH: It’s ultimately up to those of us watching movies to decide what we like and don’t like about AI in cinema. We have the power of our attention and wallets to decide what films we want to support. At the end of the day, the Hollywood studio system will invest in what is profitable and divest from what is not. We should listen to the workers for guidance and watch films that align with our values. Despite the current anxiety around AI and the lure of its spectacle today, the public should remember what makes a film truly valuable: the human hearts and souls behind it.

Daniela K. Rosner, a UW professor of human centered design and engineering, is the co-author on the journal article. This research was funded by the Labor Research and Action Network and the National Science Foundation.

For more information, contact Halperin at bhalp@uw.edu.

 

The New Keynesian economics and monetary policy

The Frontiers of Knowledge Award goes to Blanchard, Galí and Woodford in recognition of their profound influence on modern macroeconomics and the design of monetary and fiscal policy rules



The BBVA Foundation recognized Olivier Blanchard, Jordi Galí, and Michael Woodford for “profoundly influencing modern macroeconomic analysis by establishing rigorous foundations for the study of business cycle fluctuations.”




BBVA Foundation

Olivier Blanchard, winner of the BBVA Foundation Frontiers of Knowledge Award in Economics. Credit: BBVA Foundation




The BBVA Foundation Frontiers of Knowledge Award in Economics, Finance and Management has gone in this seventeenth edition to Olivier Blanchard (Paris School of Economics and MIT), Jordi Galí (CREI and Pompeu Fabra University), and Michael Woodford (Columbia University) for “profoundly influencing modern macroeconomic analysis by establishing rigorous foundations for the study of business cycle fluctuations.”

The committee hailed the three economists as “central architects of the New Keynesian paradigm” for integrating “monopolistic competition and nominal and real rigidities into dynamic general equilibrium models with rational expectations.” This paradigm, they added, “has been widely used to analyze the stabilizing effect of monetary and fiscal policy on the cyclical movements of real economic activity, unemployment, and inflation.”

This combination of academic research and influence on policy design has meant that the model they constructed “is used by central banks the world over in their monetary policy decision-making and taught throughout the academic world,” said committee member Antonio Ciccone, Professor of Macroeconomics and Financial Markets at the University of Mannheim (Germany).

“They put the pieces in place for a new model that seeks to avoid business cycle fluctuations and keep unemployment or inflation at relatively stable levels,” explained Fabrizio Zilibotti, Professor of International and Development Economics at Yale University (USA), and also a selection committee member. “The New Keynesian theory,” he continued, “emphasizes the role of expectations, which were essentially left out of traditional models. In this model, demand depends on expectations of the future course of the economy. In other words, it looks to the future to make decisions in the present. This is the opposite of the classical models, which decide measures in the present by looking at events in the past.”

The New Keynesian economics and monetary policy

In the early 1980s, Olivier Blanchard found himself moving between appointments at Harvard and MIT. It was in these years that he came to know Jordi Galí and Michael Woodford, leaving a lasting impression as thesis advisor to the former and as a “brilliant young teacher” in the recollection of the latter. By that time, he had already authored a number of seminal articles on speculative bubbles and their impact on macroeconomic cycles.

Blanchard’s founding role in New Keynesian economics can be traced to a joint 1987 paper with Nobuhiro Kiyotaki (Frontiers of Knowledge laureate in Economics in the 13th edition of the awards), where they studied the effects of monetary policy under monopolistic competition. What they found was that nominal rigidities acted as a major distorting factor: when consumers turn pessimistic and cut back on their spending, aggregate output falls. In the absence of nominal rigidities, interest rates would adjust so that consumers keep buying enough to stop the economy from entering recession. But with nominal rigidities this mechanism cannot work.

“What we argued,” says Blanchard in regard to this landmark work, “was that there was a price externality in the sense that in an environment of imperfect competition, price setters had very little incentive to change their prices in response to demand. But the result of each price not changing was that the price level didn’t move, and this led the demand effect to fall on output rather than on prices. I think our contribution was more to clarify the mechanism, but the paper was very influential.”

In 1997, Michael Woodford co-authored a paper with Julio Rotemberg in which they made an econometric estimation of a quantitative model in the New Keynesian framework, laying much of the theoretical groundwork for the development of the New Keynesian model of monetary policy.

Two years later, Jordi Galí extended and synthesized this theoretical framework. The Catalan researcher published an article with Mark Gertler (Frontiers of Knowledge laureate in Economics in the 13th edition of the awards) and Richard Clarida, in which they turned their attention to the Taylor Rule, a formula relating the interest rate set by the central bank to inflation and another economic variable, such as GDP growth or aggregate demand. The result was a guideline for central banks to design the best monetary policy for a given period.
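
For reference, the rule’s original 1993 formulation can be written, in standard notation (a conventional textbook rendering; the symbols below are ours, not drawn from the article or the laureates’ paper), as

\[ i_t = r^{*} + \pi_t + 0.5\,(\pi_t - \pi^{*}) + 0.5\,y_t \]

where \(i_t\) is the nominal policy rate, \(r^{*}\) the equilibrium real interest rate, \(\pi_t\) current inflation, \(\pi^{*}\) the central bank’s inflation target and \(y_t\) the output gap: the central bank raises rates more than one-for-one when inflation exceeds target, and leans against output fluctuations.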

“The basic New Keynesian model consists of three equations,” Professor Galí explains. “The first describes the behavior of economic activity as a function of the interest rate. The second describes how inflation will perform as a function of the level of activity. And, finally, we have a third equation describing the behavior of the monetary authority – that is, the monetary policy rule being implemented by the central bank.”
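
In standard textbook notation (a conventional rendering of Galí’s description; the symbols are ours, not the article’s), these three equations are commonly written as

\[ x_t = E_t[x_{t+1}] - \tfrac{1}{\sigma}\left(i_t - E_t[\pi_{t+1}] - r_t^{n}\right) \]
\[ \pi_t = \beta\,E_t[\pi_{t+1}] + \kappa\,x_t \]
\[ i_t = \phi_{\pi}\,\pi_t + \phi_{x}\,x_t + v_t \]

The first (the dynamic IS curve) relates the output gap \(x_t\) to the expected real interest rate; the second (the New Keynesian Phillips curve) relates inflation \(\pi_t\) to expected future inflation and current activity; the third is the monetary policy rule, with response coefficients \(\phi_{\pi}\) and \(\phi_{x}\) and a policy shock \(v_t\). Note how the expectations terms \(E_t[\cdot]\) enter the first two equations, reflecting the forward-looking behavior Zilibotti describes above.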

Unconventional monetary policies

“The importance of the Taylor Rule,” Professor Woodford reflects, “is that it led to a renaissance of research by academic monetary economists in studying actual central bank behavior and trying to use sophisticated models to think about the effects of more realistic specifications of monetary policy.”

In a few short years, this analysis would find practical expression in the unconventional monetary policies that dominated the response to the financial crisis at the start of this century. In Galí’s words: “Woodford’s contributions are a key input to the design of the policies followed by leading central banks when interest rates reach zero and have no more room to fall.”

These policies were first deployed in 2002, after the dotcom bubble burst. The U.S. economy was struggling to pull clear of recession and deflation risk loomed large. The Federal Reserve had already cut rates as far as it thought possible, and its governors believed they had exhausted their scope for monetary policy maneuver, until Michael Woodford convinced them otherwise. In summer 2003, he recalls today, “they decided to experiment, I think successfully” with the use of forward guidance, managing expectations on future policy rates. Specifically, they announced their intention to keep interest rates low for a considerable period until certain economic conditions were fulfilled.

The pillar of this approach is that an important part of what central banks do should be to communicate the likely future path of monetary policy to private economic agents. “Before the 2000s,” says Woodford, “it was commonly accepted by central banks that they should make decisions about how to intervene in markets, but they didn’t need to talk to the public at all about what they were doing, and certainly not to give any hints in advance about what they might do in the future. The idea of forward guidance instead argues that in addition to directly intervening in markets, it’s very important what central banks say to people, not just while they’re doing it, but to give them an idea of what they expect to do further down the road, perhaps months or even years into the future.”

When the financial crisis erupted in 2008, these policies had already been tested in a real-world situation, enabling the Federal Reserve to react quickly, implementing both forward guidance and quantitative easing, buying up large quantities of government securities. At around the same time, Olivier Blanchard was taking up the post of economic counsellor with the International Monetary Fund. During his tenure there, he had multiple occasions to apply the models he had helped devise, seeking to give context to the current juncture and outline alternative strategies. Through these critical insights he was able to shape policies at a vital time and contribute centrally to modern economic analysis.

Laureate bio notes

Olivier Blanchard (Amiens, France, 1948) earned his PhD in Economics at the Massachusetts Institute of Technology (MIT) in 1977. After holding a faculty position at Harvard University, he returned in 1983 to MIT, where he has spent most of his career, serving as chair of the Economics Department from 1998 to 2003. He was head of the International Monetary Fund’s research department between 2008 and 2015. Robert M. Solow Professor of Economics at MIT since 2014, he is also the Fred Bergsten Senior Fellow at the Peterson Institute for International Economics. He currently combines these appointments with a professorship at the Paris School of Economics. Blanchard is a member of several associations including the Econometric Society and the American Academy of Arts and Sciences, and has also served on the economic advisors panel of the U.S. Congressional Budget Office. He has published more than 150 articles in academic journals and has authored or co-authored several books on macroeconomics.

Jordi Galí (Barcelona, Spain, 1961) received his PhD in Economics from the Massachusetts Institute of Technology (MIT) in 1989, after completing a BA in Business Sciences and an MA in International Management at the ESADE school in Barcelona. He would later earn a BA in Economic Sciences at Pompeu Fabra University. Since 2001, he has combined his position as Senior Researcher at the Centre de Recerca en Economia Internacional (CREI) – where he served as director until 2017 – with a professorship in Economics at Pompeu Fabra. Galí is a Research Fellow at the Centre for Economic Policy Research (CEPR), a Research Associate at the National Bureau of Economic Research (NBER) and a Fellow of the Econometric Society. He has served as co-editor of the Journal of the European Economic Association and co-director of the CEPR International Macroeconomics Programme. In 2012 he was appointed President of the European Economic Association. Among his many distinctions, Galí shared the 2005 Yrjö Jahnsson Award with Tim Besley (15th Frontiers Laureate in Economics, Finance and Management). His research interest centers on macroeconomics and monetary theory, and he has numerous published articles to his name, notably “The Science of Monetary Policy: A New Keynesian Perspective.”

Michael Woodford (Chicopee, Massachusetts, United States, 1955) earned a Juris Doctor (JD) degree from Yale Law School in 1980, then went on to complete a PhD in Economics at the Massachusetts Institute of Technology (MIT) in 1983. The following year he took up a professorship in the Department of Economics at Columbia University, where he is currently the John Bates Clark Professor of Political Economy. He has also taught at the University of Chicago and Princeton University. Woodford is a recipient of MacArthur and Guggenheim Fellowships, among others, and is a Fellow of the American Academy of Arts and Sciences, the Econometric Society and the Society for the Advancement of Economic Theory. He is also a Research Associate at the National Bureau of Economic Research (Cambridge, Massachusetts), a Research Fellow at the Centre for Economic Policy Research (London) and a member of the CESifo Research Network (Munich). In 2024 he was distinguished with the Erwin Plein Nemmers Prize in Economics.

Nominators

A total of 64 nominations were received in this edition. The three laureates were nominated by: Samuel Bentolila, Professor of Economics at the Center for Monetary and Financial Studies (CEMFI) of Banco de España (Spain); Juan Francisco Jimeno, an advisor at Banco de España (Spain); Klaus Adam, Head of the Department of Economics at the University of Mannheim (Germany); Florin O. Bilbiie, Professor of Macroeconomics at the University of Cambridge, Senior J. M. Keynes Fellow, and Janeway Institute Theme Coordinator (United Kingdom); Teresa Garcia-Milà, Director of the Barcelona School of Economics (Spain); Mark Gertler, Henry and Lucy Moses Professor at New York University (United States); Marek Jarociński, Lead Economist at the European Central Bank (Germany); Peter Karadi, Lead Economist at the European Central Bank (Germany); Anton Nakov, Principal Economist at the European Central Bank (Germany); Kalin Nikolov, Head of Section in the Directorate General of Research and Innovation of the European Central Bank (Germany); Stephanie Schmitt-Grohé, Professor of Economics at Columbia University (United States); and Jaume Ventura, Director of the Centre de Recerca en Economia Internacional, CREI (Spain).

Economics, Finance and Management committee and evaluation support panel

The committee in this category was chaired by Eric S. Maskin, Adams University Professor in the Department of Economics at Harvard University (United States) and 2007 Nobel Laureate in Economic Sciences, with Manuel Arellano, Professor of Economics in the Center for Monetary and Financial Studies (CEMFI) of Banco de España acting as secretary.

Remaining members were Sir Richard Blundell, David Ricardo Professor of Political Economy at University College London (United Kingdom) and 2014 BBVA Foundation Frontiers of Knowledge Laureate in Economics, Finance and Management; Antonio Ciccone, Professor of Economics at the University of Mannheim (Germany); Pinelopi Koujianou Goldberg, Elihu Professor of Economics and Global Affairs at Yale University (United States); Andreu Mas-Colell, Professor Emeritus of Economics at Pompeu Fabra University and the Barcelona School of Economics (Spain) and 2009 BBVA Foundation Frontiers of Knowledge Laureate in Economics, Finance and Management; Lucrezia Reichlin, Professor of Economics at the London Business School (United Kingdom); and Fabrizio Zilibotti, Tuntex Professor of International and Development Economics at Yale University (United States).

The evaluation support panel was coordinated by Elena Cartea, Deputy Vice-President for Scientific-Technical Areas at the Spanish National Research Council (CSIC), and Joan Llull Cabrer, Research Professor at the Institute for Economic Analysis (IAE, CSIC). Its members were: Inés Macho Stadler, Professor of Economics in the Economic Sciences Faculty at the Universitat Autònoma de Barcelona; Laura Mayoral Santamaría, Scientific Researcher at the Institute for Economic Analysis (IAE, CSIC); Xavier Ramos Morilla, Associate Professor in the Department of Applied Economics at the Universitat Autònoma de Barcelona; and Virginia Sánchez Marcos, Professor of Fundamentals of Economic Analysis in the Department of Economics of the University of Cantabria (UNICAN).

About the BBVA Foundation Frontiers of Knowledge Awards

The BBVA Foundation centers its activity on the promotion of world-class scientific research and cultural creation, and the recognition of talent.

The BBVA Foundation Frontiers of Knowledge Awards, funded with 400,000 euros in each of their eight categories, recognize and reward contributions of singular impact in science, technology, humanities and music, privileging those that significantly enlarge the stock of knowledge in a discipline, open up new fields, or build bridges between disciplinary areas. The goal of the awards, established in 2008, is to celebrate and promote the value of knowledge as a global public good, the best instrument we have to confront the great challenges of our time and expand individual worldviews. Their eight categories address the knowledge map of the 21st century, from basic knowledge to fields devoted to understanding the natural environment, by way of other, closely connected domains like biology and medicine, economics, information technologies, social sciences and the humanities, and the universal art of music.

The BBVA Foundation is aided in this endeavor by the Spanish National Research Council (CSIC), the country’s premier public research organization. CSIC appoints evaluation support panels made up of leading experts in the corresponding knowledge area, who are charged with undertaking an initial assessment of candidates and drawing up a reasoned shortlist for the consideration of the award committees. CSIC is also responsible for designating each committee’s chair across the eight prize categories and participates in the selection of remaining members, helping to ensure objectivity in the recognition of innovation and scientific excellence.

 

Rising wages drive innovation in automation technology




University of Zurich




Does increasing wage pressure encourage automation? Economic theory suggests that it does. Rising wages drive innovation in automation technology as firms seek cost-saving innovations to replace expensive labor. But what is the reality? Do businesses actually develop and implement automation innovation in response to external pressures, such as higher wages? A new study by UZH economists provides the first strong empirical evidence to support this idea.


Patent data and wage analyses
To conduct their study, the authors implemented an innovative approach that combined two distinct datasets. The first was a newly developed classification of automation patents, based on European patent data. This new dataset enabled the researchers to measure firms' innovation activity by tracking automation-related patents at the company level. The researchers focused on patents for machine tools, textile machinery and paper machines.

This patent dataset was then combined with a macroeconomic dataset covering 41 countries, focusing on innovative firms exposed to global market forces. As a result, the UZH researchers were able to calculate wage levels and analyze how wage fluctuations drive automation innovation. “This novel approach allowed us to isolate the causal impact of labor costs on technological advancements and to gain a more precise understanding of firms’ responses to wage changes,” says first author David Hémous, associate professor of economics of innovation and entrepreneurship at UZH.


Higher minimum wages boost innovation

The authors analyzed past labor market reforms and their impact on innovation trends. Their study shows that higher minimum wages lead firms to develop more automation technologies. “Our data provide strong empirical support for the idea that higher wages for low-skilled workers incentivize firms to invest in automation innovation to reduce their production costs,” says David Hémous. According to the study, a 1% wage increase leads to a 2% to 5% rise in innovation in the relevant field. Conversely, rising wages for high-skilled labor reduce automation innovation, because operating and installing automation machinery often requires highly qualified workers. Higher wage costs for those workers make automation more expensive, reducing its benefits and thus discouraging innovation.
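
Read as an elasticity (a back-of-the-envelope illustration using the figures quoted above, not a formula taken from the paper), the low-skill result says roughly

\[ \frac{\Delta A}{A} \;\approx\; \varepsilon \cdot \frac{\Delta w}{w}, \qquad \varepsilon \approx 2\text{–}5, \]

where \(w\) denotes the low-skill wage and \(A\) automation patenting in the affected field, so a 1% wage increase translates into a 2% to 5% rise in automation innovation.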

Impact of labor market policies

The Hartz reforms in Germany, which the authors also studied, showed the same mechanism operating in reverse. These labor market reforms, implemented between 2003 and 2005, are widely believed to have increased labor supply and reduced wages, particularly for low-skilled workers. The UZH study confirms this and traces the consequences for innovation. “We found that the reforms led to a decline in automation innovation among firms exposed to the German market,” says Hémous. “Policy shocks such as minimum wage increases and Germany’s Hartz reforms further highlight how labor market policies directly shape incentives for firms to invest in automation – or not – and how they affect long-term economic dynamics such as economic growth.”

Not all innovation responds to wage shocks

The authors also found that non-automation innovations, such as improvements in energy efficiency, do not respond to wage shocks. They therefore call for further research into the influence of rising high-skill wages on the development of recent automation technologies, such as AI.

Literature

David Hémous, Morten Olsen, Carlo Zanella, Antoine Dechezleprêtre. Induced Automation Innovation: Evidence from Firm-level Patent Data. Journal of Political Economy. 2025. DOI: 10.1086/734778

 

Smart is sexy: evolution of intelligence partly driven by love



Australian National University
Male mosquitofish in a fish tank. Photo: Dave Fanner/ANU




The Beatles said it best: Love is all you need. And according to new research from The Australian National University (ANU), the same is true in the animal kingdom. Well, at least for mosquitofish – a matchstick-sized fish native to Central America and now found globally.

According to the ANU scientists, male mosquitofish possess impressive problem-solving skills and can successfully navigate mazes and other tests. Males that perform better have a higher chance of mating. 

Lead author Dr Ivan Vinogradov said male mosquitofish likely evolved better cognitive abilities over time because of the advantage these abilities gave them in finding females and producing offspring – a phenomenon known as “sexual selection”.

“Evolution of intelligence in animals has long been thought to have been driven by natural selection. Animals that were better at problem solving were more adept at gathering food, finding shelter, and avoiding predators and hence lived longer,” Dr Vinogradov said. 

“They then passed on these genes to their offspring, helping future generations become smarter over time. 

“But there is another explanation for the evolution of intelligence: braininess is an attractive quality to the opposite sex. A better brain might help an animal find more mates, have more sex, and eventually have more babies. 

“This suggests that intelligence in mosquitofish partly evolved through sexual selection, where traits that boost mating and fertilisation success become more common over generations. Sexual selection is usually stronger in males than females because in most species there are more males seeking mates than females ready to mate and breed. 

“Our study suggests that intelligence in mosquitofish isn’t only driven by their need to find food or avoid predators, but also by the complex challenges of finding love. This shows that, much as for humans, love conquers all.”

The researchers measured the intelligence of male mosquitofish by putting them through a series of underwater tests – including navigating mazes, detouring around transparent barriers and learning to remember different coloured spots – offering a reward to fish that were able to complete the tests.

The team then tracked, over a two-month period, how many offspring each male produced when competing against other males for a female partner to mate with. 

After performing more than 2,000 paternity tests, the scientists found that smarter males who successfully navigated the underwater tests mated with more females and produced more offspring than less-intelligent fish that failed the tests. 

“Our findings imply that the evolution of cognitive abilities may have been driven by sexual selection for males who gained more mating opportunities,” study co-author Professor Michael Jennions said. 

“Something about these males seemingly gave them an edge in securing mating opportunities. Perhaps females recognised and preferred smarter males, or maybe smarter males were better at chasing the females and forcing them to mate, a common, if unpleasant, practice in mosquitofish. 

“Future research is needed to observe the males more closely and see how the mating behaviour of smarter and dumber males differs.” 

The research is published in Nature Ecology & Evolution. The research team included scientists from Australia, Germany and South Africa.