Friday, February 16, 2024

 

Quality of care for patients who call 911 varies greatly across the United States, study finds


New research could lead to more consistency and safety measures

Peer-Reviewed Publication

TAYLOR & FRANCIS GROUP




Emergency medical service (EMS) systems are not consistently providing patients who call 911 with optimal care as defined by new national quality standards, according to a new study from the Icahn School of Medicine at Mount Sinai.

The study demonstrates that EMS performance on key clinical and patient safety measures varies widely across urban and rural communities. The findings, published in the peer-reviewed journal Prehospital Emergency Care, identify opportunities that could lead to improved care during 911 responses and improved outcomes for patients across the United States.

“EMS systems in the United States have traditionally relied upon operational measures, like response times, to measure performance of the system. However, this study highlights how patient care and experience are not solely determined by how fast an ambulance can arrive at the patient’s side,” explains lead author Michael Redlener, MD, Associate Professor of Emergency Medicine at Icahn Mount Sinai. “While fast response times are essential for rare, critical incidents—like when a patient’s heart stops beating or someone chokes—the vast majority of patients benefit from condition-specific clinical care in the early stages of a medical emergency. It is essential for EMS systems, government officials, and the public to know about the quality and safety of care that is occurring and find ways to improve it.” 

This is the first study to use specific safety and clinical quality measures to assess patient care across the entire 911 system in the United States. The research team reviewed all 911 responses in the United States for the year 2019 – more than 26 million responses from 9,679 EMS agencies – and assessed each call against quality measures outlined by the National EMS Quality Alliance, a nonprofit organization formed to develop and endorse evidence-based quality measures for EMS and healthcare partners that improve the experience and outcomes of patients and care providers. The measures cover the treatment of low blood sugar, seizures, stroke, pain, and trauma, as well as medication safety and transport safety. Some of the notable findings were:

  • Pain for trauma patients improved in only 16 percent of cases after treatment by EMS.
  • 39 percent of children with wheezing or asthma attacks did not receive breathing treatments during their EMS call, even though earlier treatment can lead to earlier relief of distressing symptoms.
  • Nearly one-third of patients with suspected stroke did not have a stroke assessment documented, potentially delaying or missing time-sensitive treatment.

The researchers also analyzed the performance of all EMS agencies by agency size and location – urban, suburban, and rural. They discovered substantial differences between agencies that primarily responded in rural communities and those serving urban and suburban areas. Agencies responding mostly in rural areas were less likely to treat low blood sugar or improve pain for trauma patients, and more likely to use lights and sirens unnecessarily, compared with EMS systems in urban and suburban communities. Previous studies have shown that the use of lights and sirens during EMS transport carries a higher likelihood of accidents, injury, and death, so unnecessary use may itself be dangerous. Dr. Redlener says the difference between the highest- and lowest-performing agencies on these key measures is notable.
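
To make the shape of such an analysis concrete, here is a minimal Python (pandas) sketch of how per-agency performance on quality measures could be tabulated and then compared across urban, suburban, and rural agencies. The column names and values are hypothetical placeholders, not the study's actual data or code.

```python
import pandas as pd

# Hypothetical schema: one row per 911 response, flagging whether a quality
# measure applied to the call (denominator) and whether its criteria were met.
responses = pd.DataFrame({
    "agency_id":  ["A1", "A1", "B2", "B2", "C3", "C3"],
    "urbanicity": ["urban", "urban", "rural", "rural", "suburban", "suburban"],
    "measure":    ["hypoglycemia_treated", "stroke_assessed"] * 3,
    "eligible":   [1, 1, 1, 1, 1, 1],   # call falls in the measure's denominator
    "met":        [1, 0, 0, 1, 1, 1],   # measure criteria satisfied on this call
})

# Per-agency performance: share of eligible calls on which each measure was met.
agency_perf = (
    responses[responses["eligible"] == 1]
    .groupby(["agency_id", "urbanicity", "measure"])["met"]
    .mean()
    .rename("pass_rate")
    .reset_index()
)

# Compare the spread of agency performance across urban, suburban, and rural settings.
by_urbanicity = agency_perf.groupby(["urbanicity", "measure"])["pass_rate"].describe()
print(by_urbanicity[["mean", "min", "max"]])
```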

“This work is not about blaming bad EMS services, but about uncovering opportunities to improve patient care,” Dr. Redlener adds. “We have to move away from solely looking at response times and start looking at performance that directly impacts the people we are meant to treat.”

 

Desert ants: the magnetic field calibrates the navigation system


Peer-Reviewed Publication

UNIVERSITY OF WÜRZBURG

Image: The desert ant Cataglyphis nodus at its nest entrance – an inconspicuous hole in the ground that cannot be seen from the ant's perspective. To find its way back there, the ant uses the Earth's magnetic field during its learning walks.

Credit: Robin Grob



They are only a few centimeters long, and their brains have a comparatively simple structure with fewer than one million neurons. Nevertheless, desert ants of the genus Cataglyphis possess abilities that distinguish them from many other creatures: the animals are able to orient themselves to the Earth's magnetic field.

Visible Changes in the Nervous System

A research team from Julius-Maximilians-Universität Würzburg (JMU) discovered this a few years ago. However, it was previously unknown where in the ants' brains the magnetic information is processed. This has now changed: In a new study published in the journal PNAS - Proceedings of the National Academy of Sciences, the team shows that information about the Earth's magnetic field is primarily processed in the ants' internal compass, the so-called central complex, and in the mushroom bodies, the animals' learning and memory centers.

Professor Wolfgang Rössler, holder of the Chair of Behavioral Physiology and Sociobiology at the University of Würzburg, Dr. Pauline Fleischmann, a former scientist at the same chair and now a member of the Neurosensorics/Animal Navigation working group at the University of Oldenburg, and Dr. Robin Grob, who has since moved from Rössler's chair to the Norwegian University of Science and Technology in Trondheim, were responsible for this study.

First Exploratory Walks for Calibration

"Before an ant leaves its underground nest for the first time and goes in search of food, it has to calibrate its navigation system," says Pauline Fleischmann, explaining the background to the work. During so-called learning walks, the animals explore the immediate surroundings of the nest entrance and repeatedly pirouette around their own body axis, with short stops in between. During these pauses, they always look exactly back in the direction of the nest entrance, even though they cannot see it – a tiny hole in the ground.

Thanks to their field studies in southern Greece, where Cataglyphis ants are native, Fleischmann and her colleagues were able to prove that desert ants orient themselves to the Earth's magnetic field during the learning walk phase. Pauline Fleischmann and Robin Grob were once again on site in Greece. This time, however, they not only investigated the ants' orientation behavior while the magnetic field was being manipulated, but also looked for changes in the nervous system of Cataglyphis as an expression of the newly acquired experience.

A Faulty Magnetic Field Disrupts the Learning Process

The zoologists concentrated on young workers that had not yet undertaken any learning walks. The animals were only allowed to set off as part of precisely planned experiments – sometimes under natural conditions, sometimes in a permanently manipulated magnetic field that, for example, displayed chaotic directions or did not allow horizontal orientation. With such faulty directional information, the magnetic field could no longer serve as a reliable reference system for the ants' characteristic look-backs to the nest entrance during their learning walks.

The result: "Our neuroanatomical brain analyses show that ants exposed to an altered magnetic field have a smaller volume and fewer synaptic complexes in an area of the brain responsible for the integration of visual information and learning, the so-called mushroom body," explain Fleischmann and Grob. In the central complex, the region of the ant’s brain in which spatial orientation is anchored, the same findings were observed under certain conditions.

The Number of Synaptic Connections Increases

Desert ants that were allowed to make their first excursions under natural conditions were clearly different. Their sensory experiences, a combination of information about the magnetic field, the position of the sun and the visual environment, triggered a learning process that was accompanied by structural changes in the neurons and an increase in synaptic connections in the aforementioned brain regions.

According to the scientists, this leads to the conclusion that magnetic information not only serves as a compass for navigation, but also as a global reference system that is crucial for the formation of spatial memory.

In Search of the Sensory Organ

The results of their experiments prove "that ants need a functioning magnetic compass during their learning walks in order to calibrate their visual compass and at the same time store images of the nest environment in their long-term memory", as Pauline Fleischmann and Robin Grob say. At the same time, their research extends far beyond the field of compass calibration in ants. Wolfgang Rössler emphasizes that "the results provide valuable information on how multisensory stimuli can influence neuronal plasticity of brain circuits for navigation in a critical phase of brain maturation."

In a next step, the team now wants to investigate in which sensory organ the desert ant receives the magnetic information, and via which sensory pathways it is transmitted and processed. This has not yet been achieved with any animal species that orients itself to the Earth's magnetic field. Due to their manageable, relatively small nervous systems, insects, to which Cataglyphis belongs, offer a unique opportunity to investigate the neuronal basis of magnetic orientation at all levels.

The research team used a 3D Helmholtz coil system to manipulate the Earth's magnetic field around the nest entrance.

Credit: Robin Grob
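
For context, the physics behind such a setup is a textbook result rather than a detail reported in the study: a Helmholtz pair – two coaxial coils of radius R, spaced R apart, each with n turns carrying current I – produces a nearly uniform magnetic field at its midpoint,

$$ B = \left(\frac{4}{5}\right)^{3/2} \frac{\mu_0\, n\, I}{R}, $$

where μ0 is the vacuum permeability. Three such pairs mounted orthogonally, as in a 3D coil system, can superimpose an arbitrary field vector on the ambient geomagnetic field.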

Confocal microscope image of the central area in the brain of the desert ant Cataglyphis nodus. The paired mushroom bodies, which are responsible for sensory integration, learning and memory, can be seen on both sides. In the middle between the mushroom bodies is the central complex, a brain structure responsible for orientation in space.

Credit: Wolfgang Rössler

 

Build biorefineries and let the natural world power Mexico’s economy


Peer-Reviewed Publication

UNIVERSITY OF SURREY





Mexico could grow its economy while saving the planet by building biorefineries that can turn seaweed, sugar cane, cooking oil and even vegetable peel into fuel and pharmaceuticals. Now, the country needs a new ‘bioeconomy plan’ to unlock its potential, according to a large study from the University of Surrey and Mexico’s Instituto Mexicano del Petroleo.  

Professor Jhuma Sadhukhan, from Surrey’s School of Civil and Environmental Engineering, said: 

“In the drive to net zero, it’s easy to fixate on growing crops to fuel our homes or cars. Our study shows Mexico can do so much more.  

“Around the world, businesses are turning biomass into shoe soles, or skin cream. With the right investment and the right plan, there is no reason Mexico cannot lead the world in this growing industry.”  

There are plenty of reasons to prefer biological material from agricultural, forestry or municipal waste over fossil fuels. Plants are renewable and absorb carbon dioxide while they grow.  

Yet, biofuels are just one of many ways to replace fossil fuels in the chemical industry. For example, Mexico could produce 20 million tonnes of seaweed a year. This can be used to extract proteins and chemicals for making shoes, packaging or face cream. 

The key is to build biorefineries – large plants that turn organic matter into chemicals. The more products made in the same facility, the better for the environment and the economy. Even better – they could generate energy and heat at the same time.  

A biorefinery producing 220 kilotonnes per annum costs about the same as a petrochemical facility: $750 million.  

Now, Mexico needs a national bioeconomy plan – removing barriers to make that investment work. The plan should carefully choose which plants to make into which chemicals – so the industry can be as sustainable and profitable as possible.   

The research is published in the Journal of Cleaner Production, and helps promote UN Sustainable Development Goals 8 (decent work and economic growth), 9 (industry, innovation and infrastructure), and 12 (responsible consumption and production). 


 

Hankering for status drives non-executive directors to outstay effectiveness


Desire for social recognition means they are failing shareholders – new research


Peer-Reviewed Publication

UNIVERSITY OF BATH




Long-serving non-executive directors (NEDs) who can’t wean themselves off the social status attached to belonging to the corporate board are failing shareholders and damaging the companies they are meant to serve, new research from the University of Bath and Queensland University of Technology shows.

Board members who exceed recommended tenure limits are putting the identity and self-worth they gain from being a director ahead of their duty to shareholders, compromising board renewal and the company's financial and strategic performance.

Non-executive directors interviewed for the study acknowledged the problem of colleagues staying ‘too long’ and said prolonged tenures can create governance concerns for boards and shareholders, according to the research published in Accounting Forum.

Through in-depth interviews with 11 experienced non-executive directors who have served on 68 boards in Australia, across public companies, government-owned organisations, private companies and mutual banks, the study set out to explore the motives for some non-executive directors to serve on boards beyond recommended tenure limits.

While financial reward and intellectual stimulation undoubtedly played a part in their long service, it was the thought of no longer being able to call themselves a board director – a status that went to the core of how they defined themselves – that compelled them to stay.

Dr Johanne Grosvold, from the University of Bath’s School of Management, said: “Our findings show that for some NEDs their identity as a board director is more important to them than acting in the interests of shareholders. When it’s a healthy time to step down they don’t want to relinquish an important part of who they are, so instead they ignore their accountability to shareholders.” 

‘It’s like another world, where else do you get the respect just because of your role [as a director]?’ said one NED interviewed for the study.

‘People like to be a director – and like to know that people know that they’re a director,’ said another.

Regulators in the UK and Australia recommend that non-executive directors step down after nine to 12 years to make way for new candidates, and the NEDs interviewed for the study spoke of a maximum tenure of 10 years as ideal.

Nevertheless, the interviewed directors could easily recount experiences of board colleagues who had served for 15, 18 and over 20 years.

‘Unfortunately, it would be the one director duty that I see directors most breach – that duty to act in the best interests of the board when it comes to their tenure – it’s actually more about keeping their board seat … they just want to stay on the board, they just don’t want to give up their board seat,’ said a study participant. 

The researchers suggest that a board tenure policy can act as an important safeguard against excessive tenure, and the stale thinking that can ensue.

Co-author Dr Natalie Elms, from Queensland University of Technology, said: “Prolonged tenure is a governance concern for corporate boards and exposes the limits of a board’s self-regulation. The issue of term limits is important for ensuring that directors are acting in the best interests of shareholders, and a board renewal policy can act as an important defence against directors’ reluctance to leave a board voluntarily.”

The study, 'When accountability and identity collide: How director identity shapes board tenure', is published in Accounting Forum.


 

 

Technology with empathy: using conversational agents in education


Various studies have confirmed the effectiveness of digital conversational tools in improving students' motivation and performance


Peer-Reviewed Publication

UNIVERSITAT OBERTA DE CATALUNYA (UOC)





Artificial intelligence and natural language processing technologies are driving the use of pedagogical conversational agents with empathic capabilities. These are virtual tools (e.g. chatbots) that can evoke an empathetic reaction in students while helping them develop their skills. Because they are always available and increasingly effective in providing support for students and teachers, these technologies are growing rapidly, especially in improving and personalizing the online learning experience. However, given how recently they emerged, there is as yet no broad-based scientific knowledge about the application of these platforms in education, which is why a study by Elvis Ortega-Ochoa, a predoctoral researcher in the SMARTLEARN group at the Universitat Oberta de Catalunya (UOC), focused on the principles that govern these technologies.

The study, in which the UOC postdoctoral researcher Marta Arguedas and Thanasis Daradoumis, a member of the UOC's Faculty of Computer Science, Multimedia and Telecommunications, also participated, analysed more than a thousand studies and articles on the subject in order to undertake a scientific review of the most important contributions and draw useful conclusions for their development, such as the design principles to be taken into account when beginning the process of creating these agents.

"Conversational agents must have two of the major skills that teachers put into practice in any teaching and learning process: identifying and regulating emotions by various means, and responding to the student's emotional state while progressing in the intellectual construction and development of their skills", explained Elvis Ortega-Ochoa, who is producing his doctoral thesis as part of the Doctoral Programme in Education and ICT (e-Learning).
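
As a toy illustration of these two skills – identifying a student's emotional state and adapting the reply to it – the following Python sketch pairs a keyword-based affect detector with a simple response policy. Both the cue lists and the responses are hypothetical stand-ins for the trained affect-recognition models and pedagogical strategies a production agent would use.

```python
# Minimal sketch of an "empathic" tutoring loop: (1) identify the student's
# emotional state, (2) adapt the pedagogical response. The keyword matcher is
# a toy stand-in for a real affect-recognition model.

FRUSTRATION_CUES = {"stuck", "confused", "can't", "hate", "give up"}
CONFIDENCE_CUES = {"got it", "easy", "understand", "makes sense"}

def detect_emotion(message: str) -> str:
    """Classify the student's message into a coarse emotional state."""
    text = message.lower()
    if any(cue in text for cue in FRUSTRATION_CUES):
        return "frustrated"
    if any(cue in text for cue in CONFIDENCE_CUES):
        return "confident"
    return "neutral"

def respond(message: str, next_exercise: str) -> str:
    """Adapt the tutoring reply to the detected emotional state."""
    emotion = detect_emotion(message)
    if emotion == "frustrated":
        # Regulate the emotion first, then scaffold the content.
        return ("That step trips up a lot of people, so let's slow down. "
                "Try this smaller version first: " + next_exercise)
    if emotion == "confident":
        # Reinforce the success and raise the difficulty.
        return "Great work! Ready for a harder one? " + next_exercise
    return "Here is the next exercise: " + next_exercise

print(respond("I'm stuck, I just don't get recursion", "trace factorial(3) by hand"))
```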

The study also provides a comprehensive and state-of-the-art overview of the research designs used in the implementation of these agents. In addition, it examines the factors that influence their effectiveness in education, and also evaluates the types of feedback that improve the impact of empathic agents on learning outcomes.

 

From chatbots to intelligent tutoring systems

These technological conversational learning tools must enable interaction with the student, either synchronously or asynchronously, and may be integrated into the educational process in various formats and channels: these range from a standalone system, such as a chatbot, to use within an intelligent tutoring system. "They're currently being used to develop students' soft skills and to provide motivation for students when they're configured with various coaching techniques. At certain points in the teaching process, they can also be useful for introducing new topics or reinforcing content that's already been learned", explained the UOC researcher.

As for their usefulness and students' perceptions, various studies have shown the effectiveness of these conversational tools in improving motivation and learning performance. "We've seen that these benefits are related to the robustness of the interactions with the tool, so the responsibility for success lies with the development techniques used for these services. Virtual agents that make frequent use of artificial intelligence and empathetic capabilities are less monotonous and interrupt conversations less often", added the researchers. The authors also point out that these benefits may be partially subject to the novelty effect inherent in emerging technologies.

Looking towards the future, specialists in the field anticipate that these agents will further refine the pedagogical and empathetic characteristics presented in the conversations, so that online learning can be more personalized and adapted to students' needs. "With the rise of artificial intelligence and widely used language models like ChatGPT, educational institutions are more willing to experiment in order to incorporate scientific breakthroughs into the institution's pedagogical model across the board, which means we're likely to have an institutional benchmark in this area in the coming months or within a few years", explained Elvis Ortega-Ochoa.

This study makes a significant contribution by providing education and IT professionals with an overview of the latest developments in this field. It lists the design principles to be taken into account when creating these agents, and highlights how the empathic component cuts across the overall design of the interaction, the promotion of dialogic learning, proficiency in the field of knowledge, and personalized feedback tailored to the student's level. It also shows how the agents were implemented in learning environments, and outlines the factors to consider when assessing the effectiveness of the design principles.

The study aims to provide a benchmark for educational and technological teams aiming to undertake a project of this type. It also highlights aspects that can be improved, such as the lack of clarity when previous conversations between agents and learners are added to a database in order to determine learning states and personalize responses during the same session.

Finally, the research also discusses ethical considerations related to the use of these agents, and offers some advice for their correct development, such as training the system with unbiased data, ethically managing the information the agent collects, ensuring that its algorithm is inclusive, and preventing it from replicating discriminatory stereotypes.

The researchers also warn that most studies of earlier projects focused on students' perceptions of the quality, experience and emotional bond generated by the interaction, but few assessed the learning and the level of progress in the development of competencies.

Based on these results, the researchers are now considering the possibility of reviewing the scientific breakthroughs in students' emotion regulation strategies during their interaction with an empathic pedagogical conversational agent, and undertaking an in-depth review of the development techniques of these agents to determine which is the most viable according to the resources of the educational and technological team.

 

This research project, funded by the UOC, promotes the United Nations Sustainable Development Goals (SDGs), specifically Goal 4, Quality Education.

 

UOC R&I

The UOC's research and innovation (R&I) is helping overcome pressing challenges faced by global societies in the 21st century by studying interactions between technology and human and social sciences, with a specific focus on the network society, e-learning and e-health.

Over 500 researchers and more than 50 research groups work in the UOC's seven faculties, its eLearning Research programme and its two research centres: the Internet Interdisciplinary Institute (IN3) and the eHealth Center (eHC).

The university also develops online learning innovations at its eLearning Innovation Center (eLinC), as well as UOC community entrepreneurship and knowledge transfer via the Hubbik platform.

Open knowledge and the goals of the United Nations 2030 Agenda for Sustainable Development serve as strategic pillars for the UOC's teaching, research and innovation. More information: research.uoc.edu.

 

A flicker of truth: Piercing the “continuity illusion”


Peer-Reviewed Publication

CHAMPALIMAUD CENTRE FOR THE UNKNOWN

Image: Scientists at the Champalimaud Foundation have identified a brain area that may be key to the "continuity illusion", enabling us to perceive rapid sequences of still images as seamless motion, essential for everyday experiences like watching movies.

Credit: Frederico Severo




A study by a team at the Champalimaud Foundation (CF) has cast a new light on the superior colliculus (SC), a deep-seated brain structure often overshadowed by its more prominent cortical neighbour. Their discovery uncovers how the SC may play a pivotal role in how animals see the world in motion, and sheds light on the “continuity illusion”, an essential perceptual process integral to many of our daily activities, from driving vehicles to watching movies.

Imagine watching a film. The moving images you see are actually a series of static frames shown rapidly. This is the continuity illusion at work, where our brain perceives a sequence of quick flashes as continuous, smooth motion. It’s a phenomenon not just vital to our enjoyment of films but also a fundamental aspect of how all mammals, from humans to rats, perceive the dynamic world around them. This study from the CF’s Shemesh Lab, published in Nature Communications, delves into how this illusion is encoded in the brain.

The speed at which flashes must occur for our brain to see them as constant rather than flickering is known as the Flicker Fusion Frequency (FFF) threshold. This threshold varies among animals; for instance, birds, which need to see fast movements, have a higher threshold than humans, which means they can still perceive light as flickering, rather than continuous, even when it’s blinking very rapidly. The FFF threshold is also important in nature, such as in predator-prey interactions, and can be affected by certain diseases like liver disorders or eye conditions like cataracts.

Interestingly, different methods of measuring this threshold, like observing animal behaviour or recording electrical activity in the eyes or the cortex (the brain’s outer layer that processes what we see), can give different results. This suggests that other parts of the brain also play a role in how we perceive flickering light. In this study, researchers combined functional MRI (fMRI) brain scans, behavioural experiments, and electrical recordings of brain activity to understand how this process works. Their findings indicate that the SC is vital in the transition from seeing individual flashes to smooth motion, and that it may be a key component in the creation of the continuity illusion.

A Three-pronged Attack

“This project was really a ground-up endeavour, and began as a conversation between two PhD students at CF”, notes Noam Shemesh, senior author of the study. “Rita Gil, a student in my lab, was exploring the rat brain’s responses to different light frequencies with MRI. Her discussions with Mafalda Valente, in the lab of Alfonso Renart, led to the development of a behavioural task in which rats were trained to distinguish between flashes and continuous light. Using the MRI and behavioural data, they also recorded the brain’s electrical activity during light stimulation. This approach enabled them to measure and compare FFF thresholds using three distinct methods: MRI, behavioural experiments, and electrophysiology. This multimodal approach is quite rare, and is really what sets this study apart. We’re also grateful to Alfonso Renart for the interesting discussions that contributed to this research”.

For the fMRI experiments, rats were shown visual stimuli at frequencies ranging from low to high. To minimise movement and ensure stable brain imaging, the animals were lightly sedated. “fMRI is a non-invasive technique that tracks changes in blood flow, which are indicative of neural activity in the brain”, explains Gil. “One of the advantages of fMRI is its ability to map brain activity throughout the entire visual pathway, simultaneously capturing activity from multiple regions”. The goal was to observe how the brain shifts from perceiving individual flashes of light (static vision) to a continuous flow of light (dynamic vision), and to pinpoint the brain regions involved.

“When we looked at the SC”, says Gil, “we found markedly different responses based on the frequency of visual stimuli. As the frequency of the visual stimulus increased, moving towards continuous light perception, there was a shift in the SC’s response from positive to negative fMRI signal regimes”. Positive signals reflect increased neural activity, while negative signals potentially signify the opposite. Based on these observations, a hypothesis began to form: might the transition from static to dynamic vision in the continuity illusion involve the suppression of activity in the SC?

To answer this question, they next turned to behavioural experiments. Rats were trained in a specially designed box, where they learned to go to one side port if they perceived the light as flickering, and to the other if they perceived it as continuous. Correct choices were rewarded with water to reinforce the learning. By varying the light frequencies displayed, the team recorded at which point the rats perceived the flickering light as continuous. When they compared the behavioural data with the fMRI data, they made a surprising discovery: the change from positive to negative fMRI signals in the SC at certain frequencies matched the frequencies at which rats behaviourally perceived the shift from flickering to continuous light.
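
For readers who want a concrete picture of how a behavioural threshold can be extracted from such choice data, here is a minimal Python sketch that fits a logistic psychometric function to hypothetical report rates; the FFF threshold is read off as the frequency at which "continuous" reports cross 50 percent. The numbers are invented for illustration and are not the study's data or analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical behavioural data: flicker frequencies (Hz) and the fraction of
# trials on which the animal reported the light as "continuous" at each one.
freqs = np.array([5, 10, 15, 20, 25, 30, 40, 50], dtype=float)
p_continuous = np.array([0.02, 0.05, 0.20, 0.45, 0.70, 0.88, 0.97, 0.99])

def psychometric(f, threshold, slope):
    """Logistic curve; 'threshold' is the frequency of 50% 'continuous' reports."""
    return 1.0 / (1.0 + np.exp(-slope * (f - threshold)))

# Fit the curve and read off the flicker fusion frequency threshold.
(threshold, slope), _ = curve_fit(psychometric, freqs, p_continuous, p0=[20.0, 0.3])
print(f"Estimated flicker fusion threshold: {threshold:.1f} Hz")
```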

Given that the SC showed the strongest correlation between behaviour and fMRI data compared to other brain areas, the researchers targeted it for electrophysiological recordings, directly measuring the electrical activity of its neurons. They used light sedation to maintain consistency with the fMRI conditions. Their aim was to better understand the specific neural mechanisms involved when rats perceive flickering versus continuous light. Did the positive and negative signals detected in fMRI correspond to neural activity and suppression, respectively, as they had hypothesised?

At low light frequencies where rats discerned individual flashes, the researchers observed increased neural activity corresponding to each flash. At higher frequencies perceived as continuous light, the neural responses to these individual flashes diminished, and instead, there were more pronounced responses at both the start and the end of the light stimulation. Notably, there was a marked suppression of neural activity in between these initial (onset) and final (offset) peaks.
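
To make the described pattern concrete, here is a toy numerical sketch (invented firing rates, not the study's recordings) that quantifies it the obvious way: comparing mean activity in the onset, middle, and offset windows against the pre-stimulus baseline, so the sustained suppression between the transient peaks shows up directly.

```python
import numpy as np

# Toy spike-rate trace (Hz) around a high-frequency stimulus, sampled at 100 Hz.
# Values are chosen to mimic the reported pattern: transient peaks at stimulus
# onset and offset, with suppressed activity in between.
dt = 0.01                                # seconds per sample
t = np.arange(0, 6, dt)                  # 6 s: 1 s pre, 4 s stimulus, 1 s post
rate = np.full_like(t, 10.0)             # 10 Hz spontaneous baseline
rate[(t >= 1.0) & (t < 1.2)] = 40.0      # onset peak
rate[(t >= 1.2) & (t < 5.0)] = 4.0       # sustained suppression below baseline
rate[(t >= 5.0) & (t < 5.2)] = 35.0      # offset peak

def mean_rate(lo, hi):
    """Mean firing rate within the time window [lo, hi)."""
    return rate[(t >= lo) & (t < hi)].mean()

baseline = mean_rate(0.0, 1.0)
onset = mean_rate(1.0, 1.2)
middle = mean_rate(1.2, 5.0)
offset = mean_rate(5.0, 5.2)

print(f"baseline {baseline:.1f} Hz, onset {onset:.1f}, "
      f"middle {middle:.1f} (suppressed), offset {offset:.1f}")
```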

Valente notes, “Our measurements of electrical activity in the SC aligned well with our fMRI data, which exhibited onset and offset peaks surrounding the negative signals at higher frequencies. These electrophysiological recordings support the notion that the positive and negative signals recorded in fMRI do indeed represent neural activity and suppression, respectively. It seems that this suppression happens when animals enter a state of dynamic vision mode, potentially serving as a key contributor to flicker fusion and the continuity illusion”.

Reflecting on the study, Valente shares, “What really surprised us was how closely the fMRI signals in the SC matched the behavioural data, even more than those in the cortex, which is typically seen as the main visual processing area in mammals. Equally striking was to find the same patterns in the SC even after we had intentionally disabled the cortex, suggesting that these signals originate in the SC itself and are not just a result of activity from the cortex”.

Gil continues, “This points to the SC’s role as a novelty detector. For instance, at lower light frequencies, each flash seems to be processed as a new event by the SC. But as the frequency increases beyond a certain point, the SC appears to decide the stimulus is no longer new or noteworthy, leading to reduced activity. This could account for the pattern of increased activity at the start and end of high-frequency stimulation, with periods of suppression in between”.

Implications and future directions

“Our findings provide a roadmap for how neuroscience experiments could be conducted in the future”, concludes Shemesh. “By initially using fMRI to present stimuli, researchers can efficiently pinpoint which brain regions to focus on for more detailed electrophysiological studies. This approach not only saves time and resources but also capitalises on fMRI’s strength in reflecting the population activity of brain regions. While it doesn’t offer the granular detail of single-cell activity, fMRI’s ability to show the bigger picture – whether there’s more brain activation or suppression – makes it a valuable first step in guiding subsequent experiments”.

The authors believe that their findings hold relevance for clinical applications. In cases of individuals with visual impairments, optic nerve diseases, or conditions like autism and stroke, this study offers new avenues for both assessment and potential treatment of visual dysfunctions. By determining and comparing FFF thresholds in these individuals against those in healthy populations, and observing how these thresholds evolve, it may be possible to gauge the adaptability of specific brain regions. This could lead to an understanding of which areas of the brain remain amenable to treatment, paving the way for the development of targeted therapeutic interventions.

Looking ahead, the researchers aim to identify which specific cell types in the SC are responsible for the activities they observed. Their broader objective is to deepen our comprehension of the roles of various brain regions within the visual pathway, combining experimental techniques such as targeted lesions or visual deprivation along with MRI studies. These strategies promise to provide a deeper insight into the adaptability and function of visual regions, refining our current model of how each area contributes to visual perception. So, the next time you’re watching a movie, experiencing the illusion of fluid motion from the rapid succession of frames, spare a thought for the intricate processes at play in your brain, and for the ongoing research efforts to unravel them.