Monday, October 07, 2024

 

When dogs and humans find a common language



Institut Pasteur



Humans and dogs have been close companions for millennia. But how can such distant species understand each other? A recent study published in PLOS Biology by researchers from the University of Geneva and the Hearing Institute, an Institut Pasteur Center, reveals that the two species meet each other halfway to communicate. From their findings, the researchers hypothesize that dogs and humans have coadapted to communicate with each other. Comparison with canids not domesticated by humans, such as wolves, could reveal which of the dog's linguistic abilities are due to genetics and which are due to socialization.

Having a species-specific vocalization rate has advantages: it helps a species avoid predators and recognize members of its own kind, including potential mates. But sometimes it can be useful to adapt to a foreign rhythm in order to find out what others are saying, whether competitively, by eavesdropping, or collaboratively, as between dogs and humans.

In this study, researchers have shed light on the adaptation of dogs and humans to enable communication beyond the barriers of species.

Do you speak dog?

Do you change your voice when speaking to your dog? This is a natural and useful process! Eloïse Déaux, a researcher in animal behavior and neuroscience at the University of Geneva, analyzed hundreds of vocalizations from dogs and humans with her team. According to her results, dogs have a slower vocalization rate than humans. And when a human addresses a dog... the rhythm of speech falls halfway between the two. “Humans slow down their speech when addressing their pets; it's a modification that brings them closer to the dog's typical speech rate and could facilitate understanding,” explains Eloïse Déaux.

So how do we explain this difference between humans and dogs? For the researcher, the answer lies not only in anatomy, but also in brain mechanisms, more specifically neural oscillations, the electrical patterns in the brain that result from the synchronous activity of neurons.

Not on the same wavelength

Neural oscillations are classified according to their frequency: delta waves have a frequency between 0.5 and 4 Hertz (Hz), while theta waves have a frequency between 4 and 7 Hz. They are also distinct in their implications for cognitive mechanisms. In human language, for example, gamma waves are associated with the coding of phonemes (the sounds of speech), theta waves with syllabification, and delta waves with intonation (also known as prosody).
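To keep these figures straight, the band edges quoted above can be expressed as a small lookup, sketched here in Python. The edges are the ones given in the text; boundary conventions vary slightly across the EEG literature, and since the gamma range is not specified here, it is omitted:

```python
# Illustrative lookup of the oscillation bands discussed above.
# Band edges follow the figures quoted in the text; boundary
# conventions differ slightly across the EEG literature.
BANDS = {
    "delta": (0.5, 4.0),   # associated with intonation / prosody
    "theta": (4.0, 7.0),   # associated with syllabification
}

def band_of(freq_hz: float) -> str:
    """Return the named band containing freq_hz, or 'other'."""
    for name, (lo, hi) in BANDS.items():
        if lo <= freq_hz < hi:
            return name
    return "other"

print(band_of(2.5))   # delta
print(band_of(5.0))   # theta
```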

And what about other animals? “The study of the role of brain oscillations in the perception of speech is relatively recent, even in humans,” says Anne-Lise Giraud, professor of neuroscience at the University of Geneva and director of the Hearing Institute, an Institut Pasteur Center, who is behind the project. “So, applying it to our four-legged companions is not insignificant.” To study the brain waves of canines, the researchers adapted non-invasive electroencephalography (EEG) protocols. Participants, both humans and dogs, are exposed to auditory stimuli, and their brain activity in response is measured. “We study the correlation, or similarity, between the acoustic signal and EEG oscillations,” explains the researcher. The result: “In humans, the theta waves that make up the EEG signal line up with the acoustic envelope, whereas in dogs, it's the delta waves.”
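The envelope-tracking measure described here can be illustrated with a toy calculation: band-pass a (synthetic) EEG trace into the delta range and correlate it with the acoustic envelope. This is a minimal sketch with fabricated signals, not the authors' actual analysis pipeline:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 250.0                      # EEG sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)

# Synthetic "acoustic envelope": a slow 2 Hz modulation (delta range)
envelope = 1 + np.sin(2 * np.pi * 2.0 * t)

# Synthetic "EEG" that tracks the envelope, plus noise
rng = np.random.default_rng(0)
eeg = envelope + 0.5 * rng.standard_normal(t.size)

def band_component(x, lo, hi, fs):
    """Band-pass filter x between lo and hi Hz (Butterworth, zero-phase)."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Delta-band (0.5-4 Hz) component of the EEG signal
delta_eeg = band_component(eeg, 0.5, 4.0, fs)

# How well does the delta-band EEG "line up" with the acoustic envelope?
r = np.corrcoef(envelope, delta_eeg)[0, 1]
print(f"envelope / delta-band EEG correlation: r = {r:.2f}")
```

With the noise mostly filtered out, the correlation comes out high, mimicking the alignment the researchers measure between oscillations and the speech envelope.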

To confirm that the oscillations measured by EEG support the perception of speech in the auditory signals, the researchers verified that the signals were clearly understood by the participants.

From behavior to understanding

In humans, the intelligibility of a sound stimulus is easy to assess: the person can report their comprehension in spoken or written form. With dogs, that’s a whole different ballgame! “We used dog-directed speech consisting of commands rather than praise, so that we could objectively measure the intelligibility of the stimuli for the dog via its execution of the various commands,” says Eloïse Déaux. It is an innovative approach that brings together behavior, understanding, and brain mechanisms.

Using this approach, the limits of intelligibility of acoustic signals for humans and dogs can be tested: the researchers modify the recorded sentences to speed up or slow down their rate. They also play signals stripped of content or of prosody. Eloïse Déaux describes her method: “To get the content-free condition, for example, we reverse the recording of the master’s command. So 'sit' becomes something like ‘tis’. We then restore the prosody to match that of the original signal.”
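The reverse-then-restore-prosody manipulation can be sketched in a few lines of numpy, with a synthetic chirp standing in for the recorded command. Extracting the amplitude envelope via the Hilbert transform is an illustrative choice here, not necessarily the method used in the study:

```python
import numpy as np
from scipy.signal import hilbert

fs = 16000                       # audio sampling rate (Hz), assumed
t = np.arange(0, 0.5, 1 / fs)

# Stand-in for a recorded command: a rising chirp with a decaying envelope
command = np.sin(2 * np.pi * (200 + 400 * t) * t) * np.exp(-3 * t)

# Step 1: time-reversal destroys the phonemic content ('sit' -> 'tis')
reversed_sig = command[::-1]

# Step 2: re-impose the ORIGINAL amplitude envelope, i.e. the prosody
orig_env = np.abs(hilbert(command))
rev_env = np.abs(hilbert(reversed_sig))
stimulus = reversed_sig / np.maximum(rev_env, 1e-9) * orig_env

# 'stimulus' now carries the original prosody but no intelligible content
```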

According to the scientists' findings, dogs don't respond to commands delivered at too fast a rate. They are able to process human speech, but only if it falls within the delta band, between 1 and 3 Hz. Furthermore, the loss of content also influences comprehension in dogs: with prosody alone, command intelligibility is lower than under normal conditions. “Our study debunks the myth that dogs are only sensitive to our intonation, or prosody. Phonological content is important if they are to understand what we say, but unlike in humans, syllables are not the basic building blocks of comprehension,” reports the researcher.

Furthermore, and critically, the results show that understanding and oscillations are deeply linked. The more the oscillations – theta in humans and delta in dogs – follow the signal’s envelope, the higher the comprehension is.

An exception or a generality?

From their results, the researchers hypothesized that dogs and humans have coadapted to communicate with each other. But is this behavior solely due to the special bond between the two species? “It would be interesting to see whether other animals with which humans interact (cows, sheep, goats, pigs, etc.) have developed the same ability for adaptation, and whether humans reduce their speech rate when talking to them,” note the researchers.

A comparison with other non-domesticated canids, such as wolves, would provide insight into which of the dog's linguistic abilities are due to genetics and which are due to socialization. The NCCR Evolving Language's Special Interest Group (SIG) Canid Cognition, initiated by Klaus Zuberbühler (UniNE), Martin Meyer (UZH) and Anne-Lise Giraud, could tell us more about this in the near future. “A better understanding of the dog's cerebral mechanisms will enable us to gain knowledge about humans and our evolution, as this animal is evolutionarily very distant from us, but also to improve training techniques for our four-legged friends,” concludes Eloïse Déaux.

 

Source

Dog-human vocal interactions match dogs' sensory-motor tuning, PLOS Biology, October 1, 2024

E. C. Déaux1*, T. Piette1, F. Gaunet2, T. Legou3, L. Arnal4 and A-L. Giraud1,4

1 Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
2 Aix-Marseille University and CNRS, Laboratoire de Psychologie Cognitive (UMR 7290), Fédération de recherche 3C, Marseille, France
3 Aix Marseille University and CNRS, Laboratoire Parole et Langage (UMR 6057), 13 100, Aix-en-Provence, France
4 Institut Pasteur, Université Paris Cité, Hearing Institute, Paris, France.
* Corresponding authors


About the NCCR Evolving Language
The Swiss National Centre of Competence in Research (NCCR) Evolving Language is a nationwide interdisciplinary research consortium bringing together research groups from the humanities, from language and computer science, the social sciences, and the natural sciences at an unprecedented level. Together, we aim at solving one of humanity’s great mysteries: What is language? How did our species develop the capacity for linguistic expression, for processing language in the brain, and for consistently passing down new variations to the next generation? How will our capacity for language change in the face of digital communication and neuroengineering?

About the Institut Pasteur
The Institut Pasteur, a non-profit foundation with recognized charitable status set up by Louis Pasteur in 1887, is today an internationally renowned center for biomedical research.  In the pursuit of its mission to tackle diseases in France and throughout the world, the Institut Pasteur operates in four main areas: research, public health, training, and development of research applications. The Institut Pasteur is a globally recognized leader in infectious diseases, microbiology, and immunology, with research focusing on the biology of living systems. Among its areas of investigation are emerging infectious diseases, antimicrobial resistance, certain cancers, neurodegenerative diseases, and brain connectivity disorders. The Institut Pasteur's outstanding research is facilitated by the development of a technological environment of the highest standard, with core facilities for nanoimaging, computational biology and artificial intelligence. Since its inception, 10 Institut Pasteur scientists have been awarded the Nobel Prize for Medicine, including two in 2008 for the 1983 discovery of the human immunodeficiency virus (HIV) that causes AIDS.
The Institut Pasteur is part of the Pasteur Network, a worldwide network of 33 members on five continents, united by Pasteurian values, that contribute to global health.
Since July 1, 2021, the Institut Pasteur has been a research partner organization of Université Paris Cité.

 

Addressing climate change and inequality: A win-win policy solution



CMCC Foundation - Euro-Mediterranean Center on Climate Change





Climate change and economic inequality are deeply interconnected, with the potential to exacerbate each other if left unchecked. A new study published in Nature Climate Change sheds light on this critical relationship using data from eight large-scale Integrated Assessment Models (IAMs) to examine the distributional impacts of climate policies and climate risks. The study provides robust evidence that climate policies aligned with the Paris Agreement can mitigate long-term inequality while addressing climate change.

Led by Johannes Emmerling, Senior Scientist at the Euro-Mediterranean Center on Climate Change (CMCC), the study assesses how climate change is projected to increase inequality within countries, with the Gini index increasing by an average of 1.4 points by 2100. However, implementing ambitious climate policies—such as carbon pricing—can significantly reduce this inequality increase in the long term. The study finds that redistributing carbon revenues equally among citizens can not only offset short-term economic costs but also reduce inequality, lowering the Gini index by nearly 2 points.
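For readers unfamiliar with the metric, the Gini index scores inequality from 0 (perfect equality) to 100. A standard sample estimator, with a toy example of how an equal per-capita transfer lowers the index (illustrative arithmetic only, not the study's model):

```python
import numpy as np

def gini(incomes):
    """Gini index (0-100): half the mean absolute difference
    between all pairs, normalized by the mean income."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    # Closed form using ranks of the sorted sample
    coef = 2 * np.arange(1, n + 1) - n - 1
    return 100 * np.sum(coef * x) / (n * x.sum())

unequal = [1, 1, 1, 1, 16]            # one person holds most income
print(round(gini(unequal), 1))        # 60.0

# Distributing a fixed "carbon revenue" equally lowers the index
redistributed = [i + 4 for i in unequal]
print(round(gini(redistributed), 1))  # 30.0
```

The second figure is lower simply because the same absolute gaps now sit on a higher mean income, which is the mechanism behind equal-per-capita revenue recycling reducing measured inequality.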

“This research demonstrates that with careful policy design, we can address both climate change and economic inequality—two of the most pressing challenges of our time,” says Emmerling. “By showing how redistributing carbon revenues can lead to immediate benefits for lower-income households while setting us on a path to a stable climate, we hope to provide policymakers with a roadmap for more equitable and politically feasible climate action.”

The innovative multi-model comparison highlights that, while climate policies may result in a short-term rise in inequality, well-designed redistribution mechanisms can reverse this trend and contribute to greater social justice.

“As countries around the world look for ways to meet climate goals without exacerbating inequality, this paper comes as especially timely, highlighting the importance of smart policy design to ensure that the benefits of climate action are shared equitably,” said Emmerling.

“This research highlights the need and the possibility to align climate safety and climate justice. This is a research topic of high importance for our institute, and this international collaboration is a testament to the capacity of community research to inform high-stakes issues”, concludes Massimo Tavoni, author of the study and director of the European Institute on Economics and the Environment at CMCC.

Large artificial intelligence language models, increasingly unreliable



According to a study by the Universitat Politècnica de València, ValgrAI and the University of Cambridge, published in the journal Nature



Universitat Politècnica de València

Image: Is ChatGPT reliable? UPV researchers. (Credit: UPV)





According to José Hernández-Orallo, a researcher at the Valencian Institute for Research in Artificial Intelligence (VRAIN) of the UPV and ValgrAI, one of the main concerns about the reliability of language models is that their performance does not match the human perception of task difficulty. In other words, the tasks on which the models actually fail do not match the tasks on which humans expect them to fail. 'Models can solve certain complex tasks in line with human abilities, but at the same time, they fail on simple tasks in the same domain. For example, they can solve several PhD-level mathematical problems, yet they can get a simple addition wrong,' notes Hernández-Orallo.

In 2022, Ilya Sutskever, the scientist behind some of the most significant advances in artificial intelligence in recent years (from ImageNet to AlphaGo) and co-founder of OpenAI, predicted that ‘maybe over time that discrepancy will diminish’.

However, the study by the UPV, ValgrAI and Cambridge University team shows this has not been the case. To demonstrate this, they investigated three key aspects that affect the reliability of language models from a human perspective.

There is no ‘safe zone’ in which models work perfectly

The study finds a discordance with perceptions of difficulty. ‘Do models fail where we expect them to fail? Our work finds that models tend to be less accurate on tasks that humans consider difficult, but they are not 100% accurate even on simple tasks. This means that there is no ‘safe zone’ in which models can be trusted to work perfectly,' says researcher of the VRAIN Institute, Yael Moros Daval.

In fact, the team from the VRAIN UPV Institute, ValgrAI and the University of Cambridge found that the most recent models mainly improve their performance on tasks of high difficulty, but not on tasks of low difficulty, 'which aggravates the difficulty mismatch between the performance of the models and human expectations', adds Fernando Martínez Plumed, also a researcher at VRAIN UPV.

More likely to provide incorrect answers

The study also finds that recent language models are much more likely to provide incorrect answers rather than avoid giving answers to tasks they are unsure of. 'This can lead users who initially rely too much on the models to be disappointed. Moreover, unlike people, the tendency to avoid providing answers does not increase with difficulty. For example, humans tend to avoid giving feedback on problems beyond their capacity. This puts the onus on users to detect faults during all their interactions with models,' adds Lexin Zhou, a member of the VRAIN team who was also involved in this work.

Sensitivity to the problem statement

Is the effectiveness of question formulation affected by the difficulty of the questions? This is another issue addressed by the UPV, ValgrAI and Cambridge study, which concludes that the current trend of progress in the development of language models and greater understanding of a variety of commands may not free users from worrying about making effective statements. ‘We have found that users can be influenced by prompts that work well in complex tasks but, at the same time, get incorrect answers in simple tasks,’ adds Cèsar Ferri, co-author of the study and researcher at VRAIN UPV and ValgrAI.

Human supervision unable to compensate for these problems

In addition to these findings on aspects of the unreliability of language models, the researchers have discovered that human supervision is unable to compensate for these problems. For example, people can recognise tasks of high difficulty but still frequently consider incorrect results correct in this area, even when they are allowed to say 'I'm not sure', indicating overconfidence.

From ChatGPT to LLaMA and BLOOM

The results were similar for multiple families of language models, including OpenAI's GPT family, Meta's open-weighted LLaMA, and BLOOM, a fully open initiative from the scientific community.

Researchers have further found that issues of difficulty mismatch, lack of proper abstention, and prompt sensitivity remain problematic for new versions of popular families, such as OpenAI's new o1 and Anthropic's Claude-3.5-Sonnet models.

'Ultimately, large language models are becoming increasingly unreliable from a human point of view, and user supervision to correct errors is not the solution, as we tend to rely too much on models and cannot recognise incorrect results at different difficulty levels. Therefore, a fundamental change is needed in the design and development of general-purpose AI, especially for high-risk applications, where predicting the performance of language models and detecting their errors is paramount,' concludes Wout Schellaert, a researcher at the VRAIN UPV Institute.

 

Researchers propose a new paradigm for economic performance and sustainability



University of Turku





An international and interdisciplinary group of researchers has compiled a proposal for a new paradigm for economic performance and sustainability, promoting cooperation and ethical behaviour as key drivers of economic success.

A team of researchers led by Associate Professor Jan Pfister from the University of Turku in Finland has introduced a forward-thinking approach to performance management, offering a vision for how businesses and organizations can achieve both economic success and sustainability. In their recent publication in Qualitative Research in Accounting & Management, the researchers introduce the “prosocial market economy” as a model designed to embed cooperation, ethical behaviour, and sustainability into organizational practices.

Funded by the Research Council of Finland, this research reconsiders the focus on self-interest and short-term gains that dominate traditional economic frameworks. Instead, it argues that organizations fostering prosocial behaviours—those that prioritize collaboration and collective well-being—are more likely to succeed over time. The proposed prosocial market economy highlights ethical responsibility as a key driver of both superior performance and long-term sustainability.

One of the key insights of the research is a new framework for success; the researchers explain why and how organizations emphasizing cooperation and ethical behaviour outperform those driven solely by self-interest.

Drawing on evolutionary theory and insights from Nobel Prize-winning economist Elinor Ostrom, the study suggests that integrating the core design principles of prosocial groups into performance measurement and management practice fosters economic success. The study also highlights the role of sustainability: the prosocial market economy envisions businesses that can achieve economic success while contributing positively to environmental sustainability and social well-being.

“In the context of global challenges such as climate change, inequality, and resource depletion, this new model offers an alternative to traditional economic paradigms that prioritize short-term profits. The prosocial market economy provides a framework where business success and ethical responsibility coexist, helping companies address today’s complex demands while ensuring long-term viability and positive social impact,” explains Pfister.

This interdisciplinary research brought together experts from several institutions in Finland, the United Kingdom, the USA, and the United Arab Emirates, and from various fields, including accounting, management, and evolutionary biology.

The collaboration underscores the importance of integrating behavioural science and management studies to foster a more sustainable and cooperative economic system.

From Theory to Practice

The prosocial market economy offers practical tools for organizations across industries, both in the private and public sectors, providing business leaders with principles to design prosocial teams and foster collective success.

According to Pfister, “Our research clarifies why and how prosocial groups—those that prioritize shared success over individual gain—outperform groups driven purely by self-interest. This has profound implications for how we measure and manage performance, and how we cultivate organizational behaviour that protects and nurtures a sustainable future.”

With increasing pressure from consumers, policymakers, and investors to embrace sustainable practices, this research moves beyond the conventional Environmental, Social, and Governance (ESG) debate, which often relies on external measurements. The prosocial market economy focuses on transforming organizational cultures from within, embedding sustainability and cooperation into core values. This inside-out approach helps businesses foster genuine, lasting sustainability, avoiding greenwashing and the hypocrisy of superficial ESG compliance.

The prosocial market economy offers a pathway for organizations to be both economically successful and environmentally and socially responsible.

 

New research sheds light on the Tartessos culture’s sustainable construction skills




University of Helsinki

Image: Casas del Turuñuelo archaeological site. (A) Location of the site in the context of the Iberian peninsula (Digital Terrain Model from OpenStreetMaps, QGIS version 3.4.6); (B) Map of the settlement of the Middle Guadiana Valley during the sixth/fifth centuries BC (DTM from Instituto Geográfico Nacional de España, QGIS version 3.4.6); (C) Aerial view of the Casas del Turuñuelo tumulus, 2017. (Credit: Proyecto Construyendo Tarteso)




An international team of researchers has conducted groundbreaking research at the Casas del Turuñuelo site in Guareña, Badajoz, Spain. This archaeological site stands as the best-preserved earthen building in the western Mediterranean. Thanks to its remarkable state of preservation, a team of researchers from the Universities of Helsinki and Murcia, alongside the Institute of Archaeology (CSIC – Junta de Extremadura), has successfully reconstructed the construction process of this monumental edifice. Esther Rodrigues Gonzales and Sebastian Celestino Perez, CSIC, have led the excavation of this remarkable site since 2014.

Marta Lorenzon from the University of Helsinki, a key member of the research team, explains:

“Our goal was to understand how the earthen materials were processed and used, the organization of labor, and the skill levels of the community involved in the construction. We aimed to shed light on the construction methods, environmental exploitation, and the socio-political structures that enabled the creation of such monumental architecture.”

According to one of the co-authors of the research, Benjamín Cutillas Victoria, the Tartessians' use of earthen construction was a highly adapted and sustainable approach, particularly in local environmental contexts. The fact that they thrived using this form of architecture adds a new dimension to the story of their building skills and culture, one that has not been emphasized much in previous research.

This achievement not only highlights the advanced architectural skills of the Tartessians, but also underscores their sustainable approach to building in harmony with their local environment.

“This research can help us better understand sustainable building practices from the past that might inspire modern construction. The use of local, natural resources in a coordinated manner could offer ideas for eco-friendly building approaches today”, Lorenzon states.

Additionally, understanding how ancient societies organized labor and resource management can provide lessons for modern-day project management and workforce coordination.

“The ability to reconstruct these ancient techniques provides invaluable insights into how this protohistoric culture thrived and adapted, offering a new dimension to our understanding of their ingenuity and resilience”, Lorenzon concludes.

Image: Mudbrick wall documented in the south corridor of the structure and a selection of imported materials.

 

Extensive afforestation and reforestation can brake global warming



Ludwig-Maximilians-Universität München





Simulations by LMU scientists show that large-scale afforestation and reforestation could reduce global peak and end-of-century temperature and shorten the overshoot period during which the 1.5°C target is exceeded.

The Paris Agreement calls on us to limit global warming to 1.5 degrees Celsius above pre-industrial levels. To reach this goal, we need to reduce CO₂ emissions and remove existing CO₂ from the atmosphere. A team led by LMU scientists Yiannis Moustakis and Julia Pongratz has now demonstrated that large-scale afforestation/reforestation (AR) can make an important contribution to such efforts. Simulations by the researchers show that AR could reduce peak and end-of-century temperature and shorten the period during which global temperature exceeds the 1.5-degree target, as they report in the journal Nature Communications.

AR is currently the most frequently deployed method of removing carbon dioxide from the atmosphere. “Overall, the international community has already announced ambitious AR goals of up to 490 million hectares by 2060, and this figure is likely to increase even further as more countries present their long-term plans. We wanted to find out how strongly these measures can influence the climate,” says Moustakis. “Investigating their effects in detail requires the use of cutting-edge models which can represent an interactive carbon cycle that takes into account various processes and feedbacks.”

Modeling with more than 1,200 scenarios

The researchers employed an unprecedented number of more than 1,200 scenarios from Integrated Assessment Models (IAMs) – models that link climate policies with future energy, economic, and land use pathways – as well as restoration priority maps and biodiversity data to develop an ambitious AR scenario. In this way, the scientists were able to take technical and economic challenges into account, while also considering the impact on biodiversity and land availability in the countries.

Based on this, the researchers developed a scenario that foresees 595 million hectares (Mha) of afforested/reforested land by 2060 and 935 Mha by 2100. “This is definitely an ambitious scenario, and one could of course question the feasibility of such ambitious efforts. However, it is not arbitrarily chosen. We tried to develop a scenario that lies within the range of country pledges globally, extend it to the end of the century, and constrain its spatial and temporal features by technoeconomic considerations, while minimizing the impact on biodiversity,” emphasizes Moustakis. Next, the scientists analyzed this AR scenario with the Earth System Model of the Max Planck Institute for Meteorology, which can simulate the effects of afforestation/reforestation on the climate. The simulations used an overshoot scenario – that is to say, a climate scenario in which the emission trajectory pushes average global temperature beyond the 1.5°C target before it falls back below this level around the end of the century. “As delays in drastic climate action persist, such scenarios are becoming increasingly probable,” explains Moustakis.

Average temperature falls, overshoot duration shortens

The results of the simulations show that ambitious AR measures could significantly influence the climate. Specifically, AR could reduce global peak temperature at mid-century by 0.08°C and lower end-of-century temperature by 0.2°C compared to a scenario without AR. In addition, the measures could shorten the time during which global temperature overshoots the 1.5-degree target by 13 years. The footprint of AR on global temperature becomes evident as early as 2052.

Another important finding of the simulations is that the side effects of AR on climate do not outweigh the benefit of its CO₂ uptake. AR not only sequesters carbon but also changes physical properties of the Earth’s surface, such as the albedo – the ability of the surface to reflect sunlight – and the evaporation of water. This can lead to warming in some regions. But as the study shows, the cooling effect of CO₂ absorption predominates: the local warming caused by AR is not strong enough to cancel it out.

“These results show that global AR can in fact make an important contribution to mitigating climate change when applied at large scale,” says Moustakis. “But it is not a panacea and must be viewed in a more comprehensive framework that takes socioeconomic trade-offs equally into account. Planting a forest can create jobs and revenue and promote ecosystem services, but it can also deprive people of their livelihoods, exacerbate poverty, financially or physically displace people, and disturb local food networks.”

 

Groundbreaking wastewater treatment research led by Oxford Brookes targets global challenge of toxic ‘forever chemicals’



Oxford Brookes University




Researchers at Oxford Brookes University have pioneered a groundbreaking method to tackle one of the world’s most persistent environmental threats—toxic chemicals in global water supplies. 

 

They have developed a new machine called a hydrodynamic reactor that uses bubbles which form and collapse due to changes in pressure, a process called cavitation. The reactor removes toxic per- and poly-fluoroalkyl substances (PFAS), also known as “forever chemicals” from water. 

 

PFAS chemicals were invented in the 1930s and used in convenience products such as waterproof clothing, pizza boxes, nonstick pans, and stain-resistant carpets. Concerns about their toxicity were raised in the 1970s and confirmed by scientists in the early 2000s. They have proved to be a health hazard that has infiltrated water supplies worldwide. The chemicals are associated with diseases such as ulcerative colitis, thyroid problems, elevated cholesterol, liver damage, and cancer.

 

Removing PFAS chemicals from water sources is a global challenge. They are released into water sources through industry wastewater, landfills containing contaminated products, domestic wastewater, sewage and agricultural run-off.  

 

Professor Iakovos Tzanakis, who specialises in Engineering Materials at Oxford Brookes and is one of the lead researchers, said: “Once contaminated water is released into rivers, lakes and seas, it eventually infiltrates our public water supplies, including our domestic drinking water. Our challenge has been to find a way of effectively treating water to remove PFAS chemicals sustainably and at scale.”

 

In the UK, the Royal Society of Chemistry has launched a campaign to reduce the levels of PFAS in water supplies. The EU is planning to restrict the use of PFAS chemicals, with new rules introduced in 2024. In the United States, legally enforceable PFAS limits have also been established to protect public safety.

 

Professor Tzanakis said: “Until now, methods of removing PFAS from water have been costly and time-consuming using chemicals and restricted to laboratory scale. But research I have carried out with my colleague Dr Morteza Ghorbani has identified a potential solution.”

 

Professor Tzanakis and Dr Ghorbani, a Royal Society-Newton Fellow at Oxford Brookes, collaborated with Sabanci University (Turkey), white goods manufacturer Beko (Turkey), KTH Royal Institute of Technology (Sweden), and IVL Swedish Environmental Research Institute to develop and test the hydrodynamic cavitation reactor.

 

The reactor forces liquid at high speed through narrow passages, where sudden pressure drops create many tiny bubbles; the bursts of energy released as these bubbles collapse help break down contaminants and clean the water. 

 

Professor Tzanakis said: "This technology has the potential to revolutionise wastewater treatment, making it safer and more sustainable for communities around the globe. The advancements in green hydrodynamic cavitation provide a scalable alternative to current methods, overcoming their limitations.”

 

When the eco-friendly and energy-efficient hydrodynamic cavitation reactor was tested at the Hammarby Sjöstad wastewater treatment plant in Sweden, the results were far better than expected.

 

In just 30 minutes, and without the need for additional chemicals, the reactor achieved a degradation rate of nearly 36% across 11 common PFAS variants.
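To get a rough feel for what that figure implies, here is a back-of-the-envelope sketch. It assumes simple first-order decay, which the press release does not state, so treat it purely as an illustration rather than the study's actual kinetics:

```python
import math

# Hedged illustration: IF PFAS degradation followed first-order kinetics
# (an assumption; the study states only "nearly 36% in 30 minutes"),
# the implied rate constant and half-life would be:
removed_fraction = 0.36
t_minutes = 30.0

k = -math.log(1 - removed_fraction) / t_minutes   # ≈ 0.0149 per minute
half_life = math.log(2) / k                       # ≈ 47 minutes
```

Under that unverified assumption, the reactor would halve the PFAS load roughly every 47 minutes; the real kinetics may well differ, which is exactly what the team says it is now investigating.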

 

Dr Ghorbani, who is also affiliated with Sabanci University, said: “The results were impressive. We didn’t expect that level of PFAS processing in such a short space of time. We are now exploring the fundamental mechanisms of PFAS removal in depth at Oxford Brookes University to better control and optimise the process. 

 

“Our next step is scaling up our reactor to treat larger volumes of wastewater containing PFAS. We are aiming to treat volumes of up to 20 litres of PFAS-contaminated wastewater, and the reactor for this has already been built. After that, our target is to treat volumes of up to 200 litres in a wastewater treatment plant in Sweden.

 

“Our goal is to boost this promising approach by unveiling the intricate physics that drive the technology through the use of various reactors in different countries. We want to ensure this technology is ready for actual wastewater treatment in the near future, in line with the EU's 2035 strategy, by which all PFAS-containing waste must be treated.”

 

This research has been published in the Chemical Engineering Journal, and has already received a prestigious award from the British Council in collaboration with Sabanci University to enable the team to continue their work. 

 

Ends 

About Oxford Brookes University
www.brookes.ac.uk

Oxford Brookes is one of the UK's leading modern universities, and is amongst the world’s top universities in 21 subject areas. Set in a world-famous student city, it enjoys an international reputation for teaching excellence as well as strong links with business, industry and the public sector.

 

Are auditory magic tricks possible for a blind audience?



Cell Press




Magic tricks make the impossible seem possible. Magicians have long captivated audiences with visual tricks, such as pulling a bunny from a hat or sawing someone in half, but tricks that rely on sound are scarce. A new article published in the Cell Press journal Trends in Cognitive Sciences on October 4 explores why creating a magical experience using only sound may be challenging and underscores the importance of making magic accessible to people with blindness.

"Given that magic is about the conflict between perceptual processes and our beliefs, we should be able to experience magic in other senses, but it turned out to be really difficult," says corresponding author Gustav Kuhn (@GustavKuhn), an Associate Professor in Psychology at the University of Plymouth. "If you're born blind, you'll likely never have experienced a magic trick. Why is that? Can we create tricks that could be enjoyed and experienced by people with blindness?"

Only a handful of tricks involve other senses, like touch, and virtually none focus solely on auditory perception. But auditory illusions are everywhere. Stereo sound manipulates audio timing between the ears, creating the illusion of sound coming from different directions. Movies use the Shepard tone, an auditory illusion that gives the impression of an endlessly rising pitch, to build unease and tension that keeps the audience on edge.
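The Shepard tone mentioned above is easy to sketch in code. The following is a minimal illustration, not taken from the article, with arbitrary parameter choices: it sums octave-spaced sine components under a fixed bell-shaped loudness curve, so as the pitch climbs, components quietly fade in at the bottom of the range and out at the top, and the glide seems to rise forever.

```python
import numpy as np

def shepard_glide(duration=4.0, rate=44100, base=27.5, octaves=9):
    """Synthesize one cycle of an ever-rising Shepard glide."""
    t = np.arange(int(duration * rate)) / rate
    position = (t / duration) % 1.0            # rises 0 -> 1 over the clip
    signal = np.zeros_like(t)
    for k in range(octaves):
        octave_pos = (k + position) % octaves  # wraps from top back to bottom
        freq = base * 2.0 ** octave_pos        # instantaneous frequency (Hz)
        # Fixed bell-shaped loudness over log-frequency: the extremes are
        # near-silent, hiding the wrap-around so the rise seems endless.
        amp = np.exp(-0.5 * ((octave_pos - octaves / 2) / 1.5) ** 2)
        phase = 2.0 * np.pi * np.cumsum(freq) / rate  # integrate freq -> phase
        signal += amp * np.sin(phase)
    return signal / np.max(np.abs(signal))     # normalise to [-1, 1]

tone = shepard_glide()                         # 4 s of audio at 44.1 kHz
```

Writing `tone` out as a WAV file and looping it produces the familiar "endlessly ascending" effect; the listener hears continuous rise even though the signal repeats exactly.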

So, why are auditory magic tricks rare? The researchers argue that the reason may stem from the fundamental differences between how the brain processes visual and auditory information. Humans are visual creatures. We tend to trust what we see more than what we hear, making us more surprised when our vision fools us.

Visual perception also reflects the state of the world, while auditory perception is transient. In other words, sound provides information about an event that has happened. Because magic relies on manipulating the perceived state of the world, this distinction between vision and sound may be at the heart of why auditory tricks are elusive.

"If you see a trumpet, you don't say 'I saw a perception of a trumpet,'" says Kuhn. "But if you hear a trumpet, you’re more likely to say, 'I heard the sound of a trumpet.' This is the kind of difference we don't think about."

Another possibility is that magicians simply never considered creating auditory tricks, though the team believes it’s unlikely given the creativity and history of the craft. Still, to find out, the team launched a competition challenging magicians to conjure tricks using only sound, with results expected in November 2024.

"Magic should not rely on vision alone, and yet it's nearly impossible to perform a trick that does not involve our visual perception," says Kuhn. "We don’t fully understand why yet, but this is an interesting question that invites more investigation into our senses and may help make magic more inclusive."

###

This work was supported by a grant from the Agence Nationale de la Recherche.

Trends in Cognitive Sciences, Kuhn et al. “Magic for the blind: Are auditory tricks impossible?” https://cell.com/trends/cognitive-sciences/fulltext/S1364-6613(24)00223-7

Trends in Cognitive Sciences (@TrendsCognSci), published by Cell Press, is a monthly review journal that brings together research in psychology, artificial intelligence, linguistics, philosophy, computer science, and neuroscience. It provides a platform for the interaction of these disciplines and the evolution of cognitive science as an independent field of study. Visit http://www.cell.com/trends/cognitive-sciences. To receive Cell Press media alerts, please contact press@cell.com.


 

Online medication abortion direct-to-patient fulfillment before and after the Dobbs v Jackson decision



JAMA Network Open





About The Study: The findings of this study suggest that online pharmacies play an increasingly critical role in direct-to-patient medication abortion provision, especially with virtual-only platforms, and that barriers to accessing online pharmacies for abortion care should be removed. 



Corresponding Author: To contact the corresponding author, Caila Brander, MSc, email cbrander@ibisreproductivehealth.org.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamanetworkopen.2024.34675)

Editor’s Note: Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

#  #  #

Embed this link to provide your readers free access to the full-text article 

 http://jamanetwork.com/journals/jamanetworkopen/fullarticle/10.1001/jamanetworkopen.2024.34675?utm_source=For_The_Media&utm_medium=referral&utm_campaign=ftm_links&utm_term=100424

About JAMA Network Open: JAMA Network Open is an online-only open access general medical journal from the JAMA Network. On weekdays, the journal publishes peer-reviewed clinical research and commentary in more than 40 medical and health subject areas. Every article is free online from the day of publication.