It’s possible that I shall make an ass of myself. But in that case one can always get out of it with a little dialectic. I have, of course, so worded my proposition as to be right either way (K.Marx, Letter to F.Engels on the Indian Mutiny)
Thursday, January 23, 2025
Claims for the world’s deepest earthquake challenged by new analysis
The magnitude 7.9 Bonin Islands earthquake sequence, which ruptured deep within the earth near the base of the upper mantle, did not include an aftershock that extended to record depths into the lower mantle, according to a study in The Seismic Record.
When Hao Zhang of the University of Southern California and colleagues re-examined the aftershock sequence of the May 2015 earthquake, they did not find evidence for a 751-kilometer-deep aftershock as reported by previous researchers. This aftershock has been called the deepest earthquake ever recorded.
Instead, their study found a distribution of aftershocks compatible with a 12-kilometer-thick sliver of the mantle mineral olivine, a finding that could shed light on how deep earthquakes occur.
The Bonin Islands earthquake, which ruptured 1000 kilometers offshore of Japan in a remote part of the Pacific Ocean, is one of the deepest and largest earthquakes ever recorded. The earthquake took place within the Izu-Bonin subduction zone 680 kilometers below the Earth’s surface.
The mechanisms behind deep earthquakes—those occurring 500 kilometers or deeper—are something of a mystery to seismologists. Extremely high pressures and temperatures at these depths make rock more likely to bend or deform plastically, rather than break in the brittle fashion that causes earthquake rupture at shallower depths.
These earthquakes also typically produce few aftershocks, Zhang noted, depriving seismologists of data that could help explain how these deep events are generated at subduction zones.
Plastic deformation “limits the formation of extensive fracture networks that would typically generate aftershocks,” he said. “Additionally, the high confining pressures promote efficient redistribution of stress following the mainshock, further reducing the likelihood of subsequent seismic events.”
One previous study of the Bonin Islands earthquake reported a foreshock sequence for the event, while a second study detected a potentially record-breaking deep aftershock in the lower mantle.
“Both findings could significantly advance our understanding of deep earthquakes, if accurate,” said Zhang. “However, these two catalogs are inconsistent, and both have methodological limitations. Therefore, it is essential to re-examine the aftershock sequence using improved techniques.”
To gain a better look at the deep and remote earthquake, Zhang and colleagues turned to data collected by a dense seismic array in Japan called Hi-Net, using a combination of techniques to precisely locate seismic signals coming from the event.
Their new analysis detected no foreshocks but identified 14 aftershocks in the upper mantle within a 150-kilometer radius of the earthquake’s hypocenter. One set of aftershocks aligned with the rupture plane of the earthquake one week after the mainshock, with a second set dispersing over a wider area during the second week.
“While it remains challenging to definitively reject the existence of seismicity initiated in the lower mantle and its associated mechanisms, our results do reject the most compelling lower mantle seismicity claim to date,” the researchers write in their paper.
The aftershock pattern is compatible with the presence of a metastable olivine wedge or MOW, the researchers suggested. In a subducting slab, olivine can delay its transition into other mineral states under high temperature and pressure. “This delayed transformation may generate stress and release energy, potentially triggering deep earthquakes,” Zhang said.
With MOWs as potential earthquake nucleation sites, some researchers have proposed this mechanism of transformational faulting as one of the main ways that deep earthquakes occur, he added.
“Furthermore, MOWs offer insights into the thermal structure and behavior of subducting slabs, with colder slabs being more likely to preserve metastable olivine at greater depths,” Zhang added. “By studying MOWs, we can refine models of deep earthquake generation and improve our understanding of the dynamic processes in Earth’s interior.”
Photo credit: Susan Alberts, Duke University
DURHAM, N.C. -- A few things come to mind when we imagine the “alpha male” type. They’re the ones calling the shots, who get all the girls. But there’s a downside to being a strong and powerful alpha stud -- at least if you’re a baboon.
Studies show that despite their high rank, the No. 1 males in baboon society are also some of the most stressed out, as measured by their high levels of glucocorticoids, the hormones involved in the ‘fight-or-flight’ response.
But the leaders’ stress burden comes from a surprising source. New research reveals it’s not the time they spend fighting with other males that raises their stress hormone levels. Instead, it’s the effort they put into their mates.
That’s according to a new study of wild baboons in Kenya led by Duke University professor Susan Alberts with Catherine Markham at Stony Brook University, as well as senior research scientist Laurence Gesquiere of Duke.
As anyone who has climbed the career ladder knows, leaders can face many sources of stress. Office politics. Tough decisions. The constant pressure to perform.
Alpha male baboons may not have deadlines to meet, but it doesn’t mean their lives are stress-free, the researchers said.
To reach the top spot and stay there, male baboons must rely on their strength and battle skills to jockey for position and earn their place.
The dominant male in a group also jealously guards his right to mate, closely monitoring females during their fertile periods and following them around for days at a time to make sure he is the one who fathers any offspring.
Since 1971, researchers have monitored individual wild baboons in southern Kenya on a near-daily basis, keeping careful track of their social interactions and other behavior over their lifetimes as part of the Amboseli Baboon Research Project.
One of the first hints that the top spot can pose distinct challenges for males came from a study Gesquiere and colleagues published in 2011, which revealed a striking pattern:
Compared with non-leaders, alpha males had significantly higher levels of glucocorticoids.
When stress kicks in, the brain releases these and other hormones that mobilize energy to help the body cope with challenging situations.
In humans, “this response is activated for every kind of challenge we face, whether it's running down the block because you're late for a meeting, running a marathon, or going into a stressful meeting with your boss,” Alberts said.
But what exactly was fueling the baboons’ stress response was unclear.
In the new study, published Jan. 22 in the journal Proceedings of the Royal Society B, the researchers combined 14 years of behavioral records for 204 male baboons with data about their stress hormones, measured from droppings the animals left behind.
The results show, once again, that life is tough at the top. Glucocorticoid levels were 6% higher for alpha males than the rest of the ranks.
But the researchers also discovered a new clue -- top-ranking males also had lower levels of a thyroid hormone called T3, indicating they are burning more calories than they’re taking in.
“Being alpha really has energy consequences,” Gesquiere said.
The new study suggests it’s not the alpha male’s aggressive behavior -- the time he spends fighting and letting everyone know he’s the boss -- that’s taxing his energy reserves.
No matter how often the alpha males threatened, bullied, or pushed around other males, neither their stress hormones nor their thyroid hormones were affected.
“That was a big surprise,” Gesquiere said.
It may be that their flare-ups with other males are too subtle or short-lived to get to them, Alberts explained.
Or that once they get to the top, alpha males no longer need to be overtly aggressive to prove themselves. The alpha can scare other males away with merely a raised eyebrow or a flash of his fangs.
Instead, the researchers found the number one energy drain and source of stress for alpha male baboons was, you guessed it, their mates.
Both their stress hormones and their thyroid hormones -- indicators of their energy demands -- directly correlated with the time they spent monopolizing fertile females.
“They’re essentially staking a claim; preventing other males from gaining access,” Alberts said.
“They're constantly interrupted,” Alberts said. They’ll start to dig up or pry open a tasty morsel, “but then the female gets up and walks away, and they have to abandon it.”
Over time, the demands of being a top-ranking male may be harmful to their health.
Previous studies of Amboseli baboons have found that top-ranking males age faster -- as measured by chemical changes to their DNA -- and live shorter lives than those with lower social standing.
Human hierarchies are more complicated, and so the implications for humans are less clear, Gesquiere said.
We belong to multiple communities throughout our lives, each with its own social dynamics. Someone who’s a peon in the office may be a top player on the pickleball court, for example.
But for dominant male baboons, “stress definitely has long-term consequences,” Gesquiere said.
This research was supported by grants from the National Science Foundation (IOS-1926060 and IOS-1926040) and the National Institutes of Health (NIA R01AG053308, R01AG071684, R01AG075914, and R61AG078470).
CITATION: "Energetic Costs of Social Dominance in Wild Male Baboons," Laurence R. Gesquiere, Christine Adjangba, Georgia Young, Clara Brandon, Sophie Parker, Emily E. Jefferson, Tim L. Wango, Vivian K. Oudu, Raphael S. Mututua, J. Kinyua Warutere, I. Long’ida Siodi, A. Catherine Markham, Elizabeth A. Archie, Susan C. Alberts. Proceedings of the Royal Society B, Jan. 22, 2025. DOI: 10.1098/rspb.2024.1790
White-faced capuchin monkeys in Costa Rica who experienced more intense physiological responses to mild droughts were more fit to survive extreme drought, researchers found in a new UCLA-led study.
Most research on wild animals and humans has focused on the damage the stress response causes to a system — “the wear-and-tear.” In this new study, however, published in the journal Science Advances, a team of researchers sought to examine the adaptive nature of the stress response in wild primates and how a more robust stress response might help them when faced with catastrophic events.
“We wanted to understand how the stress response adaptively helps these individuals survive greater challenges,” Susan Perry, a UCLA evolutionary anthropologist, field primatologist and co-author of the study, said.
In the absence of an experimental design that could apply the same stressor to all individuals in a population, the researchers took advantage of a natural experiment — a particularly severe El Niño drought — to investigate the relationship between hormonal responses to this extreme stressor and survival outcomes of white-faced capuchins at the Lomas Barbudal Capuchin Monkey Project in Guanacaste, Costa Rica.
In Guanacaste, Perry and a team of researchers have been documenting the lives of monkeys and studying their social behaviors and survival strategies for 35 years. For this particular study, Perry’s research group (including current and former graduate students, and research assistants Irene Godoy, Ashley Mensing, Juliane Damm and Colleen Gault) collaborated with University of Michigan researchers Jacinta Beehner and Sofia Carrera.
How a drought led to discovery
The El Niño drought that spread across Central and South America from 2014 to 2016 was the biggest in recent history and led to the deaths of many monkeys. While devastating for the monkeys and the researchers who have studied them for so long, Perry’s team was able to make use of the environmental circumstances and samples collected from the six years prior to study the relationship between the endocrinologic stress response and survival in the white-faced capuchins.
During the drought, the monkeys started to lose weight, revealing vertebrae and rib cages, and mothers even rejected caring for and nursing their infants, abandoning them to go forage. Monkeys who would normally babysit infants in a mother’s absence also were not taking on child care responsibilities. Mortality rates soared, particularly for infants and older females. This was the only time in Perry’s long-term study that these monkeys, who are usually behaviorally flexible, failed to adapt to an environmental stressor by simply changing how they behaved (e.g., changing their diet).
For 14 female monkeys who survived and 14 who had died, the researchers analyzed glucocorticoid levels in fecal material that had been collected from them in the six years prior to the El Niño drought (2008-2013). Glucocorticoids are steroid hormones that regulate metabolism, inflammation and the immune system.
What glucocorticoids in monkey fecal matter revealed
The researchers discovered that the monkeys who showed a steeper rise in these stress hormones during the mild droughts were more likely to survive the severe El Niño drought than those monkeys who experienced less of a stress response. The findings controlled for other conditions known to affect these hormone levels, such as pregnancy and time of day.
With a clearer picture of what an adaptive stress response looks like for this species and population, Perry’s team can begin to ask questions about the origin and maintenance of individual differences in the endocrine stress response and whether these differences affect survival.
The study also puts a spotlight on the value of long-term studies in the face of climate change. As weather intensifies globally, longitudinal studies of how wild animals cope with changes in temperature, rainfall and food availability can help us understand which species can adapt rapidly through learning or physiological flexibility and which species lack the ability to cope with major environmental changes during their lifetimes. This knowledge can be useful for conservation reasons. For example, a population of highly endangered animals that cannot quickly adapt to change might need to be moved to a place that now has climatic conditions that match the environment in which that population evolved.
New, embodied AI reveals how robots and toddlers learn to understand
With a brain-inspired architecture, the model offers insights into cognitive development and information processing in neural networks.
Okinawa Institute of Science and Technology (OIST) Graduate University
We humans excel at generalization. If you teach a toddler to identify the color red by showing her a red ball, a red truck and a red rose, she will most likely correctly identify the color of a tomato, even if it is the first one she has ever seen.
An important milestone in learning to generalize is compositionality: the ability to compose and decompose a whole into reusable parts, like the redness of an object. How we get this ability is a key question in developmental neuroscience – and in AI research.
The earliest neural networks, which later evolved into the large language models (LLMs) now revolutionizing our society, were developed to study how information is processed in our brains. Ironically, as these models became more sophisticated, the information processing pathways within them also became increasingly opaque, with some models today having trillions of tunable parameters.
But now, members of the Cognitive Neurorobotics Research Unit at the Okinawa Institute of Science and Technology (OIST) have created an embodied intelligence model with a novel architecture that allows researchers access to the various internal states of the neural network, and which appears to learn how to generalize in the same ways that children do. Their findings have now been published in Science Robotics. “This paper demonstrates a possible mechanism for neural networks to achieve compositionality,” says Dr. Prasanna Vijayaraghavan, first author of the study. “Our model achieves this not by inference based on vast datasets, but by combining language with vision, proprioception, working memory, and attention – just like toddlers do.”
Perfectly imperfect
LLMs, founded on a transformer network architecture, learn the statistical relationship between words that appear in sentences from vast amounts of text data. They essentially have access to every word in every conceivable context, and from this understanding, they predict the most probable answer to a given prompt. By contrast, the new model is based on a PV-RNN (Predictive coding inspired, Variational Recurrent Neural Network) framework, trained through embodied interactions integrating three simultaneous inputs related to different senses: vision, with a video of a robot arm moving colored blocks; proprioception, the sense of our limbs’ movement, with the joint angles of the robot arm as it moves; and a language instruction like “put red on blue.” The model is then tasked to generate either a visual prediction and corresponding joint angles in response to a language instruction, or a language instruction in response to sensory input.
The system is inspired by the Free Energy Principle, which suggests that our brain continuously predicts sensory inputs based on past experiences and takes action to minimize the difference between prediction and observation. This difference, quantified as ‘free energy’, is a measure of uncertainty, and by minimizing free energy, our brain maintains a stable state. Together with limited working memory and attention span, the AI mirrors human cognitive constraints, forcing it to process input and update its prediction in sequence rather than all at once like LLMs do. By studying the flow of information within the model, researchers can gain insights into how it integrates the various inputs to generate its simulated actions.
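The core loop described above — predict the next input, compare it with what actually arrives, and nudge an internal state to shrink the gap — can be illustrated with a toy example. This is a hypothetical sketch of the general predictive-coding idea only, not the paper's PV-RNN: the linear model, weights, and learning rate are all invented for illustration.

```python
import numpy as np

# Toy predictive-coding sketch (illustrative, not the paper's PV-RNN):
# an internal state z is updated step by step to minimize the error
# between predicted and observed input, rather than all at once.

W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, -0.5]])           # fixed toy generative weights: state -> observation

def predict(z):
    return W @ z                      # the model's guess at the sensory input

def infer(z, obs, lr=0.1, steps=50):
    """Gradient steps on squared prediction error (a stand-in for free energy)."""
    for _ in range(steps):
        err = predict(z) - obs        # prediction error drives the update
        z = z - lr * (W.T @ err)      # descend 0.5*||err||^2 with respect to z
    return z

true_state = np.array([1.0, -0.5])
obs = W @ true_state                  # the sensory input to be explained

z = np.zeros(2)
for _ in range(3):                    # repeated, sequential exposure to the input
    z = infer(z, obs)

print(np.round(predict(z), 3))        # prediction now closely matches the observation
```

The point of the sketch is the sequential character of the computation: the state is refined observation by observation, mirroring the incremental processing the researchers contrast with the all-at-once inference of LLMs.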
It is thanks to this modular architecture that the researchers have learned more about how infants may develop compositionality. As Dr. Vijayaraghavan recounts, “We found that the more exposure the model has to the same word in different contexts, the better it learns that word. This mirrors real life, where a toddler will learn the concept of the color red much faster if she’s interacted with various red objects in different ways, rather than just pushing a red truck on multiple occasions.”
Opening the black box
“Our model requires a significantly smaller training set and much less computing power to achieve compositionality. It does make more mistakes than LLMs do, but it makes mistakes that are similar to how humans make mistakes,” says Dr. Vijayaraghavan. It is exactly this feature that makes the model so useful to cognitive scientists, as well as to AI researchers trying to map the decision-making processes of their models. While it serves a different purpose than the LLMs currently in use, and therefore cannot be meaningfully compared on effectiveness, the PV-RNN nevertheless shows how neural networks can be organized to offer greater insight into their information processing pathways: its relatively shallow architecture allows researchers to visualize the network's latent state – the evolving internal representation of the information retained from the past and used in present predictions.
The model also addresses the Poverty of Stimulus problem, which posits that the linguistic input available to children is insufficient to explain their rapid language acquisition. Despite having a very limited dataset, especially compared to LLMs, the model still achieves compositionality, suggesting that grounding language in behavior may be an important catalyst for the impressive language learning ability of children.
This embodied learning could moreover point the way toward safer and more ethical AI in the future, both by improving transparency and by enabling a model to better understand the effects of its actions. Learning the word ‘suffering’ from a purely linguistic perspective, as LLMs do, would carry less emotional weight than for a PV-RNN, which learns the meaning through embodied experiences together with language.
“We are continuing our work to enhance the capabilities of this model and are using it to explore various domains of developmental neuroscience. We are excited to see what future insights into cognitive development and language learning processes we can uncover,” says Professor Jun Tani, head of the research unit and senior author on the paper. How we acquire the intelligence to create our society is one of the great questions in science. While the PV-RNN hasn’t answered it, it opens new research avenues into how information is processed in our brain. “By observing how the model learns to combine language and action,” summarizes Dr. Vijayaraghavan, “we gain insights into the fundamental processes that underlie human cognition. It has already taught us a lot about compositionality in language acquisition, and it showcases potential for more efficient, transparent, and safe models.”
Article Title
Development of compositionality through interactive learning of language and action of robots
Article Publication Date
22-Jan-2025
Game, set, match: Exploring the experiences of women coaches in tennis
As the Australian Open continues in full swing, a new study can reveal female tennis coaches experience significantly more barriers in their profession – including discrimination and marginalisation – than their male counterparts, with fewer than half seeing themselves continuing as coaches long term.
Led by Dr Jasmine Petersen from Flinders University’s Caring Futures Institute, the study surveyed 243 Australian tennis coaches, a third of whom were women, assessing their motivations and barriers to coaching and the support they received.
“Women coaches continue to be a minority in sport and tennis is no exception, with only 20% of tennis coaches globally being women and 26% within Australia,” says Dr Petersen, also a Research Fellow at UniSA.
“The underrepresentation of women in tennis coaching not only limits diversity but also diminishes the visibility of role models who can inspire the next generation of players and coaches.
“Tennis Australia is looking to increase female participation, so in order to address this disparity we need to understand, from both male and female coaches, what motivates them, what support they need and what factors make them want to walk away.”
Published in the Journal of Sports Sciences, the study found women coaches reported significant barriers to their coaching careers, including limited opportunities for leadership roles, and experiences of exclusion within professional networks. Additionally, women coaches reported facing challenges in balancing work-life responsibilities.
The key finding, however, was a concerning trend of women being likely to leave the profession early: only 42% reported they were likely to commit to coaching for more than 10 years, compared to 55% of men.
“The retention of male coaches seemed to be strongly linked to their level of motivation and self-identity, and while women coaches also exhibited high levels of motivation, passion for the game and desire to make an impact, this may not be enough to overcome the significant barriers they face,” says Dr Petersen.
“This suggests that their ability to persist in coaching roles may be more dependent on external factors, such as systemic support and organisational change. It is therefore vital that sporting organisations work to dismantle gender norms and stereotypes and normalise women in coaching roles.
“Our findings show social support is key, so initiatives such as mentoring and networking would go a long way to creating a more supportive environment for both men and women coaches, and make more meaningful steps toward achieving gender balance in coaching," added Dr Petersen.
The paper, ‘Exploring the gendered experiences of Australian tennis coaches: A cross-sectional survey of women and men tennis coaches’ by Jasmine M. Petersen, Shane Pill, Kate Ridley, Deborah Agnew and Ivanka Prichard is published in the Journal of Sports Sciences. DOI: 10.1080/02640414.2024.2425906. This research was supported by Tennis Australia through an Australian Sports Commission federal government grant.
Camel milk udderly good alternative to traditional dairy
Edith Cowan University
In addition to being hypoallergenic, camel milk could potentially protect the gut from harmful enzymes and create healthier digestion.
New research from Edith Cowan University (ECU) has found that camel milk contains more naturally occurring bioactive peptides than its bovine counterpart.
“This is a good thing. We now know that camel milk has the potential to be hypoallergenic compared to cow’s milk, but also that it has higher potential to yield bioactive peptides which can have antimicrobial properties and anti-hypertensive properties,” PhD student Mr Manujaya Jayamanna Mohittige said.
“These bioactive peptides can selectively inhibit certain pathogens and, by doing so, create a healthy gut environment; they also have the potential to decrease the risk of developing cardiovascular disease in future.”
However, Mr Mohittige noted that the potency of these bioactive peptides still needed further testing.
The research has reiterated that camel milk did not contain major milk allergen β-lactoglobulin (β-Lg), providing dairy users with β-Lg allergy a viable alternative to cow’s milk. Existing literature highlights that lactose levels in camel milk are lower compared to those in cow’s milk.
Nutritional breakdown
According to previous research comparing the two beverages, cow’s milk typically contains between 85% and 87% water, with 3.8% to 5.5% fat, 2.9% to 3.5% protein, and 4.6% lactose.
In contrast, camel milk consists of 87% to 90% water. Its protein content varies from 2.15% to 4.90%, fat ranges from 1.2% to 4.5%, and lactose levels are between 3.5% and 4.5%.
Rising popularity
Currently, around 81% of global milk production is sourced from cows, with camel milk production in fifth place, behind buffaloes, goats and sheep. Camel milk accounts for only around 0.4% of current global milk production.
However, Mr Mohittige noted that Australia’s semi-arid climate and the existing camel population – albeit somewhat feral – made for a convincing argument to increase both production and consumption.
“Camel milk is gaining global attention, in part because of environmental conditions. Arid or semi-arid areas can be challenging for traditional cattle farming, but perfect for camels,” he added.
A number of camel dairies already exist in Australia; however, production volumes are still low. Compared to dairy cows, which can produce up to 28 litres of milk a day, camels typically produce around 5 litres a day.