It’s possible that I shall make an ass of myself. But in that case one can always get out of it with a little dialectic. I have, of course, so worded my proposition as to be right either way (K.Marx, Letter to F.Engels on the Indian Mutiny)
Tuesday, May 05, 2026
New research links prenatal chemical exposure to chromosomal abnormalities in adult sperm
Environmental health epidemiologist Melissa Perry and collaborators report new human evidence that prenatal and childhood exposure to persistent environmental chemicals may influence sperm chromosomal integrity decades later.
Environmental health epidemiologist Melissa Perry, Sc.D., MHS, MBA, (pictured) and a dedicated research team have conducted one of the first human studies suggesting that prenatal and childhood chemical exposure is associated with sperm abnormalities.
Credit: Photo by Rene Ayala/George Mason University College of Public Health
An estimated 7% of all men are affected by infertility. Multiple animal studies have indicated that exposure to persistent environmental chemicals in early life can negatively impact male reproductive health, and now a human study suggests the same. Environmental health epidemiologist Melissa Perry and a dedicated research team (see full list of authors below) have conducted one of the first human studies suggesting that prenatal and childhood chemical exposure is associated with sperm abnormalities.
Semen quality plays a critical role in reproductive outcomes, and healthy sperm carry 23 chromosomes (half of the human genetic complement). Researchers found extra chromosomes in the sperm of participants who were exposed to chemicals early in life. Poor sperm quality increases the risk of miscarriages and of chromosomal conditions such as Klinefelter syndrome.
“These findings provide new evidence that fetal and subsequent chemical exposures can have an enduring influence into adulthood on the genetic integrity of sperm,” said Perry, dean of the College of Public Health at George Mason University.
Perry and the team examined semen samples from men aged 22 to 24 whose mothers provided blood samples during pregnancy in 1986 and 1987. Persistent chemicals, including polychlorinated biphenyls (PCBs) and perfluoroalkyl substances (PFASs), were measured in the mothers’ blood. The same boys’ blood was tested for the chemicals again at ages 7 and 14. Decades later, the men provided semen samples, which were assessed in this study. Fetal and early-life exposure to higher levels of PCBs and PFASs (measured through maternal blood samples and blood drawn in childhood) was associated with sperm containing additional chromosomes in adulthood.
Normal sperm carry either an X chromosome (which produces a female embryo at fertilization) or a Y chromosome (which produces a male embryo). PCB concentration in blood samples was associated mainly with an additional Y chromosome, while PFAS exposure was consistently associated with both extra Y and extra X chromosomes.
Researchers theorize that PCB exposure could be from a maternal diet of contaminated seafood. PFAS exposure was likely due to environmental pollutants in food, water, and air.
“Chemical exposure is a public health issue, and there are strong associations with declining sperm concentration and quality. We really need to look toward policy solutions that prevent these chemicals from entering our environment and prevent related harms,” Perry said.
“In utero and childhood exposure to organochlorines and perfluorinated chemicals in relation to sperm aneuploidy in adulthood” was published in Environmental Health in May 2026. Contributing authors include Alessandra Meddis and Esben Budtz-Jørgensen from the University of Copenhagen, Heather A. Young and C. Rebecca Robbins from The George Washington University, Niels Jørgensen from Copenhagen University Hospital, Jónrit Halling and Maria Skaalum from The National Hospital of the Faroe Islands, Pál Weihe from the University of the Faroe Islands, and Philippe Grandjean from the University of Southern Denmark.
In utero and childhood exposure to organochlorines and perfluorinated chemicals in relation to sperm aneuploidy in adulthood
Article Publication Date
2-May-2026
Minor federal fines offer little deterrence to insurers for Medicare Advantage violations, study finds
Using newly obtained federal data, Brown health policy researchers provide one of the clearest looks at how federal regulators imposed Medicare Advantage penalties over a 13-year period, and where that oversight fell short.
A new study from health policy researchers at the Brown University School of Public Health suggests that while regulators have several tools at their disposal to penalize insurance plans that break the rules, they rely mostly on relatively small financial penalties that may do little to deter violations.
The study, published in JAMA Internal Medicine, raises questions about how effectively federal regulators — primarily the Centers for Medicare & Medicaid Services — are overseeing the fast-growing Medicare Advantage industry and protecting patients, according to researchers from Brown’s Center for Advancing Health Policy through Research.
“Nobody really knows how this regulatory authority has been imposing its different enforcement tools over the past decade,” said lead study author Zihan Chen, a Brown doctoral student in health services research. “The study was meant to address this gap and begin to really understand how the federal government is overseeing Medicare Advantage and using its enforcement actions as a tool to punish or deter violations.”
The data for the study were obtained through a Freedom of Information Act request and cover enforcement actions against Medicare Advantage insurers from 2010 through 2023, following violations such as inappropriately denying or delaying covered care.
Chen said the research team was interested in examining CMS oversight because Medicare Advantage, the private alternative to traditional Medicare, now covers more than half of all Medicare beneficiaries in the U.S. and represents a major share of federal health spending. There have also been various formal complaints about Medicare Advantage insurers from beneficiaries, such as aggressive marketing and burdensome prior authorizations.
As the primary regulator, CMS has several enforcement tools, including the ability to terminate contracts, suspend plans from enrolling new members or marketing their products, and issue financial penalties.
The new analysis found that in practice, CMS rarely used its most severe tools and instead relied overwhelmingly on fines. Of 844 enforcement actions identified over the 13-year period, 87% were monetary penalties, while suspensions accounted for about 12%. Contract terminations made up less than 1%.
Even when CMS imposed fines, they were relatively small, peaking at about $6.50 per enrollee in 2019; in most other years they were under $3 per enrollee. While these figures could add up to millions of dollars overall, they are small compared with the thousands of dollars insurers receive per patient each year. The researchers say these amounts appear to do little to change behavior.
“When fines are levied on plans, it almost means nothing compared to the profits the plans are making. This raises questions about meaningful consequences for violations,” said David Meyers, an associate professor of health services, policy and practice at Brown. “It supports a narrative out there that the government is a little bit asleep at the wheel when it comes to actually regulating the program that they have responsibility for.”
The findings also show that enforcement activity is uneven over time, often clustering around CMS audit cycles, suggesting that violations are more likely to be identified during scheduled reviews than during routine monitoring.
Other findings include differences in the types of plans facing enforcement. For example, contracts that were suspended or terminated tended to have lower quality ratings and served higher shares of low-income and minority beneficiaries, raising concerns that disruptions caused by enforcement may disproportionately affect vulnerable populations.
The research team makes clear in the study that enforcement actions by CMS were not rare. About 42% of Medicare Advantage contracts received at least one enforcement action during the study period and about one in five faced multiple actions, mainly the small fines.
Meyers says the reliance on smaller penalties reflects, in part, how CMS has designed its enforcement system.
“They have the authority to do more, but they’re choosing not to,” he said.
Other factors, such as limited resources, legal risks and resistance from insurers, may also shape how aggressively the agency enforces rules, Meyers added.
By assembling more than a decade of records, the new study is one of the first to examine long-term trends in Medicare Advantage enforcement.
“The Medicare Advantage program is so big and so important, but there’s very little enforcement action that seems to be going on to address challenges that have been widely reported,” Meyers said.
Federal Enforcement Actions Against Medicare Advantage Plans
Article Publication Date
4-May-2026
A toy model to understand how AI learns
A simple physics-inspired model sheds light on how neural networks learn, offering new clues to the surprising efficiency and stability of modern AI systems
Artificial intelligence systems based on neural networks — such as ChatGPT, Claude, DeepSeek or Gemini — are extraordinarily powerful, yet their internal workings remain largely a “black box”. To better understand how these systems produce their responses, a group of physicists at Harvard University has developed a simplified mathematical model of learning in neural networks that can be analysed exactly using the tools of statistical physics.
“Toy models”, like the one presented in the study just published in the Journal of Statistical Mechanics: Theory and Experiment (JSTAT), provide researchers with a controlled theoretical laboratory for investigating the fundamental mechanisms of neural networks. A deeper understanding of how these systems work could help design artificial intelligence systems that are more efficient and reliable, while also addressing some of the current challenges.
The laws of AI
It’s a bit like when Kepler described the laws governing the motion of the planets. “The way Newton’s laws of gravity were discovered was first by identifying scaling laws between the orbital periods of planets and their radii,” explains Alexander Atanasov, a PhD student in theoretical physics at Harvard University and first author of the new study. Kepler formulated his laws by observing planetary motion, without fully understanding the mechanisms behind it. Yet that work proved crucial: it later enabled Newton to uncover gravity, leading to a much deeper understanding of the universe.
In studies of deep learning—the branch of artificial intelligence based on neural networks—we may still be in a similar Keplerian phase. Today researchers have identified several empirical laws that describe how neural networks behave, but we still lack a kind of “theory of gravity” explaining why they behave that way.
Scientists, for example, know about the scaling laws. “We know that if we take a model and make it bigger, or give it more data, its performance increases,” explains Cengiz Pehlevan, Associate Professor of Applied Mathematics at Harvard University and senior author of the study. These laws make performance predictable, but they do not yet reveal the deeper mechanisms behind it. Simply scaling models up is not only inefficient—today’s AI systems consume enormous amounts of energy—but also does little to advance our understanding of how these systems actually work.
Neural networks as biological organisms
“Deep learning models are not algorithms written by hand as a set of rules. They’re not engineered manually,” explains Atanasov. “It’s much more similar to an organism being grown in a lab.”
Generative AI chatbots rely on neural networks, a technology that — in a very distant way — resembles the functioning of a biological brain. They consist of many small processing units, called artificial neurons, each performing simple operations but connected together in a complex network.
It is this networked structure that allows “intelligent” behaviour to emerge. Although we know the mathematical operations performed by each individual component, predicting and mechanistically explaining the behaviour of the system as a whole remains extremely difficult: as the number of components grows, the complexity increases rapidly.
A toy model
Since it is currently impossible to analyse a full-scale neural network with exact mathematical methods, Atanasov and his colleagues chose to work with a simplified model that still captures many key features of more complex systems.
“The model we’re studying is simple enough to be solved mathematically,” explains Jacob Zavatone-Veth, Junior Fellow at the Harvard Society of Fellows and co-author of the study. “At the same time, it reproduces several of the key phenomena seen in large neural networks.”
The toy model used in the study is ridge regression, a variant of linear regression.
Linear regression is a statistical method used to estimate relationships between variables. For example, if we know the height and weight of 100 people, we can use linear regression to identify a mathematical relationship between the two and estimate the height of a new person based only on their weight.
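The height-and-weight example can be written out in a few lines. The numbers below are made up purely for illustration, and the data are generated with an exact linear relationship so that the fit is easy to check:

```python
import numpy as np

# Made-up data: weight in kg, height in cm, for illustration only.
weights = np.array([55, 62, 70, 78, 85, 90], dtype=float)
heights = 130.0 + 0.55 * weights  # exact linear relation for this sketch

# Fit height = slope * weight + intercept by least squares.
slope, intercept = np.polyfit(weights, heights, deg=1)

# Estimate the height of a new person from their weight alone.
new_weight = 74.0
predicted_height = slope * new_weight + intercept
print(round(predicted_height, 1))  # 130 + 0.55 * 74 = 170.7
```

With noisy real-world data the recovered slope and intercept would only approximate the underlying relationship, which is exactly where questions of generalisation begin.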
The mystery of overfitting — and why it often doesn’t happen
Ridge regression is a type of regression that helps reduce the phenomenon known as overfitting. During training, a neural network — a bit like a very diligent but perhaps not particularly insightful student — may end up simply memorising the training data instead of learning patterns that allow it to generalise and make reliable predictions on new data.
Yet deep learning models often behave in a surprising way. “Despite being extremely large, these models can learn from the data without overfitting,” explains Atanasov, calling it “one of the great mysteries of deep learning.”
At first glance this seems counterintuitive: in theory, larger models should be more prone to overfitting. Instead, the scaling laws show that performance often improves as models grow larger and as more data are used during training.
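The contrast between a plain least-squares fit and a ridge fit can be seen in a minimal numerical sketch (this is a generic illustration of over-parameterised regression, not the model analysed in the paper; the penalty strength below is chosen by hand). With more coefficients than training points, ordinary least squares can reproduce the noisy training set exactly, while the ridge penalty forces the fit away from pure memorisation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 20, 200, 100  # more parameters than training points

w_true = rng.normal(size=d) / np.sqrt(d)               # ground-truth coefficients
X_tr = rng.normal(size=(n_train, d))
X_te = rng.normal(size=(n_test, d))
y_tr = X_tr @ w_true + 0.5 * rng.normal(size=n_train)  # noisy training labels
y_te = X_te @ w_true + 0.5 * rng.normal(size=n_test)

# Minimum-norm least squares: interpolates the training data when d > n.
w_ols = np.linalg.pinv(X_tr) @ y_tr

# Ridge regression: a penalty lam * ||w||^2 shrinks the solution.
lam = 10.0  # penalty strength, hand-picked for this sketch
w_ridge = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

def mse(X, y, w):
    """Mean squared prediction error of coefficients w on data (X, y)."""
    return float(np.mean((X @ w - y) ** 2))

print("OLS   train/test:", mse(X_tr, y_tr, w_ols), mse(X_te, y_te, w_ols))
print("ridge train/test:", mse(X_tr, y_tr, w_ridge), mse(X_te, y_te, w_ridge))
```

The least-squares fit drives the training error to essentially zero while its test error stays far from zero, which is the memorisation gap the article describes; the ridge fit accepts some training error in exchange for a more stable solution.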
New insights
The new study offers one possible piece of that explanation. According to the researchers, the ability of neural networks to learn without overfitting may arise from principles related to renormalization theory, a framework widely used in statistical physics.
To see why, it helps to consider the dimensionality of the data processed by modern AI systems. In the earlier example of linear regression we considered only two variables — height and weight. Real systems such as ChatGPT, however, operate in spaces with thousands or even millions of variables, making an exact mathematical analysis extremely difficult.
Here ideas from statistical physics become useful. In very high-dimensional data, small random variations — known as statistical fluctuations — naturally appear. Renormalization theory shows that many microscopic details can be effectively absorbed into a small number of parameters, meaning that even very complex systems can display relatively simple large-scale behaviour.
Using this framework and their simplified toy model, the researchers show how these high-dimensional fluctuations can actually stabilise learning rather than destabilise it.
“This is something we can understand by analysing simpler linear models,” explains Pehlevan, suggesting that the same mechanism may explain why current neural networks avoid overfitting even when they are highly over-parameterised.
The simplified model may also serve another purpose. As Zavatone-Veth notes, it could be a kind of baseline for understanding how learning might behave in very high-dimensional systems. By studying a model that is simple enough to analyse mathematically, researchers can identify which aspects of learning are likely to be generic—that is, expected to appear across many different neural networks—and which instead depend on the details of a specific model. In this sense, studies like this may help clarify some of the more fundamental principles underlying learning in complex systems.
Method of Research
Computational simulation/modeling
Article Title
Scaling and renormalization in high-dimensional regression
Article Publication Date
5-May-2026
New AI model reads DNA sequences to reconstruct ancestry
Borrowing from chatbots, researchers create first language model for population genetics
Researchers at the University of Oregon have developed an artificial intelligence tool that can read genetic code the way large language models like ChatGPT read text. Scanning the genome for biological mutation patterns, the computer model traces pairs of genes back in time to their last common ancestor.
It’s the first language model designed for population genetics, said Andrew Kern, a computational biologist in the UO College of Arts and Sciences. As described in a paper published April 10 in the Proceedings of the National Academy of Sciences, the AI tool offers scientists a fast and flexible alternative to classical methods for reconstructing evolutionary history.
In practice, it can help researchers like Kern understand when disease-resistance genes emerged in a population, for example, or when species evolved key traits.
“Advances in generative AI and the architectures behind them are potentially useful to a number of fields outside a chatbot,” said Kern, an Evergreen professor of biology. “We’re borrowing strengths from the world of AI and applying them in this different context that’s largely been untapped.”
Genomes are often compared to a written language, with combinations of DNA’s four-letter alphabet — A, T, C and G — forming the basis for genes and chromosomes. Kern and his lab are most interested in what’s misspelled, which scientists call mutations: changes in DNA sequences, like swapped or missing letters, that accumulate over time as part of evolution.
Often harmless, mutations can be passed down from generation to generation, leaving a trail of breadcrumbs for tracing ancestral relationships.
Traditional methods based on math and statistics are the gold standard for translating mutations into ancestry. They’re difficult to beat in most cases, said Kevin Korfmann, lead author of the study and former postdoctoral researcher at the UO. But those classical probabilistic approaches can be slow and struggle with large or incomplete genomic datasets, he added.
So, the researchers looked to AI to efficiently interpret the language of life by modifying a GPT-2 model, the older machine learning architecture behind ChatGPT. But instead of being trained on large volumes of English text, the language model was trained on simulations of genetic evolution across different species — including bacteria, rodents, mosquitoes and primates — to learn and recognize mutation patterns.
“We can’t repeat evolution, so one of the key workflows we have is developing simulations,” Korfmann said. “The simulations mimic evolutionary processes, and then we use the outcomes as training data for our deep learning models.”
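To make the “language” framing concrete, here is one common way a DNA string can be turned into model input: chop it into overlapping k-mers and map each to an integer token ID, the same kind of sequence a GPT-style model consumes. This is a generic illustration, not the paper’s actual encoding, which may differ:

```python
from itertools import product

K = 3  # each token is an overlapping 3-letter "word" of DNA
ALPHABET = "ACGT"

# Fixed vocabulary: every possible 3-mer gets an integer ID (4^3 = 64 entries).
VOCAB = {"".join(kmer): i for i, kmer in enumerate(product(ALPHABET, repeat=K))}

def tokenize(seq: str, k: int = K) -> list[int]:
    """Slide a window of width k over the sequence, one letter at a time."""
    return [VOCAB[seq[i:i + k]] for i in range(len(seq) - k + 1)]

tokens = tokenize("ACGTAC")
print(tokens)  # IDs for the four overlapping 3-mers: ACG, CGT, GTA, TAC
```

A model trained on such token streams can then learn which mutation patterns tend to follow which, much as a text model learns which words tend to follow which.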
In general, stretches of DNA with many mutations likely trace back to a distant common ancestor, whereas those with few mutations are likely to share a more recent ancestor. This helps explain why chimpanzees are considered humans’ closest living relatives, with similar DNA, while sea sponges are the most distant, having diverged genetically more than 700 million years ago.
Based on those mutation patterns and other biological principles, the AI model can predict when gene pairs last shared a common ancestor, known as the “coalescence time.”
In tests, the tool performed as well as state-of-the-art statistical methods, which was surprising to the research team.
“You never really know what’s going to work when you’re essentially borrowing techniques from a totally different world and applying them to a new problem,” Kern said. “But this was a case where things worked really well.”
The computer model was also dramatically faster. While traditional methods can take hours or even days to decode a single mosquito chromosome, the new approach can do it in minutes. That efficiency is especially beneficial for scientists handling large amounts of genetic sequence data.
“Compared to classical inferential approaches, the AI tool doesn’t have to reason about every mutation individually,” Korfmann said. “It just reads the patterns because all of the expensive statistical work was done up front, during training, which sidesteps the bottleneck.”
The model’s simulation-based training also enables scientists to use DNA datasets that are incomplete or missing genetic code — an issue Kern frequently faces when working with mosquito genetic databases for his research on malaria transmission.
That versatility comes at a crucial moment for malaria control, Kern said. For decades, insecticides have been a cornerstone of controlling malaria-spreading mosquitoes. But evolution, as Kern puts it, “did its thing.”
“Insecticide resistance is being observed in all of these mosquito populations today,” he said. “A major challenge in preventing the spread of malaria has been understanding the evolution of insecticide resistance. Now, we can go in with our AI model, ask how long ago these resistance genes arose in the population, and learn about the evolutionary history of this critical carrier of malaria.”
Looking ahead, Kern and Korfmann aim to advance the biological model beyond tracing shared ancestry between two lineages towards reconstructing full genealogical trees across multiple lineages. Some traditional methods can already do this, but Kern said they’d like to chase that goal from a machine-learning angle.
“There’s so much going on in the machine learning field that we haven’t applied yet in our field,” Korfmann said. “There’s tons of translational work to do to get these novel algorithms working in biology.”
CHICAGO, IL (May 5, 2026) — A swarm of tiny, shape-changing, all-metal robots might someday deliver drugs and capture biopsy samples painlessly and then safely dissolve without the need for extraction, according to a study to be presented today at Digestive Disease Week® (DDW) 2026.
These first-of-their-kind microrobots combined durability and safety in testing on mice, said study co-lead author Ling Li, MD, instructor, gastroenterology and hepatology at Johns Hopkins University School of Medicine.
“Existing biodegradable microrobots are made of materials such as polymers or hydrogels that biodegrade, but they lack the strength and rigidity that enable our all-metal microrobots to penetrate and cut tissue, while still leaving no trace behind when their work is done,” Dr. Li said.
Microrobots may someday replace some conventional, uncomfortable, invasive endoscopy procedures: the patient would simply swallow a capsule. Thousands of the devices packed into a capsule could journey into the body, where, like tiny transformers, the pre-programmed microrobots shift their shapes at their destination, forming minuscule grippers that collect tissue samples in areas difficult to reach with traditional methods.
They can also transform into microinjectors to deliver medications, serving as an alternative to injection or intravenous infusion for delivering biologics such as anti-tumor necrosis factor (TNF) agents and glucagon-like peptide-1 (GLP-1) medications. By injecting them under the mucosal lining of the gastrointestinal tract and by targeting particular locations in the body rather than distributing medication broadly, this approach could improve how medications are absorbed. It also could eliminate the need for frequent injections or clinic visits for treatment of gastrointestinal conditions such as inflammatory bowel disease, bleeding and cancer.
By altering the thickness of the metal layers, the research team can control the tension between layers and how they fold to form two-dimensional and three-dimensional shapes.
“The variability of the layers’ thickness and use of other materials determines how long the metals last before they begin to biodegrade,” said co-lead author Wangqu Liu, PhD candidate at Johns Hopkins University Whiting School of Engineering, who designed and fabricated the microrobots. “We can control the degradation rate from minutes to months depending on the application.”
The researchers demonstrated their microrobots’ ability to penetrate the inner lining of the intestine for potential drug delivery in the gastrointestinal tracts of mice. They also showed the devices’ ability to morph as programmed and insert their tips into the layer of tissue just beneath the surface of the intestine without punching holes or causing other damage.
The research team developed a novel liquid-free manufacturing process that enabled them to create a new class of stronger microrobots composed of water-soluble metals and metal oxides, which give them their biodegradable properties. The process uses only a tiny amount of metal.
“It’s typically only a few micrograms, and it’s engineered to stay within established safety limits,” Liu said.
“We see these all-metal, biodegradable devices as an important advancement in the effort to realize the full potential of medical microrobots,” Dr. Li said. “We don’t have to choose between strength and safety. We can have both.”
The researchers acknowledge the Gracias laboratory at the Johns Hopkins University Whiting School of Engineering for microrobot design and fabrication, and the Selaru laboratory at the Division of Gastroenterology and Hepatology of Johns Hopkins School of Medicine for animal testing and clinical applications for this work.
DDW Presentation Details
Dr. Ling Li will present data from the study, “Biodegradable shape-morphing microrobots for safe biopsy or drug delivery in the gastrointestinal system,” abstract Tu2176, at 12:30 p.m. CDT, Tuesday, May 5. For more information about featured studies and a schedule of availability for featured researchers, please visit www.ddw.org/press.
###
Digestive Disease Week® (DDW) is the largest international gathering of physicians, researchers, and academics in the fields of gastroenterology, hepatology, endoscopy, and gastrointestinal surgery. Jointly sponsored by the American Association for the Study of Liver Diseases (AASLD), the American Gastroenterological Association (AGA), the American Society for Gastrointestinal Endoscopy (ASGE) and the Society for Surgery of the Alimentary Tract (SSAT), DDW is an in-person and online meeting from May 2-5, 2026. The meeting showcases more than 6,000 abstracts and more than 1,000 lectures on the latest advances in GI research, medicine and technology. More information can be found at www.ddw.org
‘They weren’t burned by accident’: burned stone, child’s bones, and lost jewelry could reveal prehistoric mining camp high in the Pyrenees
Archaeologists uncover possible evidence of ancient copper smelting spanning more than 2,000 years in a mountain cave more than 2,000 meters above sea level
High in the eastern Pyrenees, archaeologists are revealing the secrets of a prehistoric cave full of hearths containing fragments of green rock that could represent early copper mining. People visited this site for well-planned, well-supplied trips over a span of two thousand years, overturning previous assumptions that prehistoric peoples didn’t spend long periods at high altitude. The discovery of a child’s finger bone and a baby tooth suggests that, after more excavations, we may find that it was also a burial site.
“For a long time, high-mountain environments were seen as marginal, places prehistoric communities passed through occasionally,” said Prof Carlos Tornero of the Catalan Institute of Human Paleoecology and Social Evolution, lead author of the article in Frontiers in Environmental Archaeology. “But we found a really rich archaeological sequence, including multiple combustion structures and a very large number of green mineral fragments. We can’t say exactly how long people stayed each time, but the repeated use of the space and the density of remains suggest occupations that were short to medium in duration, but happening again and again over long periods of time.”
Burning questions
Cave 338 is found at 2,235 m above sea level in the Freser Valley. The scientists excavated an area of 6 m² at its entrance, identifying four layers of occupation. The first, most recent layer was thin, showing the cave was not frequently used at that time, and contained some artefacts from historical periods. The fourth, oldest layer contains only charcoal fragments, dated to 6,000 years old.
The researchers hit the jackpot in the second and third layers of the excavation: a total of 23 hearths, containing many crushed, burned green mineral fragments. In-depth material analysis to confirm its identity is underway, but the fragments resemble malachite, which can be processed in this way to extract copper. Cave 338 looks like an unexpectedly early high-altitude mining camp.
“Many of these fragments are thermally altered, while other materials in the cave are not, which clearly suggests that fire played an important role in their processing and that there was a deliberate intention behind it,” said Dr Julia Montes-Landa of the University of Granada, co-author. “In other words, they weren’t burned by accident.”
The hearths cut across each other, indicating that the visitors reused this space frequently, but are still distinct, which suggests that those visits were separated by plenty of time. Radiocarbon dating puts the hearth found in the second layer at about 3,000 years old, while the hearths in the third layer are around 5,500 to 4,000 years old.
Secrets of the mountains
The team also found human remains in the third layer — a finger bone and a baby tooth belonging to at least one child, about 11 years old — which could mean there are burials deeper within the cave. However, there isn’t enough evidence to suggest a cause of death or determine if the two bones belonged to the same child. Jewelry found in the second layer offered more information.
“We recovered two pendants: one made from a shell and another from a brown bear tooth,” said Tornero. “They come from prehistoric contexts, most likely around the second millennium BC. The shell pendant is interesting because it has parallels in other sites in Catalonia, which suggests shared traditions or connections between different communities. The bear tooth pendant is much less common. That might point to something more specific or symbolic, possibly linked to the local environment.”
Cave 338 wasn’t a full-time home, but the people who came here found their trips valuable enough to keep returning for millennia. The researchers still have a lot of questions about those trips which they hope to answer with future research. For example, further excavation will help us understand more about how and when humans used the cave. They also want to confirm the exact identity of the green mineral and find out where it came from.
“The identification of the green mineral as malachite is still preliminary,” explained Tornero. “The research ongoing by the University of Granada and the Autonomous University of Barcelona will provide final answers shortly. Also, the excavation hasn’t yet reached the full depth of the site, so the sequence is not completely documented. This summer we will continue the archaeological work.”
Archaeological excavation works at Cova 338 from the inside. Credit: IPHES-CERCA.
Detail of the pendant made of Glycymeris sp. recovered during the excavation works at Cova 338. Credit: IPHES-CERCA.
Pendant made from a bear incisor recovered during the excavation works at Cova 338. Credit: IPHES-CERCA.
The site, located in the Núria Valley, documents recurrent human occupations spanning more than 5,000 years and provides some of the earliest evidence of copper-rich mineral exploitation in Western Europe
The study, led by the Universitat Autònoma de Barcelona and IPHES-CERCA and published in Frontiers in Environmental Archaeology, challenges the traditional view of high mountain areas as marginal
An international research team led by the Universitat Autònoma de Barcelona (UAB) and the Institut Català de Paleoecologia Humana i Evolució Social (IPHES-CERCA) has documented the highest-altitude prehistoric cave with evidence of intense human occupation known to date in the Pyrenees. The site, known as Cova 338, is located at 2,235 meters above sea level in the Núria Valley (Queralbs, Ripollès, Girona) and currently represents the most significant high-mountain prehistoric site documented in the range.
The results show that the cave was repeatedly occupied between the 5th millennium BCE and the end of the 1st millennium BCE, providing new evidence on the exploitation of high-mountain resources in prehistoric times and challenging the traditional idea that these areas were used only sporadically or marginally. Dating indicates that these occupations occurred in several distinct phases, separated by periods of abandonment, suggesting a planned and recurrent use of this space.
This is the main conclusion of the article published in Frontiers in Environmental Archaeology, led by Carlos Tornero, professor in the Department of Prehistory at the UAB and researcher at IPHES-CERCA, with the participation of researchers from IPHES-CERCA, the Universitat Rovira i Virgili, the University of Granada, Pompeu Fabra University, and the University of the Balearic Islands, among other institutions.
Intense and organized occupation in a high-mountain environment
For decades, archaeological research has interpreted areas above 2,000 meters in altitude as marginal territories, occupied only occasionally. Cova 338 breaks with this model.
Extensive excavations carried out between 2021 and 2023 have revealed “an exceptional archaeological sequence, including numerous combustion structures, faunal remains, ceramic fragments, and a remarkable assemblage of green minerals, likely malachite, a copper-rich mineral”, explains Carlos Tornero. “For the first time in the Pyrenees, high-mountain prehistoric occupations of significant intensity have been documented, characterized by repeated activities and the direct exploitation of mineral resources within the cave.”
Among the recovered materials are also two pendants: one made from a marine shell (Glycymeris) and another from a brown bear tooth, evidencing personal ornamentation practices. The former has parallels at other Catalan sites, while the latter is much rarer and possibly linked to a specific symbolic meaning.
“Cova 338 forces us to rethink the role of high mountain environments in Pyrenean prehistoric societies”, highlights Carlos Tornero. “For a long time, these spaces were assumed to be marginal. What we document here is recurrent occupation, with complex activities and a clear exploitation of mineral resources.”
The evidence suggests that mineral fragments were brought into the cave and subsequently fragmented or processed inside, indicating systematic exploitation of copper-rich minerals in a high-mountain environment throughout the Late Neolithic and the Bronze Age. These data place Cova 338 among the earliest known examples of this type of activity in Western Europe.
Spatial analysis of the site shows a clear internal organization of activities, with differentiated structures and areas. Researchers interpret the cave as a logistical site integrated within well-structured seasonal mobility systems, where human groups returned recurrently to carry out specific tasks.
“The mountain was not a barrier, but an active place within the economic and territorial organization of prehistoric communities”, notes Eudald Carbonell, researcher at IPHES-CERCA and co-author of the study.
A research project under extreme conditions
The research is part of the ARRELS project, a program promoted by the Ministry for Culture of the Government of Catalonia and led by the UAB and IPHES-CERCA, focused on studying the prehistoric roots of human mobility and occupation in the Upper Ripollès region.
Excavations at Cova 338 have posed a major logistical challenge, as access to the cave is only possible on foot from the Núria Valley, with no motorized support allowed. This has required all materials and sediments generated during the digs to be transported manually.
“Conducting an archaeological excavation to current scientific standards under these conditions is extraordinarily demanding”, explains Tornero. The work incorporated high-resolution methodologies, including 3D recording of all materials, systematic sediment sampling, and techniques such as washing and flotation, which allow even the smallest remains to be recovered and provide highly detailed information on the activities carried out in the cave.
Given its scientific importance and excellent state of preservation, the site has been protected and access restricted to ensure the conservation of the deposits and facilitate future research.
The work has also been made possible thanks to the logistical and institutional support of the Queralbs Town Council and the Ter and Freser Headwaters Natural Park, which have facilitated fieldwork in this high-mountain environment.
A key reference for European prehistory
Researchers consider Cova 338 to be a key reference for understanding human occupation of the Pyrenean high mountains and the exploitation of their resources during recent prehistory.
“This site demonstrates that the Pyrenees were not a marginal territory for prehistoric communities, but a space fully integrated into their mobility strategies and territorial exploitation”, concludes Carlos Tornero.
The results open new lines of research into the role of alpine environments in prehistoric societies and the earliest forms of mineral resource exploitation in high-mountain contexts.
Funding source: This research is funded through the project “Arrels prehistòriques de la transhumància a l’Alt Ripollès: projecte arqueològic 2022–2025” (code CLT009/22/00060; AGAUR-DGPC, Departament de Cultura, Generalitat de Catalunya), led by Carlos Tornero and Eudald Carbonell, and has had the logistical and institutional support of the Queralbs Town Council and the Ter and Freser Headwaters Natural Park, which have facilitated the excavations in this high-mountain environment.
Journal
Frontiers in Environmental Archaeology
Article Title
Beyond 2,000 meters, first evidence of intense prehistoric occupation in the Pyrenees