Monday, April 14, 2025

Archaeologists measured and compared the size of 50,000 ancient houses to learn about the history of inequality -- they found that it’s not inevitable




Field Museum


Three excavated Classic period (ca. 550–750 CE) houses at El Palmillo (Valley of Oaxaca, Mexico). Bottom: the largest and most elaborate residential structure (Platform 11). Top right: a less elaborate residence (Structure 35). Top Left: a smaller residence (Terrace 925).


Credit: Linda Nicholas and Gary Feinman.




We’re living in a period where the gap between rich and poor is dramatic, and it’s continuing to widen. But inequality is nothing new. In a new study published in the journal PNAS, researchers compared house size distributions from more than 1,000 sites around the world, covering the last 10,000 years. They found that while inequality is widespread throughout human history, it’s not inevitable, nor is it expressed to the same degree at every place and time.

“This paper is part of a larger study in which over 50,000 houses have been analyzed to use differentials in house sizes as a metric for wealth inequality over time, on six continents,” says Gary Feinman, the MacArthur Curator of Mesoamerican, Central American, and East Asian Anthropology at the Field Museum in Chicago, and the paper’s lead author. “This is an unprecedented data set in archaeology, and it allows us to empirically and systematically look at patterns of inequality over time.”

The paper Feinman led delves into a comparison of the extent of inequality at different localities (mostly archaeological) to figure out how things changed over time. “While there is not one unilinear sequence of change in wealth inequality over time, there are interpretable patterns and trends that cross-cut time and space. What we see is not just noise or chaos,” says Feinman.

The variation that the researchers found challenges long-held views across history and the social sciences that we can use ancient Greece and Rome, or the medieval history of Europe, as generalized representations of humanity’s past. “There are a lot of things that have been presumed for centuries — for example, that inequality rises inevitably,” says Feinman. “The traditional thinking expects that once you get larger societies with formal leaders, or once you have farming, inequality is going to go way up. These ideas have been held for hundreds of years, and what we find is that it’s more complicated than that — high degrees of inequality are not inevitable in large societies. There are factors that may make it easier for inequality to emerge or increase to high degrees, but these factors can be leveled off or modified by different human decisions and institutions.”

“Variability in the sizes of houses may not be the full extent of wealth differences, but it's a consistent indicator of the degree of economic inequality that can be applied across time and space,” says Feinman. “I know from my own archaeological fieldwork in the Valley of Oaxaca, Mexico, that almost always, the larger the house, the more elaborate the house, with special features and thicker walls.”

To quantify and compare economic inequality in different places at different points in history, the researchers used the distributions of house sizes at more than 1,000 settlements to calculate a Gini coefficient for each site, then conducted statistical analyses examining the relationship between the amount of inequality in a society and that society's political complexity. The Gini coefficient is a commonly employed metric of inequality that ranges from 0 (complete equality) to 1 (maximal inequality). The coefficients for each locality were then compared across time and space to examine trends in inequality and assess how it varied in relation to population, political organization, and other potential causal factors.
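As a rough illustration of the metric, here is a minimal sketch of how a Gini coefficient can be computed from house floor areas. The house sizes below are invented for illustration; the study's actual estimation procedures are more involved.

```python
def gini(values):
    """Gini coefficient of positive values (e.g., house floor areas in square meters).
    Returns 0 for complete equality, approaching 1 for maximal inequality."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    # Rank-weighted sum formulation of the Gini coefficient
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([80, 80, 80, 80]))             # identical houses -> 0.0
print(round(gini([20, 20, 20, 400]), 2))  # one dominant residence -> 0.62
```

A settlement where one platform dwarfs the surrounding residences, as at El Palmillo, yields a much higher coefficient than one of uniform houses.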

The investigators then looked at these trends in the Gini values in the context of the size of the sites that were compared and how complex the hierarchical structure of governance was. They found that even while populations have risen over the years, inequality hasn’t always increased in a uniform way.

“The measure of inequality we found in these sites is quite variable, which suggests that there’s not one homogenized pattern,” says Feinman. In other words, contrary to traditional scholarly thinking, there’s no one-size-fits-all explanation for why societies become economically unequal.

“Human choice and governance and cooperation have played a role in damping down inequality at certain times and places, and that is what accounts for this variability in time and space,” says Feinman. “And if inequality isn't inevitable when human aggregations get larger and governmental structures get more hierarchical, then there is a suite of implications for how we view the present and how we look at the past. Although history has shown us that elements of technology and population growth can raise the potential for inequality at certain times and places, that potential is not always realized, as people have implemented leveling mechanisms and systems of governance that mute that potential. The often-expressed views that certain economic, demographic, or technological conditions or factors make great wealth disparities inevitable simply are not borne out by our global past.”

###

 

New archaeological database reveals links between housing and inequality in ancient world





University of Colorado at Boulder



If the archaeological record has been correctly interpreted, stone alignments in Tanzania’s Olduvai Gorge are remnants of shelters built 1.7 million years ago by Homo habilis, an extinct species representing one of the earliest branches of humanity’s family tree.

Unambiguous archaeological evidence of housing dates to more than 20,000 years ago—a time when large swaths of North America, Europe and Asia were covered in ice and humans had only recently begun living in settlements.

Between that time and the dawn of industrialization, the archaeological record is rich not only with evidence of settled life represented by housing, but also with evidence of inequality.

In a PNAS Special Feature published today, scholars from around the world draw from a groundbreaking archaeological database that collects more than 55,000 housing floor area measurements from sites spanning the globe—data that support research demonstrating various correlations between housing size and inequality.

“Archaeologists have been interested in the study of inequality for a long time,” explains Scott Ortman, a University of Colorado Boulder associate professor of anthropology who partnered with colleagues Amy Bogaard of the University of Oxford and Timothy Kohler of Washington State University to bring together the PNAS Special Feature. “For a long time, studies have focused on the emergence of inequality in the past, and while some of the papers in the special feature address those issues, others also consider the dynamics of inequality in more general terms.” 

“They use this information to identify the fundamental drivers of economic inequality using a different way of thinking about the archaeological record—more thinking about it as a compendium of human experience. It’s a new approach to doing archaeology.”

Patterns of inequality

Ortman, Bogaard and Kohler also are co-principal investigators on the Global Dynamics of Inequality (GINI) Project funded by the National Science Foundation and housed in the CU Boulder Center for Collaborative Synthesis in Archaeology in the Institute of Behavioral Science to create the database of housing floor area measurements from sites around the world. 

Scholars then examined patterns of inequality shown in the data and studied them in the context of other measures of economic productivity, social stability and conflict to illuminate basic social consequences of inequality in human society, Ortman explains.

“What we did was we crowdsourced, in a sense,” Ortman says. “We put out a request for information from archaeologists working around the world, who knew about the archaeological record of housing in different parts of the world, and got them together to design a database to capture what was available from ancient houses in societies all over the world.”

Undergraduate and graduate research assistants also helped create the database, which contains 55,000 housing units and counting, from sites as renowned as Pompeii and Herculaneum to sites across North and South America, Asia, Europe and Africa. 

“By no stretch of the imagination is it all of the data that archaeologists have ever collected, but we really did make an effort to sample the world and pull together most of the readily available information from excavations, from remote sensing, from LiDAR,” Ortman says.

The housing represented in the data spans non-industrial society from about 12,000 years ago to the recent past, generally ending with industrialization. The collected data then served as a foundation for 10 papers in the PNAS Special Feature, which focus on the archaeology of inequality as evidenced in housing.

Housing similarities

In their introduction to the Special Feature, Ortman, Kohler and Bogaard note that “economic inequality, especially as it relates to inclusive and sustainable social development, represents a primary global challenge of our time and a key research topic for archaeology.”

“It is also deeply linked to two other significant challenges. The first is climate change. This threatens to widen economic gaps within and between nations, and some evidence from prehistory associates high levels of inequality with a lack of resilience to climatic perturbations. The second is stability of governance. Clear and robust evidence from two dozen democracies over the last 25 years links high economic inequality to political polarization, distrust of institutions and weakening democratic norms. Clearly, if maintenance of democratic systems is important to us, we must care about the degree of wealth inequality in society.”

Archaeological evidence demonstrates a long prehistory of inequality in income and wealth, Ortman and his colleagues note, and allows researchers to study the fundamental drivers of those inequalities. The research in the Special Feature takes advantage of the fact “that residences dating to the same chronological period, and from the same settlements or regions, will be subject to very similar climatic, environmental, technological and cultural constraints and opportunities.”

Several papers in the Special Feature address the relationship between economic growth and inequality, Ortman says. “They’re thinking about not just the typical size of houses in a society, but the rates of change in the sizes of houses from one time step to the next.

“One thing we’ve also done (with the database) is arrange houses from many parts of the world in regional chronological sequences—how the real estate sector of past societies changed over time.”

The papers in the Special Feature cover topics including the effects of land use and war on housing disparities and the relationship between housing disparities and how long housing sites are occupied. A study that Ortman led with colleagues from around the world compared archaeological and contemporary real estate data, finding that in preindustrial societies, variation in residential building area is proportional to income inequality and provides a conservative estimator of wealth inequality. 

“Our research shows that high wealth inequality could become entrenched where ecological and political conditions permitted,” Bogaard says. “The emergence of high wealth inequality wasn’t an inevitable result of farming. It also wasn’t a simple function of either environmental or institutional conditions. It emerged where land became a scarce resource that could be monopolized. At the same time, our study reveals how some societies avoided the extremes of inequality through their governance practices.”
The researchers argue that “the archaeological record also shows that the most reliable way to promote equitable economic development is through policies and institutions that reduce the covariance of current household productivity with productivity growth.”

GINI Project data, as well as the analysis program developed for them, will be openly available via the Digital Archaeological Record.

 

The gut health benefits of sauerkraut



University of California - Davis

A new UC Davis study shows that having fermented food like sauerkraut could be good for gut health.


Credit: Hector Amezcua / UC Davis




Is sauerkraut more than just a tangy topping? A new University of California, Davis, study published in Applied and Environmental Microbiology suggests that the fermented cabbage could help protect your gut, an essential part of overall health that supports digestion and guards against illness.

Authors Maria Marco, professor with the Department of Food Science and Technology, and Lei Wei, a postdoctoral researcher in Marco’s lab, looked at what happens during fermentation — specifically, how the metabolites in sauerkraut compared to those in raw cabbage.

Researchers tested whether sauerkraut’s nutrients could help protect intestinal cells from inflammation-related damage. The study compared raw cabbage, sauerkraut and the liquid brine left behind from the fermentation process. The sauerkraut samples included both store-bought products and fermented cabbage made in the lab.

They found that sauerkraut helped maintain the integrity of intestinal cells, while raw cabbage and brine did not. Marco said that there was also no noticeable difference between grocery store sauerkraut and the lab-made version.

“Some of the metabolites we find in the sauerkraut are the same kind of metabolites we're finding to be made by the gut microbiome, so that gives us a little more confidence that this connection we found between the metabolites in sauerkraut and good gut health makes sense,” Marco said. “It doesn't matter, in a way, if we make sauerkraut at home or we buy it from the store; both kinds of sauerkraut seemed to protect gut function.”

Digestive benefits

Chemical analysis shows that fermentation changes cabbage’s nutritional profile, increasing beneficial metabolites such as lactic acid, amino acids and plant-based chemicals linked to gut health. These changes may explain why fermented foods are often associated with digestive benefits.

Marco said she and Wei identified hundreds of different metabolites produced during fermentation and are now working to determine which ones play the biggest role in supporting long-term gut health.

“Along with eating more fiber and fresh fruits and vegetables, even if we have just a regular serving of sauerkraut, maybe putting these things more into our diet, we'll find that can help us in the long run against inflammation, for example, and make our digestive tract more resilient when we have a disturbance,” Marco said.

Fermented vegetables and foods are already a staple in many diets, but this research suggests they could be more than just a flavorful side dish. Marco said the next step is to conduct human trials to see if the gut-protective metabolites found in sauerkraut can have the same positive effects when included in everyday diets, as was shown in the lab.

“A little bit of sauerkraut could go a long way,” she said. “We should be thinking about including these fermented foods in our regular diets and not just as a side on our hot dogs.”

This research was funded by a grant from the California Department of Food and Agriculture, as well as a Jastro Shields Graduate Research Award from the UC Davis College of Agricultural and Environmental Sciences.

 

Study explores how food manufacturers respond to state regulations





University of Illinois College of Agricultural, Consumer and Environmental Sciences

Maria Kalaitzandonakes (left) and William Ridley.

Credit: College of ACES




When West Virginia recently banned seven artificial food dyes in products to be sold within its borders, it joined an increasing number of individual U.S. states issuing their own regulations about food manufacturing practices, allowable ingredients, or product labeling. Consequently, food manufacturers must decide how to deal with different requirements in multiple markets. A new study from the University of Illinois Urbana-Champaign examines the various ways manufacturers respond to state regulations and what drives their choices.

“States have a lot of power constitutionally to protect the health and wellbeing of their citizens; however, a state-level food regulation approach can lead to a complex patchwork of regulation. That creates challenges for food manufacturers that sell their products across state lines. We wanted to examine how firms adhere to different rules across markets,” said Maria Kalaitzandonakes, assistant professor in the Department of Agricultural and Consumer Economics (ACE), part of the College of Agricultural, Consumer and Environmental Sciences at Illinois.

Kalaitzandonakes and co-author William Ridley, assistant professor in ACE, developed a modeling framework outlining potential responses, then consulted with food manufacturers to ensure their model aligned with actions producers were actually taking to address policy changes.

“Food manufacturing is an important industry in Illinois and across the country,” Ridley said. “After developing our model, we asked several food manufacturers about how they were responding to a variety of state laws, and we were excited to see that our model did a good job explaining firm strategies.”

The researchers identified four options food manufacturers selected in response to state food regulation: First, manufacturers can update their product to comply with the strictest standard and sell the new version across markets. Second, they can maintain two separate versions of the product — one sold to the regulated state or region and one for the rest of the country. Third, they can remove their product from the stricter market altogether and sell their original product in the remaining states. Finally, they may ignore the regulations and continue selling the original product with the potential for legal consequences. 

Which response a firm chooses will depend on a number of factors, including the cost of compliance, the size of the market of the regulating state, the cost and likelihood of penalties, and consequences for consumer demand. The researchers applied their model to three different case studies, examining manufacturers’ responses in each scenario.
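That trade-off can be sketched as a toy calculation. This is not the authors' model, and every revenue and cost figure below is invented; it simply shows the logic of a firm picking whichever of the four responses leaves it with the highest expected profit.

```python
# Toy illustration (not the authors' model): all numbers are invented.
def best_response(reformulate_cost, dual_line_cost, regulated_share,
                  penalty, penalty_prob, revenue=100.0):
    """Return the strategy with the highest expected profit."""
    options = {
        "reformulate nationwide": revenue - reformulate_cost,
        "two product versions": revenue - dual_line_cost,
        "exit regulated market": revenue * (1 - regulated_share),
        "ignore regulation": revenue - penalty_prob * penalty,
    }
    return max(options, key=options.get)

# Cheap reformulation, large regulated market, real enforcement:
print(best_response(reformulate_cost=5, dual_line_cost=20,
                    regulated_share=0.15, penalty=50, penalty_prob=0.8))
# -> reformulate nationwide

# Costly reformulation, tiny market, minimal penalties:
print(best_response(reformulate_cost=30, dual_line_cost=40,
                    regulated_share=0.04, penalty=5, penalty_prob=0.2))
# -> ignore regulation
```

The two invented parameter sets mirror the qualitative logic of the case studies: a large market with meaningful enforcement favors nationwide compliance, while a small market with weak penalties can make ignoring the rule the profit-maximizing choice.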

In 2014, Vermont implemented a law that required mandatory labeling of genetically modified ingredients. Most firms created one version of their product, which met Vermont’s requirements, to be sold across the country. However, because Vermont is a smaller market, some producers chose to exit the state temporarily, until they had made production changes to comply with the law.

In 2019, Illinois enacted a law requiring allergen labeling for products containing sesame. Because the consequences for non-compliance with the law were minimal, some firms ignored the requirement.

The third case study addressed California’s recent ban of four food additives, which was enacted in 2023 and will take full effect in 2027. California’s market size makes stopping sales to the state unlikely for most firms. Keeping separate production and distribution lines would be complicated and costly. For most firms, reformulating products to comply with the law and selling the new products nationwide was the optimal course of action. However, this strategy becomes more complex as more state regulation on food additives — including West Virginia's recent expansion on food dyes — proliferates, the researchers note. 

“When multiple states legislate on a similar issue but the rules are not harmonized, the complexity is likely to increase dramatically. When state laws differ — for example in the ingredients covered, the exemptions, and the timelines — this can create additional hurdles and uncertainty for firms trying to comply with the rules,” Ridley said.

Sometimes, state regulation leads to eventual federal government involvement. For example, Congress passed both a national mandate to label genetically modified ingredients in food and expanded allergen labeling regulations to include sesame. This is an expected outcome, as the federal government is tasked with easing interstate commerce. 

“State regulation can be a powerful motivator for federal regulation. We’re increasingly seeing advocacy for changes to food regulation at the state level, both to change firm behavior and to drive changes to national regulation,” Kalaitzandonakes said. 

The paper, “Food Manufacturers’ Decision Making Under Varying State Regulation,” is published in the Journal of Food Distribution Research.

U$A

Experts stress importance of vaccination amidst measles outbreaks


Parents are urged to call pediatrician if child was exposed to measles or has symptoms



Ann & Robert H. Lurie Children's Hospital of Chicago




Pediatric infectious diseases experts stress the importance of vaccination against measles, one of the most contagious viruses, which is once more spreading in the United States. In the article published in Pediatrics, they update pediatricians on this vaccine-preventable disease, which was previously declared non-endemic in the U.S.

“The most effective way to prevent measles is vaccination,” said lead author Caitlin Naureckas Li, MD MHQS, infectious diseases specialist at Ann & Robert H. Lurie Children’s Hospital of Chicago and Assistant Professor of Pediatrics at Northwestern University Feinberg School of Medicine. “If parents are concerned that their child was exposed to measles or may have measles, they should call their child’s doctor. They should not attempt to treat measles on their own without a physician’s advice.”

Measles carries risk of serious complications that may require hospitalization. Dr. Li and colleagues point out that in 2024 in the U.S., 40 percent of people with confirmed measles were hospitalized, including 52 percent of children under 5 years and 25 percent of those 5-19 years.

The authors also highlight that in the U.S., the measles mortality rate is estimated to be one to three deaths per 1,000 infections. The risk of death is higher in those under 5 years of age.

One of the more common complications of measles is pneumonia, with the lungs involved in over 50 percent of measles cases. Measles also can impact the brain. Encephalitis – an illness that can be fatal or lead to long-term brain damage in survivors – occurs in about one out of every 1,000 cases. Subacute sclerosing panencephalitis (SSPE), a near-universally deadly brain disorder that occurs years after measles infection, is another potential complication, striking about one in 100,000 cases, with higher risk in children under 1 year of age.

“MMR vaccination is safe,” emphasized Dr. Li. “This vaccine is the best way for families to protect their children from potentially life-threatening complications.”

More information about measles symptoms and prevention is available on Lurie Children’s blog.

Ann & Robert H. Lurie Children’s Hospital of Chicago is a nonprofit organization committed to providing access to exceptional care for every child. It is the only independent, research-driven children’s hospital in Illinois and one of fewer than 35 nationally. This is where the top doctors go to train, practice pediatric medicine, teach, advocate, research and stay up to date on the latest treatments. Exclusively focused on children, all Lurie Children’s resources are devoted to serving their needs. Research at Lurie Children’s is conducted through Stanley Manne Children’s Research Institute, which is focused on improving child health, transforming pediatric medicine and ensuring healthier futures through the relentless pursuit of knowledge. Lurie Children’s is the pediatric training ground for Northwestern University Feinberg School of Medicine. It is ranked as one of the nation’s top children’s hospitals by U.S. News & World Report.

AMERIKAN EXCEPTIONALISM

One firearm injury was treated every 30 minutes in emergency departments in a study of 10 jurisdictions




American College of Physicians




Below please find summaries of new articles that will be published in the next issue of Annals of Internal Medicine. The summaries are not intended to substitute for the full articles as a source of information. This information is under strict embargo and by taking it into possession, media representatives are committing to the terms of the embargo not only on their own behalf, but also on behalf of the organization they represent.   
----------------------------      

1. One firearm injury was treated every 30 minutes in emergency departments in a study of 10 jurisdictions  

Abstract: https://www.acpjournals.org/doi/10.7326/ANNALS-24-02874

URL goes live when the embargo lifts            

A cross-sectional analysis of firearm injury-related emergency department (ED) visits found that between 2018 and 2023, there was approximately one firearm injury ED visit every 30 minutes in the 10 jurisdictions studied. The analysis also found that rates of firearm-related ED visits were often highest during evenings, weekends, holidays and the summer months. According to the authors, this is the largest analysis to date of detailed temporal patterns in firearm injury using ED data. The results are published in Annals of Internal Medicine.

 

Researchers from the Centers for Disease Control and Prevention (CDC) analyzed trends in firearm injury ED visits between 1 January 2018 and 30 August 2023 using data from the CDC’s Firearm Injury Surveillance Through Emergency Rooms (FASTER) program. They analyzed data obtained from nine states (Florida, Georgia, New Mexico, North Carolina, Oregon, Utah, Virginia, Washington and West Virginia) and the District of Columbia. They calculated the rates of firearm injury ED visits per 100,000 total ED visits and assessed temporal variations by time of day, day of the week, month and U.S. public holidays or other days of interest (e.g., Independence Day and Super Bowl Sunday). The researchers found that across the study period, the overall rate of firearm injury ED visits was 73.9 per 100,000 ED visits, with a total of 93,022 firearm injury ED visits identified. These results equal approximately one firearm injury ED visit every half hour.
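The half-hour figure follows directly from the reported totals: 93,022 visits spread across the study window of 1 January 2018 through 30 August 2023 works out to roughly one visit every 32 minutes.

```python
from datetime import date

total_visits = 93_022  # firearm injury ED visits identified in the study
start, end = date(2018, 1, 1), date(2023, 8, 30)  # study window
minutes_in_window = (end - start).days * 24 * 60
print(round(minutes_in_window / total_visits))  # -> 32 (about one visit every half hour)
```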

 

The researchers also noted that the rate of firearm injury ED visits gradually increased from the afternoon into the night, hitting its average nightly peak between 2:30 am and 3:00 am. The average daily rates were highest on Friday, Saturday and Sunday. The daily rate of firearm injury ED visits was highest on New Year’s Eve (31 December), and the monthly rate was highest in July. Other holidays with high rates of firearm injury ED visits compared to non-holidays included Independence Day, Memorial Day weekend and Halloween. These findings highlight significant temporal clustering of firearm injury ED visits, and these insights could inform health care staffing and emergency preparedness, potentially reducing mortality rates associated with firearm injuries. While the researchers acknowledge that these data are not nationally representative, they note that understanding the factors behind the identified temporal patterns of firearm injury can help inform future prevention efforts and programs. 

 

Media contacts: For an embargoed PDF, please contact Angela Collom at acollom@acponline.org. To contact corresponding author Adam Rowh, MD, please email media@cdc.gov.  

----------------------------      

2. More adults now use tirzepatide and semaglutide over conventional glucose- and weight-lowering medications

Abstract: https://www.acpjournals.org/doi/10.7326/ANNALS-24-02870

URL goes live when the embargo lifts            

A population-based cohort study measured trends in use of tirzepatide versus other glucose-lowering medications (GLMs) and weight-lowering medications (WLMs) after tirzepatide’s FDA approval. The study found that dispensations of tirzepatide for both type 2 diabetes (T2D) and weight loss increased sharply after its entry into the U.S. market, whereas use of conventional GLMs (e.g., metformin and insulin) and WLMs (e.g., phentermine) declined. These findings highlight the rapidly shifting landscape of prescribing patterns for GLMs and WLMs. The study is published in Annals of Internal Medicine.  

 

Researchers from Harvard University studied data from two cohorts of commercially insured adults with and without T2D taking GLMs or WLMs between January 2021 and December 2023. The researchers aimed to describe trends in pharmacy dispensing claims for GLMs among adults with T2D and for WLMs in adults without diabetes. GLMs examined included metformin, sodium–glucose cotransporter-2 inhibitors (SGLT2i), sulfonylureas, insulin, thiazolidinediones, dipeptidyl peptidase-4 inhibitors (DPP4i), tirzepatide and glucagon-like peptide-1 receptor agonists (GLP-1 RA). WLMs included those with FDA approval for short-term use (benzphetamine, diethylpropion, phendimetrazine tartrate, and phentermine) or long-term use (liraglutide [3.0 mg], semaglutide [2.4 mg], naltrexone–bupropion, orlistat, and phentermine–topiramate). They also evaluated semaglutide (2.0 mg), oral semaglutide, liraglutide and dulaglutide. The researchers measured trends among populations with both incident use (initiation of a medication without a prior pharmacy dispensing claim for that medication) and any use (any pharmacy dispensing claim for a GLM or WLM).

Among more than 1.8 million adults with T2D and any GLM use and more than 1.2 million adults with T2D and incident GLM use between 2021 and 2023, metformin was the most commonly initiated therapy, followed by GLP-1 RA and SGLT2i; 4 percent of patients initiated tirzepatide. Patients initiating tirzepatide or other GLP-1 RAs were more likely to be younger and female, and incident users had a higher BMI than those initiating other GLMs. Among adults without diabetes with any or incident use of WLMs, semaglutide (2.0 mg) was the most commonly initiated medication, followed by tirzepatide. Incident users of tirzepatide were generally older and had a high prevalence of obesity-related complications. After its approval, any use of tirzepatide increased significantly, reaching 12.3% of all GLM use by December 2023. Overall, use of tirzepatide, GLP-1 RA and SGLT2i increased among adults with T2D between 2021 and 2023, while use of other GLMs, including metformin, declined rapidly. 
Tirzepatide and semaglutide (2.4 mg) use also increased among those without diabetes, whereas use of other WLMs declined. These findings enhance the understanding of the rapidly shifting landscape of GLM and WLM use in recent years. 

 

Media contacts: For an embargoed PDF, please contact Angela Collom at acollom@acponline.org. To speak with corresponding author John W. Ostrominski, MD, please email jostrominski@bwh.harvard.edu or the media office at mgbmediarelations@partners.org.

----------------------------      

3. Poor communication causes one in 10 patient safety incidents in hospitals

Abstract: https://www.acpjournals.org/doi/10.7326/ANNALS-24-02904

URL goes live when the embargo lifts           

A systematic review of studies quantifying the effect of poor communication on patient safety found that poor communication from health care professionals contributes to a quarter of patient safety incidents. The findings highlight the need for health care professionals to develop and maintain effective communication skills to foster relationships with peers and patients. The findings are published in Annals of Internal Medicine.

 

A team of researchers from the University of Leicester examined 46 studies, published between 2013 and 2024, comprising incidents involving 67,639 patients from Europe, North and South America, Asia, and Australia. The included studies quantified patient safety incidents, included health care practitioners from any discipline, and evaluated communication between health care staff and between health care staff and patients or caregivers. The types of patient safety incidents measured across studies included adverse events, near misses, medical errors, and medication errors; the methods of communication evaluated included verbal, written, electronic, and nonverbal. In one incident reviewed in the study, a physician accidentally shut off a patient’s amiodarone drip while trying to silence a beeping pump and failed to notify the nurse that the drip was stopped, leaving the patient with a dangerously fast heart rate. In another case, a nurse failed to tell a surgeon that a patient was experiencing abdominal pain following surgery and had a low red blood cell count, which is indicative of internal bleeding; the patient later died of a hemorrhage that adequate communication could have prevented. Overall, the review found that poor communication contributed to 25% of patient safety incidents and was the only identified cause in one in 10 incidents. The findings suggest the need for interventions to improve patient safety through better communication, such as policymakers commissioning evidence-based communication training for health care professionals that spans the continuum of health professions education. The authors also note that because the results are broadly comparable across cultural contexts, there is a need for shared international problem-solving to address the threat poor communication poses to patient safety.


----------------------------     

 

Emotions and levels of threat affect communities’ resilience during extreme events


Stevens Institute of Technology researchers use mathematical modeling to assess whether cohesive communities are more resilient



Stevens Institute of Technology



Hoboken, N.J., April 14, 2025 — Tightly connected communities tend to be more resilient when facing extreme events such as earthquakes, hurricanes, floods or wildfires, says Jose Ramirez-Marquez, who develops metrics to analyze, quantify and ultimately improve performance of urban systems. 

Ramirez-Marquez, associate professor and division director of Enterprise Science and Engineering at Stevens, grew up in earthquake-prone Mexico City and knows this firsthand. “Whenever there's an earthquake, a city-wide alarm goes off and everybody leaves wherever they are and stays in the middle of the street — that’s a prevention phase,” he says. “Then there’s a restoration phase when people engage with others in the community, whether it’s sharing food and water or helping rescue people from under the debris.” The community’s solidarity and togetherness — one for all and all for one, as the Latin proverb goes — are key to bouncing back.

In scientific terms, this togetherness is defined as community cohesion, which encapsulates a sense of belonging, mutual support among members, and shared values or sentiments, all of which boost a community’s ability to withstand disasters. But whether this cohesion directly influences how well a community recovers from extreme events is not known, explains Alexander Gilgur, who studied this subject with Ramirez-Marquez as a Ph.D. student. “Resilience is a measure of how quickly and/or effortlessly the system recovers from a disturbance,” says Gilgur. “The causal relationship between cohesion and resilience appears logical, but it has not been proven mathematically.”

To address that issue, Gilgur and Ramirez-Marquez developed mathematical techniques to measure community cohesion and resilience, which they outlined in a recent paper published in the journal Socio-Economic Planning Sciences. They investigated two case studies of the same San Francisco Bay Area community: during the 2020 wildfires and during the 2022–23 rainstorms.

In their work, they found that during less intense adverse events, such as the rainstorms, the community’s performance improved despite rising stress levels. However, in high-stress disturbances such as the wildfires, the community’s performance suffered. “We found that there’s a negative correlation between the resilience of a community and the strength of disturbance,” says Ramirez-Marquez.
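The release does not give the authors' actual equations, but the intuition behind Gilgur's definition of resilience can be sketched with a toy metric common in resilience engineering: score a community's performance time series by how much of its baseline performance it retains through a disturbance and recovery. The curves, decay rates, and the specific metric below are all illustrative assumptions, not the paper's model.

```python
# Illustrative sketch only (not the paper's actual model): resilience as the
# average fraction of baseline performance retained over the observation window.
import numpy as np

def resilience(performance, baseline=1.0):
    """Return the mean performance relative to baseline over the window.
    1.0 means no loss; lower values mean a deeper dip and/or slower recovery."""
    return float(np.mean(performance) / baseline)

t = np.arange(0, 30)  # days since the disturbance began (hypothetical)

# Hypothetical community-performance curves, each recovering toward baseline 1.0:
mild = 1.0 - 0.2 * np.exp(-t / 3)     # rainstorm-like: shallow dip, fast recovery
severe = 1.0 - 0.7 * np.exp(-t / 10)  # wildfire-like: deep dip, slow recovery

print(f"mild disturbance resilience:   {resilience(mild):.2f}")
print(f"severe disturbance resilience: {resilience(severe):.2f}")
assert resilience(severe) < resilience(mild)  # stronger disturbance, lower score
```

Under these assumed curves, the severe disturbance scores markedly lower than the mild one, mirroring the negative correlation the researchers report between disturbance strength and resilience.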

In fact, in some cases the disturbance can be so strong that people forsake their community. Ramirez-Marquez cites the example of the recent Los Angeles fires (not part of the study, but telling), where more affluent residents hired private firefighters to keep their houses safe. “So when the stress is very strong, some might say, ‘oh, well, I don't care about the community, I care about myself.’ The stress can be so high that the concept of community cohesion no longer stands.”

The scientists also found that emotional intensity has a strong effect on community cohesion. “For helping communities be more resilient, emotional engagement is a very important factor,” says Gilgur, adding that it doesn’t matter whether the emotions are positive or negative. “Anger and fear are equally powerful as joy and love.” By contrast, people’s economic level does not have a direct effect on community cohesion, “because the disaster might affect everyone,” says Ramirez-Marquez.

He notes that developing metrics to assess community cohesiveness and resilience offers practical benefits. If the causal link between cohesiveness and resilience can be established, thresholds, limits, or targets can be set, and these metrics can inform policies designed to reach those targets and improve resilience.

“Community cohesiveness is essentially a social glue that holds people together,” Ramirez-Marquez says. Quantifying that glue is challenging, yet being able to do so can help indicate whether a given community is resilient or can be stronger. “These metrics can then be used by policymakers to implement policies that make communities more resilient.”