Thursday, April 10, 2025

New AI tool makes sense of public opinion data in minutes, not months


DECOTA transforms open-ended survey responses into clear themes — helping policymakers make better use of underutilized public feedback


University of Bath

Image: Dr Lois Player led the development of the DECOTA AI tool. Credit: University of Bath





  • AI tool DECOTA analyses free-text data rapidly, affordably, and with human-like accuracy
  • Free-text data is rich in insight, but is often underused due to the time and cost of analysing it manually
  • Research team at the University of Bath say DECOTA could help ensure more public voices are included in policy decisions

A powerful new AI tool, published today, offers a fast, low-cost way to understand public attitudes – by automatically identifying common themes in open-ended responses to surveys and policy consultations.

DECOTA – the Deep Computational Text Analyser – is the first open-access method for analysing free-text responses to surveys and consultations at scale. Detailed in a research paper published in Psychological Methods today (Monday 7 April), the tool delivers insights around 380 times faster and over 1,900 times more cheaply than human analysis, while achieving 92% agreement with human-coded results.

It uses fine-tuned large language models to identify key themes and sub-themes in open-ended responses – where people share their views in their own words. While rich in insight, this type of qualitative data is notoriously time-consuming to analyse – meaning it often goes unused.

Developed by a multidisciplinary team at the University of Bath – led by recent PhD graduates Dr Lois Player and Dr Ryan Hughes, with support from Professor Lorraine Whitmarsh – the tool is designed to help governments and organisations better understand the people they serve.

The tool was initially developed to better understand opinions about climate policies, but it can be applied across a wide range of domains. It has already garnered interest from four UK government bodies, as well as academic institutions and global think tanks.

Dr Lois Player, who completed her PhD in Behavioural Science within Bath’s IAAPS Doctoral Training Centre, explains: “When thousands of people respond to surveys or consultations, it’s often impossible to analyse all that free-text data by hand. DECOTA makes it possible to summarise which themes are most common in large populations – in a way that simply wouldn’t be feasible otherwise.”

Detailed, human-like accuracy

DECOTA is grounded in a well-established qualitative analysis technique known as thematic analysis, in which researchers manually group free-text data into common themes. Mirroring this, DECOTA uses a six-step pipeline involving two fine-tuned large language models and a clustering stage to identify the themes and sub-themes underlying the data.
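For readers who want a concrete picture of this kind of pipeline, the sketch below shows the general shape: each response is first condensed into a short code, and the codes are then clustered into candidate sub-themes. It is a minimal illustration only, not DECOTA's released code (which is openly available on the Open Science Framework); the stub coding function, the example responses, and the clustering choices are all assumptions made for the example.

```python
# Illustrative sketch of an LLM-plus-clustering thematic analysis pipeline.
# This is NOT DECOTA's released code; the helper below is a stand-in for a
# fine-tuned large language model, used here only to show the data flow.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def code_response(text: str) -> str:
    """Stand-in for an LLM that condenses one free-text response into a
    short descriptive code (analogous to initial coding in thematic
    analysis). A real pipeline would call a fine-tuned model here."""
    return text.lower()  # placeholder transformation

responses = [
    "Cut public transport fares so driving is less attractive",
    "Buses are too expensive compared with using the car",
    "Insulate homes to reduce heating bills and emissions",
    "Grants for home insulation would lower energy use",
]

codes = [code_response(r) for r in responses]

# Embed the codes (TF-IDF here; an LLM embedding model could be swapped in)
# and cluster them into candidate sub-themes.
vectors = TfidfVectorizer().fit_transform(codes)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    members = [r for r, lab in zip(responses, labels) if lab == cluster]
    print(f"Candidate sub-theme {cluster}: {members}")
# A second LLM pass would then name each cluster as a theme or sub-theme.
```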

The team compared DECOTA’s performance to human analysts on four example datasets. DECOTA detected 92% of the sub-themes found by analysts, and 90% of the broader themes. Remarkably, it generated these insights in just 10 minutes, compared to an average of 63 hours for the human analysts – around 380 times faster.

These time savings have huge cost implications – with DECOTA analysing responses from around 1,000 participants for just $0.82, compared to approximately $1,575 using a human research assistant paid $25 per hour. DECOTA is even 240 times faster and 1,220 times cheaper than existing state-of-the-art computational methods, such as topic modelling.
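Those ratios follow directly from the reported figures; a quick back-of-the-envelope check (illustrative Python, using only the numbers quoted above):

```python
# Reproducing the headline ratios from the figures quoted in the text.
human_hours = 63        # average human analysis time per dataset
decota_minutes = 10     # DECOTA's reported runtime

speedup = human_hours * 60 / decota_minutes
print(f"Speed-up: ~{speedup:.0f}x")   # ~378x, i.e. "around 380 times faster"

human_cost = human_hours * 25          # 63 hours at $25/hour = $1,575
decota_cost = 0.82                     # reported cost for ~1,000 responses
print(f"Cost ratio: ~{human_cost / decota_cost:.0f}x")  # ~1,921x, "over 1,900 times cheaper"
```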

“Importantly, DECOTA is not designed to replace human thematic analysis, but rather complement it,” explains Dr Player. “We want it to unlock the huge volumes of data going unanalysed, allowing more voices to be heard in policy and decision-making settings, and freeing up valuable researcher time for deeper, more interpretative work.”

Going beyond thematic analysis, the tool also determines which demographic groups are more likely to mention certain themes. For example, it can ascertain if women are more likely than men to mention a specific issue, or whether younger people are more likely than older people to highlight certain themes. It also draws out representative quotes for each sub-theme, aiding interpretation of results.
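The release does not spell out the statistical procedure behind these group comparisons; one conventional approach would be a contingency-table test of theme mentions by group. A hypothetical sketch with made-up counts:

```python
# Hypothetical example of testing whether a theme is mentioned at different
# rates by two demographic groups. The counts below are invented; this is
# one plausible approach, not necessarily the method DECOTA uses.
from scipy.stats import chi2_contingency

#           mentioned  did not mention
table = [[48, 152],   # women  (hypothetical counts)
         [29, 171]]   # men    (hypothetical counts)

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
# A small p-value would suggest the two groups mention the theme at
# genuinely different rates rather than by sampling chance.
```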

Transparency built-in

Dr Ryan Hughes, whose PhD focused on Mechatronics and Data Science, adds: “DECOTA doesn’t just summarise data. It also provides depth, showing who said what, and how often. It’s also transparent by design. It doesn’t hide how it processes data: researchers can inspect and edit each stage of the pipeline, and all the code is openly available on the Open Science Framework.”

Professor Lorraine Whitmarsh says: “DECOTA offers a huge leap forward in the analysis of open-ended questionnaire data. Applying machine learning to analyse large volumes of text will save time and money for researchers and policymakers wanting to understand public attitudes, allowing for a stronger role of public engagement in policy design.”

Openly accessible online, the tool is detailed in the research paper The Use of Large Language Models for Qualitative Research: the Deep Computational Text Analyser (DECOTA), published today in the journal Psychological Methods (DOI: 10.1037/met0000753).

The team say that DECOTA will continue to be developed over time, with plans for a user-friendly web application, accessible to those unfamiliar with code.

Parties interested in receiving updates about DECOTA or participating in the initial rollout can express their interest via a contact form at: https://tinyurl.com/DECOTAform  

 

ENDS

A copy of the research paper, and images of Dr Lois Player are available at: https://tinyurl.com/mdhdfnvu

For more information or to request interviews, contact Will McManus: wem25@bath.ac.uk / press@bath.ac.uk / +44(0)1225 385 798.

 

The University of Bath

The University of Bath is one of the UK's leading universities, with a reputation for high-impact research, excellence in education, student experience and graduate prospects. 

We are ranked in the top 10 of all of the UK’s major university guides. We are also ranked among the world’s top 10% of universities, placing 150th in the QS World University Rankings 2025. Bath was rated in the world’s top 10 universities for sport in the QS World University Rankings by Subject 2024.

Research from Bath is helping to change the world for the better. Across the University’s three Faculties and School of Management, our research is making an impact in society, leading to low-carbon living, positive digital futures, and improved health and wellbeing. Find out all about our Research with Impact: https://www.bath.ac.uk/campaigns/research-with-impact/



New method efficiently safeguards sensitive AI training data


The approach maintains an AI model’s accuracy while ensuring attackers can’t extract secret information




Massachusetts Institute of Technology





CAMBRIDGE, MA – Data privacy comes with a cost. There are security techniques that protect sensitive user data, like customer addresses, from attackers who may attempt to extract them from AI models — but they often make those models less accurate. 

MIT researchers recently developed a framework, based on a new privacy metric called PAC Privacy, that could maintain the performance of an AI model while ensuring sensitive data, such as medical images or financial records, remain safe from attackers. Now, they’ve taken this work a step further by making their technique more computationally efficient, improving the tradeoff between accuracy and privacy, and creating a formal template that can be used to privatize virtually any algorithm without needing access to that algorithm’s inner workings.

The team utilized their new version of PAC Privacy to privatize several classic algorithms for data analysis and machine-learning tasks.

They also demonstrated that more “stable” algorithms are easier to privatize with their method. A stable algorithm’s predictions remain consistent even when its training data are slightly modified. Greater stability helps an algorithm make more accurate predictions on previously unseen data.

The researchers say the increased efficiency of the new PAC Privacy framework, and the four-step template one can follow to implement it, would make the technique easier to deploy in real-world situations.

“We tend to consider robustness and privacy as unrelated to, or perhaps even in conflict with, constructing a high-performance algorithm. First, we make a working algorithm, then we make it robust, and then private. We’ve shown that is not always the right framing. If you make your algorithm perform better in a variety of settings, you can essentially get privacy for free,” says Mayuri Sridhar, an MIT graduate student and lead author of a paper on this privacy framework.

She is joined in the paper by Hanshen Xiao PhD ’24, who will start as an assistant professor at Purdue University in the fall; and senior author Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering. The research will be presented at the IEEE Symposium on Security and Privacy.

Estimating noise

To protect sensitive data that were used to train an AI model, engineers often add noise, or generic randomness, to the model so it becomes harder for an adversary to guess the original training data. This noise reduces a model’s accuracy, so the less noise one can add, the better.

PAC Privacy automatically estimates the smallest amount of noise one needs to add to an algorithm to achieve a desired level of privacy.

The original PAC Privacy algorithm runs a user’s AI model many times on different samples of a dataset. It measures the variance as well as correlations among these many outputs and uses this information to estimate how much noise needs to be added to protect the data.

This new variant of PAC Privacy works the same way but does not need to represent the entire matrix of data correlations across the outputs; it just needs the output variances.

“Because the thing you are estimating is much, much smaller than the entire covariance matrix, you can do it much, much faster,” Sridhar explains. This means that one can scale up to much larger datasets.

Adding noise can hurt the utility of the results, and it is important to minimize utility loss. Due to computational cost, the original PAC Privacy algorithm was limited to adding isotropic noise, which is added uniformly in all directions. Because the new variant estimates anisotropic noise, which is tailored to specific characteristics of the training data, a user could add less overall noise to achieve the same level of privacy, boosting the accuracy of the privatized algorithm.
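A minimal sketch of that estimate-then-add-noise idea, assuming a toy algorithm (the per-coordinate mean) and a simplified noise scale; the real PAC Privacy procedure derives its noise magnitude from a formal privacy bound, which this illustration omits:

```python
# Toy illustration of variance-based noise estimation. Not the authors'
# implementation: the subsampling scheme and the noise scale here are
# simplifying assumptions made for the sketch.
import numpy as np

rng = np.random.default_rng(0)

def algorithm(data):
    """The 'algorithm to privatize': here, just the per-coordinate mean."""
    return data.mean(axis=0)

data = rng.normal(size=(1000, 3))

# Run the algorithm on many random subsamples and record the outputs.
outputs = np.array([
    algorithm(data[rng.choice(len(data), size=500, replace=False)])
    for _ in range(200)
])

# The new variant needs only the per-coordinate output variances,
# not the full covariance matrix across output dimensions.
variances = outputs.var(axis=0)

# Anisotropic noise: more noise where the output varies more, less where it
# is stable, instead of one uniform (isotropic) scale in every direction.
noise = rng.normal(scale=np.sqrt(variances))
print(algorithm(data) + noise)  # the privatized output
```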

Privacy and stability

As she studied PAC Privacy, Sridhar theorized that more stable algorithms would be easier to privatize with this technique. She used the more efficient variant of PAC Privacy to test this theory on several classical algorithms.

Algorithms that are more stable have less variance in their outputs when their training data change slightly. PAC Privacy breaks a dataset into chunks, runs the algorithm on each chunk of data, and measures the variance among outputs. The greater the variance, the more noise must be added to privatize the algorithm.

Employing stability techniques to decrease the variance in an algorithm’s outputs would also reduce the amount of noise that needs to be added to privatize it, she explains.
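A toy illustration of that stability link, comparing a fragile statistic (the maximum) with a more stable one (a trimmed mean) on the same chunked data; the more stable statistic shows less variance across chunks, so it would need less added noise:

```python
# Stability vs. noise, illustrated: statistics whose outputs vary less
# across data chunks need less added noise to privatize.
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(1)
data = rng.exponential(size=3000)
chunks = np.array_split(data, 30)

max_var = np.var([chunk.max() for chunk in chunks])              # fragile
tmean_var = np.var([trim_mean(chunk, 0.1) for chunk in chunks])  # stable

print(f"variance of max across chunks:          {max_var:.4f}")
print(f"variance of trimmed mean across chunks: {tmean_var:.4f}")
```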

“In the best cases, we can get these win-win scenarios,” she says.

The team showed that these privacy guarantees remained strong regardless of which algorithm they tested, and that the new variant of PAC Privacy required an order of magnitude fewer trials to estimate the noise. They also tested the method in attack simulations, demonstrating that its privacy guarantees could withstand state-of-the-art attacks.

“We want to explore how algorithms could be co-designed with PAC Privacy, so the algorithm is more stable, secure, and robust from the beginning,” Devadas says. The researchers also want to test their method with more complex algorithms and further explore the privacy-utility tradeoff.

“The question now is, when do these win-win situations happen, and how can we make them happen more often?” Sridhar says.

####

This research is supported, in part, by Cisco Systems, Capital One, the U.S. Department of Defense, and a MathWorks Fellowship.



AI surge to double data centre electricity demand by 2030: IEA


By AFP
April 10, 2025


Big Tech companies have become mega users of power, and their demand will only grow in the race to adopt artificial intelligence - Copyright AFP Ronny Hartmann

Nathalie ALONSO

Electricity consumption by data centres will more than double by 2030, driven by artificial intelligence applications that will create new challenges for energy security and CO2 emission goals, the IEA said Thursday.

At the same time, AI can unlock opportunities to produce and consume electricity more efficiently, the International Energy Agency (IEA) said in its first report on the energy implications of AI.

Data centres represented about 1.5 percent of global electricity consumption in 2024, and their consumption has increased by about 12 percent annually over the past five years. Generative AI requires colossal computing power to process information accumulated in gigantic databases.

Together, the United States, Europe, and China currently account for about 85 percent of data centre consumption.

Big tech companies increasingly recognise their growing need for power. Google last year signed a deal to get electricity from small nuclear reactors to help power its part in the artificial intelligence race.

Microsoft is to use energy from new reactors at Three Mile Island, the site of America’s worst nuclear accident, a partial meltdown in 1979. Amazon also signed an accord last year to use nuclear power for its data centres.

At the current rate, data centres will consume about three percent of global electricity by 2030, the report said.

According to the IEA, data centre electricity consumption will reach about 945 terawatt-hours (TWh) by 2030.

“This is slightly more than Japan’s total electricity consumption today. AI is the most important driver of this growth, alongside growing demand for other digital services,” said the report.

One 100 megawatt data centre can use as much power as 100,000 households, the report said. But it highlighted that some new data centres already under construction could use as much power as two million households.
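Both comparisons are easy to sanity-check. In the sketch below, the world electricity total and the average household draw are outside assumptions for illustration, not figures from the IEA report:

```python
# Rough checks on the figures above. The ~30,000 TWh world electricity
# total and the ~1 kW average household draw are assumptions, not IEA data.
world_twh = 30_000
print(f"945 TWh is ~{945 / world_twh:.0%} of world electricity")  # ~3%

datacentre_kw = 100 * 1000   # 100 megawatts expressed in kilowatts
household_kw = 1.0           # assumed average continuous household demand
print(f"{datacentre_kw / household_kw:,.0f} households")          # 100,000
```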

The Paris-based energy policy advisory group said that “artificial intelligence has the potential to transform the energy sector in the coming decade, driving a surge in electricity demand from data centers worldwide, while also unlocking significant opportunities to cut costs, enhance competitiveness, and reduce emissions”.

Hoping to keep ahead of China in the field of artificial intelligence, US President Donald Trump has launched the creation of a “National Council for Energy Dominance” tasked with boosting electricity production.

Right now, coal provides about 30 percent of the energy needed to power data centres, but renewables and natural gas will increase their shares because of their lower costs and wider availability in key markets.

The growth of data centres will inevitably increase carbon emissions linked to electricity consumption, from 180 million tonnes of CO2 today to 300 million tonnes by 2035, the IEA said. That remains a minimal share of the 41.6 billion tonnes of global emissions estimated in 2024.


Diagnoses and treatment recommendations given by AI were more accurate than those of physicians





Tel-Aviv University

Image: Prof. Dan Zeltzer. Credit: Richard Haldis



 The study, conducted at the virtual urgent care clinic Cedars-Sinai Connect in LA, compared recommendations given in about 500 visits of adult patients with relatively common symptoms – respiratory, urinary, eye, vaginal and dental.

 

A new study led by Prof. Dan Zeltzer, a digital health expert from the Berglas School of Economics at Tel Aviv University, compared the quality of diagnostic and treatment recommendations made by artificial intelligence (AI) and physicians at Cedars-Sinai Connect, a virtual urgent care clinic in Los Angeles, operated in collaboration with Israeli startup K Health. The paper was published in Annals of Internal Medicine and presented at the annual conference of the American College of Physicians (ACP). This work was supported with funding by K Health.

 

Prof. Zeltzer explains: "Cedars-Sinai operates a virtual urgent care clinic offering telemedical consultations with physicians who specialize in family and emergency care. Recently, an AI system was integrated into the clinic—an algorithm based on machine learning that conducts initial intake through a dedicated chat, incorporates data from the patient’s medical record, and provides the attending physician with detailed diagnostic and treatment suggestions at the start of the visit—including prescriptions, tests, and referrals. After interacting with the algorithm, patients proceed to a video visit with a physician who ultimately determines the diagnosis and treatment. To ensure reliable AI recommendations, the algorithm—trained on medical records from millions of cases—only offers suggestions when its confidence level is high, giving no recommendation in about one out of five cases. In this study, we compared the quality of the AI system's recommendations with the physicians' actual decisions in the clinic."

 

The researchers examined a sample of 461 online clinic visits over one month during the summer of 2024. The study focused on adult patients with relatively common symptoms—respiratory, urinary, eye, vaginal and dental. In all visits reviewed, patients were initially assessed by the algorithm, which provided recommendations, and then treated by a physician in a video consultation. Afterwards, all recommendations—from both the algorithm and the physicians—were evaluated by a panel of four doctors with at least ten years of clinical experience, who rated each recommendation on a four-point scale: optimal, reasonable, inadequate, or potentially harmful. The evaluators assessed the recommendations based on the patients' medical histories, the information collected during the visit, and transcripts of the video consultations.

 

The compiled ratings led to interesting conclusions: AI recommendations were rated as optimal in 77% of cases, compared to only 67% of the physicians' decisions; at the other end of the scale, AI recommendations were rated as potentially harmful in a smaller portion of cases than physicians' decisions (2.8% of AI recommendations versus 4.6% of physicians' decisions). In 68% of the cases, the AI and the physician received the same score; in 21% of cases, the algorithm scored higher than the physician; and in 11% of cases, the physician's decision was considered better.

 

The explanations provided by the evaluators for the differences in ratings highlight several advantages of the AI system over human physicians: First, the AI more strictly adheres to medical association guidelines—for example, not prescribing antibiotics for a viral infection; second, AI more comprehensively identifies relevant information in the medical record—such as recurrent cases of a similar infection that may influence the appropriate course of treatment; and third, AI more precisely identifies symptoms that could indicate a more serious condition, such as eye pain reported by a contact lens wearer, which could signal an infection. Physicians, on the other hand, are more flexible than the algorithm and have an advantage in assessing the patient's real condition. For example, if a COVID-19 patient reports shortness of breath, a doctor may recognize it as a relatively mild respiratory congestion, whereas the AI, based solely on the patient's answers, might refer them unnecessarily to the emergency room.

 

Prof. Zeltzer concludes: "In this study, we found that AI, based on a targeted intake process, can provide diagnostic and treatment recommendations that are, in many cases, more accurate than those made by physicians. One limitation of the study is that we do not know which of the physicians reviewed the AI's recommendations in the available chart, or to what extent they relied on these recommendations. Thus, the study only measured the accuracy of the algorithm’s recommendations and not their impact on the physicians. The uniqueness of the study lies in the fact that it tested the algorithm in a real-world setting with actual cases, while most studies focus on examples from certification exams or textbooks. The relatively common conditions included in our study represent about two-thirds of the clinic's case volume, and thus the findings can be meaningful for assessing AI's readiness to serve as a decision-support tool in medical practice. We can envision a near future in which algorithms assist in an increasing portion of medical decisions, bringing certain data to the doctor's attention, and facilitating faster decisions with fewer human errors. Of course, many questions still remain about the best way to implement AI in the diagnostic and treatment process, as well as the optimal integration between human expertise and artificial intelligence in medicine."

 

Other authors involved in the study include Zehavi Kugler, MD; Lior Hayat, MD; Tamar Brufman, MD; Ran Ilan Ber, PhD; Keren Leibovich, PhD; Tom Beer, MSc; Ilan Frank, MSc; Caroline Goldzweig, MD, MSHS; and Joshua Pevnick, MD, MSHS.

 

Link to the article:

https://www.acpjournals.org/doi/10.7326/ANNALS-24-03283

 

The body remembers: OU researchers publish new study on Oklahoma City bombing survivors’ trauma ‘imprint’



University of Oklahoma
Image: Research published this year by the University of Oklahoma suggests that survivors of the 1995 Oklahoma City bombing carry a trauma "imprint" in their bodies even though they have gone on to live healthy and resilient lives. Credit: University of Oklahoma





OKLAHOMA CITY – Recent research from the University of Oklahoma suggests that survivors of the 1995 Oklahoma City bombing carry physiological traces of the trauma, even though study participants have gone on to lead healthy and resilient lives. Essentially, their bodies “remember” the trauma even if they don’t have physical or mental health problems.

Previous studies have examined biological stress and psychological symptoms in terrorism survivors, but the recently published research is thought to be the first of its kind to study three different biological systems in medically healthy people who survived the same traumatic event: cortisol, which plays a crucial role in the body’s stress response; heart rate and blood pressure; and interleukins, which are inflammatory substances that play a role in the body’s immune system.

Research participants included 60 heavily impacted direct survivors of the Oklahoma City bombing, compared to a control group of local people who were not affected by the bombing. People in both groups were healthy. The study found that, counterintuitively, cortisol levels were lower in people who survived the bombing. Survivors had higher blood pressure but a lower heart rate in response to trauma cues, suggesting their response may have become blunted over time. Two interleukins were measured. Interleukin 1B, which is linked with inflammation, was significantly higher in survivors, and interleukin 2R, which plays a protective role, was lower.

“The main takeaway from the study is that the mind may be resilient and be able to put things behind it, but the body doesn’t forget. It may remain on alert, waiting for the next thing to happen,” said Phebe Tucker, M.D., lead author of the study and professor emeritus of psychiatry at the OU College of Medicine.

“We thought there would be a correlation between these biomarkers and the research participants’ psychological symptoms, but their PTSD and depression scores were not elevated and did not correlate with stress biomarkers,” she added. “That tells us there is a stress response in the body that is not present in the emotions they express. In addition, the elevated interleukin 1B is typically seen in people with illnesses and inflammation, but this group was pretty healthy. However, it raises concerns about potential long-term health problems.”

Tucker and her colleagues have regularly conducted studies involving bombing survivors, beginning soon after the event occurred. This new paper draws on data obtained seven years after the bombing; at the time, the researchers did not study the same biomarkers, which makes this new analysis unique.

“Basically, what this paper shows is that after you’ve experienced severe trauma, your biological systems may not be at a typical baseline any longer; things have changed,” said study co-author Rachel Zettl, M.D., clinical assistant professor in the Department of Psychiatry and Behavioral Sciences, OU College of Medicine. “It’s not just our minds that remember trauma; our biological processes do, too. It changes your actual physical being.”

Other authors of this paper were Betty Pfefferbaum, M.D., professor emeritus in the Department of Psychiatry and Behavioral Sciences, OU College of Medicine; Carol North, M.D., adjunct professor, University of Texas Southwestern Medical Center; Yan Daniel Zhao, Ph.D., professor, OU Hudson College of Public Health; Pascal Nitiema, Ph.D., Arizona State University; and Haekyung Jeon-Slaughter, Ph.D., University of Texas Southwestern Medical Center.

###

About the Project

Read the study, “Learning from Hindsight: Examining Autonomic, Inflammatory, and Endocrine Stress Biomarkers and Mental Health in Healthy Terrorism Survivors Many Years Later,” at https://doi.org/10.1017/S1049023X24000360. It is published in the journal Prehospital and Disaster Medicine.

About the University of Oklahoma

Founded in 1890, the University of Oklahoma is a public research university with campuses in Norman, Oklahoma City and Tulsa. As the state’s flagship university, OU serves the educational, cultural, economic and health care needs of the state, region and nation. In Oklahoma City, OU Health Sciences is one of the nation’s few academic health centers with seven health profession colleges located on the same campus. OU Health Sciences serves approximately 4,000 students in more than 70 undergraduate and graduate degree programs spanning Oklahoma City and Tulsa and is the leading research institution in Oklahoma. For more information about OU Health Sciences, visit www.ouhsc.edu.

 

Three-quarters of survey respondents supported an overdose prevention center in their neighborhood



An assessment by researchers at the Brown University School of Public Health revealed that before the opening of an OPC in Providence, people living and working in the area were generally supportive




Brown University

Image: Rhode Island's first overdose prevention center is located in the hospital district of Providence. Credit: Brown University




PROVIDENCE, R.I. [Brown University] — Overdose prevention centers (OPCs) offer life-saving interventions in the event of an overdose along with on-site harm reduction services. While studies of OPCs in other countries have shown that they can reduce overdose deaths without increasing crime, they remain a novel concept in the United States.

Before the recent opening of the nation’s first state-sanctioned OPC, researchers at the Brown University School of Public Health surveyed people living and working in the Providence, Rhode Island, neighborhood where it is located, to ask about their perceptions of the center. They found that 74% of survey participants supported an OPC opening in their neighborhood (a further 13% were neutral), and 81% supported an OPC elsewhere in the city.

The results of the survey are published in the Journal of Urban Health.

“This study contributes to the growing body of evidence that overdose prevention centers are widely accepted by local communities, even though policymakers have been slower to support and expand them,” said study author Alexandria Macmadu, an assistant professor of epidemiology at Brown’s School of Public Health.

More than 200 OPCs operate globally, but there are only three in the U.S.: two city-sanctioned centers in New York City and a third state-sanctioned center in Providence near several of the city’s largest hospitals. The Providence OPC, which is operated by the nonprofit Project Weber/RENEW, opened in January 2025. Two years ago, as plans came together for the center’s opening, the Brown researchers launched a project to evaluate its effectiveness and impact on the local community. 

This smaller, separate study, published on Thursday, April 10, complements the team’s ongoing work.

In September and October 2024, members of the research team conducted surveys by knocking on residential doors, visiting businesses and talking with pedestrians within a three-quarter-mile radius of the OPC. Eligible participants were 18 years or older and lived or worked in the area. Respondents were asked whether they were in favor of an OPC opening in their neighborhood or in another part of the city, as well as about their experiences in the neighborhood in the past two months. The researchers also collected basic demographic information.

Of the 125 people surveyed, 74% were in favor of an OPC opening in the surveyed neighborhood, with an additional 13% expressing neutrality and 11% expressing lack of support. A slightly higher proportion of respondents (81%) were in favor of an OPC opening in a different neighborhood. While participants were generally supportive, some expressed concerns about increased drug activity.

Respondents who supported the OPC tended to be younger and to have reported seeing someone who appeared to be homeless in the area during the prior two months. There weren’t any other significant sociodemographic differences between those who supported the OPC vs. those who opposed it.

While researchers had hypothesized that younger respondents might be more likely to express support for the OPC, they hadn’t predicted that perceived visibility of homelessness would be a factor. 

“OPCs help some of the most vulnerable members of our community, including people experiencing homelessness,” Macmadu said. “It’s encouraging to see that community members may be connecting what they see around them — like visible homelessness — with the need for more services and support.”

The authors noted that their results emphasize the importance of engaging with community members to build support for evidence-based harm reduction interventions such as OPCs. Nearly half of respondents (45%) had heard of OPCs prior to the day of the survey, which Macmadu said is likely due to the grassroots education and public awareness campaigns by Project Weber/RENEW. 

The study is the first to assess community acceptance for an OPC in the United States prior to the opening of such a center, Macmadu said. The team plans to continue to evaluate community impact over time.

Funding for the study came from Open Society Foundations (OR2022-87525) and the National Institute of General Medical Sciences (P20-GM125507).

 

New free training course helps reframe conversations about nursing home care


The Gerontological Society of America





A free, on-demand course provides research-backed strategies to reshape public perceptions and elevate the value of nursing home care. The training, developed by the National Center to Reframe Aging in partnership with the FrameWorks Institute and LeadingAge, is available to organizations that serve and care for older adults, advocates, and anyone communicating about nursing homes in the United States.

Findings from the FrameWorks Institute’s “Communicating About Nursing Home Care: Findings and Emerging Recommendations” brief indicate the public views nursing homes as a last-resort option for care of older adults, intended mostly to triage various health concerns rather than support the overall wellbeing of older people. In addition, public thinking defaults to reactive solutions rather than the preventive reforms that advocates tell us are essential to proactively support nursing home residents. To address these issues, the National Center to Reframe Aging, with support from The John A. Hartford Foundation, created actionable communication strategies that counter negative attitudes about nursing homes.

“Effective communication is key to reshaping public perceptions and highlighting the essential role nursing homes play in providing quality care to older people,” said Patricia D’Antonio, BSPharm, MS, MBA, BCGP, executive director of the National Center and the vice president for policy and professional affairs at the Gerontological Society of America. “The National Center is proud to share this training broadly with this network.”

The new course, a series of five short videos, offers strategies and tips to effectively convey the impact, value and essential role of nursing homes’ unique, 24/7 services, helping communicators navigate public misperceptions and foster more productive conversations about nursing home care.

“This training comes at a pivotal moment for the nursing home sector. It builds on our ongoing commitment to improving nursing home care, and it will spark critical and reframed conversations,” said Marcus R. Escobedo, MPA, vice president of communications and senior program officer at The John A. Hartford Foundation.

 

Course Highlights

The strategies shared in the course leverage original research conducted by the FrameWorks Institute in 2022 to better understand the gaps between how experts and the public each viewed nursing home care in the U.S.

The resulting recommendations seek to advance a more realistic description of aging, create a clear image of what quality nursing home care looks like, and focus attention on critical roles and relationships within nursing homes.

The course shares those strategies and offers practical, accessible solutions for anyone seeking to improve public understanding of nursing homes and their impact. In addition to self-guided learning modules, the training includes downloadable materials to help participants implement concepts immediately.

“Impactful messaging about the care and services delivered in nursing homes — which is unlike any other offered in our country’s health care system — is more important than ever as America’s population is aging rapidly. This course offers essential tools to help our members communicate more effectively about the vital work they do,” said Katie Smith Sloan, president and CEO of LeadingAge, the association of nonprofit and mission-driven providers of aging services, including nursing homes.

The course is available at the National Center to Reframe Aging’s Learning Center and on the LeadingAge LearningHub.

  ###

The National Center to Reframe Aging is dedicated to ending ageism by advancing a complete story about aging in America. The center is the trusted source for proven communication strategies and tools to effectively frame aging issues. It is the nation’s leading organization, cultivating an active community of individuals and organizations to spread awareness of unproductive attitudes towards aging and influence policies and programs that benefit all of us as we age. Led by the Gerontological Society of America, the National Center acts on behalf of and amplifies efforts of the ten Leaders of Aging Organizations. Support for the National Center comes from Archstone Foundation, The John A. Hartford Foundation, RRF Foundation for Aging, and The SCAN Foundation.

The Gerontological Society of America (GSA), founded in 1945, is the nation’s oldest and largest interdisciplinary organization focused on aging. It serves more than 6,000 members in over 50 countries. GSA’s vision, meaningful lives as we age, is supported by its mission to foster excellence, innovation, and collaboration to advance aging research, education, practice, and policy. GSA is home to the National Academy on an Aging Society (a nonpartisan public policy institute) and the National Center to Reframe Aging.