Thursday, June 19, 2025

 

Doctors need better guidance on AI



To avoid burnout and medical mistakes, health care organizations should train physicians in AI-assisted decision-making




University of Texas at Austin





Artificial intelligence is everywhere — whether you know it or not. In many fields, AI is being touted as a way to help workers at all levels accomplish tasks, from routine to complicated. Not even physicians are immune.

But AI puts doctors in a bind, says Shefali Patil, associate professor of management at Texas McCombs, in a recent article. Health care organizations are increasingly pushing physicians to rely on assistive AI to minimize medical errors, but they offer little direct support for how to use it.

The result, Patil says, is that physicians risk burnout, as society decides whom to hold accountable when AI is involved in medical decisions. Paradoxically, they also face greater chances of making medical mistakes. This interview has been edited for length and clarity.

Your article discusses the phenomenon of superhumanization. Unlike the rest of us, doctors are thought to have extraordinary mental, physical, and moral capacities, and they may be held to unrealistic standards. What pressures does this place on medical professionals?

AI is generally meant to aid and enhance clinical decisions. When an adverse patient outcome arises, who gets the blame? It’s up to the physician now to decide whether to take the machine’s recommendation and to anticipate what will happen if there’s an adverse patient result.

There are two possible types of errors: false positives and false negatives. With a false positive, the AI flags an illness as serious, and the doctor may order treatments that turn out to be unnecessary. With a false negative, the patient is seriously ill and the doctor doesn’t catch it.
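The two error types trade off against each other whenever a recommendation depends on a threshold. As a purely illustrative sketch (toy data and a hypothetical risk score, not anything from the study), lowering the threshold for flagging a patient reduces false negatives at the cost of more false positives, and vice versa:

```python
# Illustrative only: how a cutoff on a hypothetical AI risk score trades
# false positives (unnecessary treatment) against false negatives
# (missed illness). Toy data, not from the study.

def count_errors(scores, truly_ill, threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for s, ill in zip(scores, truly_ill) if s >= threshold and not ill)
    fn = sum(1 for s, ill in zip(scores, truly_ill) if s < threshold and ill)
    return fp, fn

# Hypothetical risk scores and true illness status for six patients
scores    = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]
truly_ill = [True, True, False, True, False, False]

# A low threshold flags more patients: fewer missed illnesses, more
# unnecessary treatments. A high threshold does the opposite.
print(count_errors(scores, truly_ill, 0.2))  # aggressive cutoff -> (2, 0)
print(count_errors(scores, truly_ill, 0.7))  # conservative cutoff -> (0, 1)
```

Neither cutoff eliminates both error types, which is exactly the judgment call the physician is left to make.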

The doctor has to figure out how to use AI software systems but has no control over the systems that the hospital buys. It all has to do with liability. There are no tight regulations around AI.

AI diagnoses, which are supposed to make doctors’ lives easier and reduce medical errors, are potentially having an opposite effect. Why?

The promise of AI is to alleviate some of the decision-making pressure on physicians: to make their jobs easier and lead to less burnout.

But these promises come with liability issues. AI vendors do not reveal how their algorithms actually work. With limited transparency into how an algorithm reaches a decision, it’s difficult to calibrate when to rely on AI and when not to.

If you don’t use it, and there’s a mistake, you’ll be asked why you did not take the AI recommendation. Or, if AI makes a mistake, you’re held responsible, because it’s not a human being. That’s the tension.

What risks does this situation pose to patient care?

People want a physician who’s competent and decisive without feeling a sense of analysis paralysis because of information overload. Decision-making uncertainty and anxiety cause physicians to second-guess themselves. That leads to poor decision-making and, subsequently, poor patient care.

You predict that medical liability will depend on who people believe is at fault for a mistake. How could that expectation increase the risk of doctor burnout and mistakes?

Decision-making research suggests that people who suffer from performance anxiety and constantly second-guess themselves are not thinking logically through decisions. They’re questioning their own judgments.

That’s a very strong, accepted finding in the field of organizational behavior. It’s not specific to doctors, but we’re extrapolating to them.

What strategies can health care organizations use to alleviate those pressures and support physicians in using AI?

One of the big things that needs to be implemented with medical education is simulation training. It can be done as part of continuing education. It’s going to be very significant, because this is the future of medicine. There’s no turning back.

Learning how these systems actually work and understanding how they update and make a recommendation based on medical literature and past case outcomes is important in effective decision-making.

What do you mean when you write about a “regulatory gap”?

We mean that legal regulations always lag behind technological advances. You’re never going to get fair and effective regulations that meet everybody’s interests. The liability risk is always there, and the perception of blame always comes after the fact. That’s why we’re saying the onus should be on administrators to help physicians deal with this issue.

Can you offer some practical advice for doctors, suggesting some do’s and don’ts for using AI assistance?

Right now, there is very little assistance from hospital administrators in teaching physicians how to calibrate the use of AI. More needs to be done.

Administrators need to implement more practical support that heavily relies on feedback from clinicians. At the moment, administrators don’t get that feedback. Performance outcomes, such as what was useful and what was not, need to be tracked.

“Calibrating AI Reliance – A Physician’s Superhuman Dilemma,” co-authored with Christopher Myers of Johns Hopkins University and Yemeng Lu-Myers of Johns Hopkins Medicine, is published in JAMA Health Forum.

 

CARNIVORE STUDIES

Study identifies molecular markers related to meat quality in Nelore cattle



By combining different techniques, researchers at São Paulo State University in Brazil have revealed biological pathways related to tenderness, fat deposition, and other relevant characteristics of the meat of the predominant cattle breed in Brazil




Fundação de Amparo à Pesquisa do Estado de São Paulo





Researchers at São Paulo State University (UNESP) in Brazil have identified a robust set of genetic markers associated with meat quality in the Nelore cattle breed (Bos taurus indicus) genome. The results pave the way for substantial progress in the genetic enhancement of the Zebu breed, which accounts for about 80% of the Brazilian beef herd. The research has direct implications for the productivity and quality of Brazilian beef, reinforcing the country’s standing as a major beef exporter. The results were published in the journal Scientific Reports.

In previous studies, the group had identified genes and proteins by studying meat and carcass characteristics separately using different techniques. For the current study, however, the researchers integrated these techniques and examined multiple characteristics using data from 6,910 young Nelore bulls from four commercial genetic improvement programs.

Because the biological material was collected immediately after slaughter, a comprehensive and detailed assessment of characteristics directly influencing meat quality was possible.

“The group had already made significant progress using different ‘omics’ [genomics, transcriptomics, and proteomics] approaches, but it became increasingly clear that no single technique is sufficient to understand the complexity of the biological systems that control variation in meat and carcass quality,” says Gabriela Frezarim, first author of the study. She conducted the study during her PhD at the Faculty of Agricultural and Veterinary Sciences (FCAV) at UNESP in Jaboticabal.

“By integrating these omics techniques, we were able to elucidate not only the isolated genes but also the biological networks involved in the variation of certain animal phenotypes,” adds the researcher, who was supervised by Lucia Galvão de Albuquerque, a professor at FCAV-UNESP. 

Albuquerque is coordinating the project “Genetic aspects of quality, efficiency, and sustainability of meat production in Nelore cattle”, which is supported by FAPESP. Some of the work was conducted during a previous project, which was also funded by the Foundation (read more at: agencia.fapesp.br/28157).

Largest exporter in the world

The results of the current study provide researchers with a series of accurate molecular data that can be used by the productive sector in the future to select animals that produce higher-quality meat and carcasses with a higher commercial yield.

Brazil leads the world in meat exports with 2.89 million tons in 2024. However, the Nelore zebu is known for having meat that is less tender than that of taurine breeds, such as the European Angus (Bos taurus taurus).

In recent years, other research groups have made progress in developing chips that can identify genetic variations associated with characteristics that are of interest for selecting Nelore cattle (read more at: revistapesquisa.fapesp.br/en/cattle-genes/). 

The present study goes further by investigating not only isolated genes, but also complex molecular networks related to extreme performance (high and low) in meat quality traits. The study emphasizes genes and metabolic pathways that explain the differences between individuals. These findings expand the possibilities for genetic selection.

“When working at the DNA level, there’s no guarantee that a particular variation in the genome will necessarily result in the production of a specific RNA or protein because there are complex biological processes involved in these stages that aren’t yet fully understood. Our study seeks to understand these pathways and identify the molecular basis of meat and carcass characteristic expression,” explains Larissa Fonseca, co-supervisor of the study. She is a postdoctoral researcher at FCAV-UNESP and a FAPESP scholarship recipient.

From muscle to meat

“Understanding the molecular mechanisms that influence meat and carcass quality is essential to explaining the phenotypic variability observed in Nelore cattle. The study offers an integrated view of the biological pathways involved in these characteristics and identifies genes and proteins that directly affect tenderness, marbling, and subcutaneous fat thickness,” says Lúcio Mota, another co-author of the study who is conducting postdoctoral research at FCAV-UNESP with a scholarship from FAPESP.

The researcher explains that this approach provides clear molecular bases for the phenotypic differences observed between animals, allowing for more precise selection strategies in genetic improvement.

For example, the researchers found that genes associated with growth, cell cycle regulation, and heat shock proteins are directly linked to meat tenderness.

These proteins maintain muscle structure and control fiber degradation after slaughter, which directly impacts meat tenderness. The expression levels of these genes and proteins differ between cattle, which helps explain why some Nelore cattle have more tender meat, as they favor a more efficient breakdown of muscle fibers after slaughter.

Another conclusion was that genes, transcripts, and proteins involved in organizing the cytoskeleton (the structure that maintains cell shape) and in programmed cell death (apoptosis) directly influence muscle development and consequently the loin eye area (LEA). LEA is an international measure that indicates muscle mass and carcass yield.

Regarding marbling, the intramuscular fat responsible for the flavor and juiciness of meat, proteins related to fatty acid synthesis and composition were identified, along with proteins involved in actin binding and microtubule formation, which are essential for various cellular functions. These findings suggest that the regulation of these proteins may directly impact intramuscular fat deposition, thereby influencing the sensory quality of meat.

Finally, genes associated with the regulation of energy metabolism and muscle tissue remodeling were identified as important for subcutaneous fat thickness, which is a relevant measure of carcass quality.

“This is an initial study, but it provides important guidelines for genetic improvement programs and the development of more effective selection strategies in Nelore cattle for meat and carcass traits,” says Albuquerque.

The researchers now plan to expand their analyses to achieve an even higher level of accuracy. This could allow them to more accurately select animals that will directly improve the quality of Brazilian beef.

About São Paulo Research Foundation (FAPESP)
The São Paulo Research Foundation (FAPESP) is a public institution with the mission of supporting scientific research in all fields of knowledge by awarding scholarships, fellowships and grants to investigators linked with higher education and research institutions in the State of São Paulo, Brazil. FAPESP is aware that the very best research can only be done by working with the best researchers internationally. Therefore, it has established partnerships with funding agencies, higher education, private companies, and research organizations in other countries known for the quality of their research and has been encouraging scientists funded by its grants to further develop their international collaboration. You can learn more about FAPESP at www.fapesp.br/en and visit FAPESP news agency at www.agencia.fapesp.br/en to keep updated with the latest scientific breakthroughs FAPESP helps achieve through its many programs, awards and research centers. You may also subscribe to FAPESP news agency at http://agencia.fapesp.br/subscribe.
 

Chronic pain hits US rural residents hardest

Pain may explain higher opioid use in rural areas, UTA study finds



University of Texas at Arlington





A new study from The University of Texas at Arlington reveals that people who live in rural areas are more likely to have chronic pain than those in urban settings. They’re also more likely to go from having no pain or occasional pain to chronic pain. The findings may help explain higher opioid prescription rates in rural communities and could guide future research into the root causes of this disparity.

“We already know about the rural-urban gap in mortality and life expectancy,” said Feinuo Sun, UT Arlington assistant professor of kinesiology and lead author of the study in The Journal of Rural Health. “But when you look at pain, especially chronic pain, it becomes clear that rural residents face additional burdens.”

Chronic pain has been previously linked to higher risks of disability and mortality and contributes to increased health care costs—estimated at between $261 billion and $300 billion annually in the U.S. One key takeaway from Dr. Sun’s study is the importance of timely intervention for middle-aged adults in rural communities, as they are among the most vulnerable to developing chronic pain.

“Without early intervention, it can have serious long-term consequences, including premature mortality,” Sun said. “That’s why targeted outreach and early pain management strategies are so important.”

In her research, Sun, who has expertise in demography and population health, uses national data and a spatial analysis approach—a way of mapping how factors like health care services, job types and regional economic conditions shape health outcomes depending on where people live. In a 2024 study she authored, she found that rural residents expect to live more years with chronic pain than suburban and urban residents.

Sun’s latest findings suggest the chronic pain disparities are not solely due to limited access to health care in rural communities. Rural residents are more likely to work physically demanding jobs and experience higher poverty rates, both of which contribute to chronic pain. Elevated pain levels, along with fewer treatment options, may help explain the heavier reliance on opioids in these communities.

Sun’s research seeks to distinguish the root causes for higher opioid demand in rural areas.

“The goal for future research is to understand the causes of these disparities and to examine how differences in pain treatment between rural and urban areas contribute to the overall pain gap,” she said.

About The University of Texas at Arlington (UTA)

Celebrating its 130th anniversary in 2025, The University of Texas at Arlington is a growing public research university in the heart of the thriving Dallas-Fort Worth metroplex. With a student body of over 41,000, UTA is the second-largest institution in the University of Texas System, offering more than 180 undergraduate and graduate degree programs. Recognized as a Carnegie R-1 university, UTA stands among the nation’s top 5% of institutions for research activity. UTA and its 280,000 alumni generate an annual economic impact of $28.8 billion for the state. The University has received the Innovation and Economic Prosperity designation from the Association of Public and Land Grant Universities and has earned recognition for its focus on student access and success, considered key drivers to economic growth and social progress for North Texas and beyond.

 

The hidden bias pushing US women out of computer science



Stevens professor’s research reveals systemic undervaluation of applied research that disproportionately affects women




Stevens Institute of Technology





Hoboken, N.J., June 17, 2025 – At the dawn of computing, women were the early adopters of computational technology, working with punch cards in what was then considered secretarial work. As computer science evolved into a prestigious field focused on algorithms and theory, women became – and remained – underrepresented. Today, only 23% of bachelor's and doctoral degrees in computer science are awarded to women, and just 18% of full professors are women — fewer than in the 1980s.

A new study by Dr. Samantha Kleinberg, Farber Chair Professor of Computer Science at Stevens Institute of Technology, reveals a troubling pattern that may help explain this persistent gap: The type of research that successfully attracts women to computing is systematically devalued once they enter the field.

The Applied vs. Theoretical Divide

Research in many fields generally falls into two categories. Applied research aims to create new products, technologies or solutions to specific real-world problems — like developing algorithms to improve medical diagnoses or creating systems to address social inequities. In computing, theoretical research seeks to gain deeper insight into fundamental principles — such as proving the mathematical properties of algorithms or advancing our understanding of computational complexity.

“When you walk into a room at an applied computing conference, you’ll see a balance between women and men attendees,” Kleinberg observes. “At conferences that focus more on theory, the room looks vastly different. There are significantly fewer women than men.”

While both types of research are essential for advancing computer science, Kleinberg’s study reveals they are not valued equally by the academic community. This may reflect traditional academic preferences for theoretical work requiring deep mathematical expertise, though many researchers contribute to both areas throughout their careers. This pattern echoes prior research showing that male-dominated subfields like theoretical computer science tend to have higher institutional prestige than areas with more women, such as human-computer interaction. Kleinberg’s work goes further by examining specific perceptions, funding decisions and citation patterns.

Uncovering Systematic Bias

That disparity, combined with her personal experiences with negative views of applied research, prompted Kleinberg to conduct a comprehensive study with collaborator Jessecae Marsh, professor of psychology at Lehigh University. They surveyed tenured and tenure-track faculty across the top 100 computer science departments in the United States to understand perceptions of researchers who engage in applied versus theoretical work.

The findings, published in the journal IEEE Access as “Where the Women Are: Gender Imbalance in Computing and Faculty Perceptions of Theoretical and Applied Research,” reveal significant bias against applied researchers and their work.

Faculty rated researchers engaged in applied research as less likely to publish their work in prestigious venues, receive tenure or promotion, obtain awards and get funding. More concerning, faculty rated these researchers as less brilliant, creative and technically skilled than their theory-focused counterparts — despite rating the applied work itself as equally important and worth doing.

“I wanted to understand this dynamic I was seeing,” Kleinberg explains. “So we thought, let’s find out what people actually think about this research and the people who do it.”

The Data Confirms the Bias

Comprehensive analysis confirmed the survey findings. Data from publications, hiring, funding and awards shows that applied research does indeed lead to worse career outcomes.

Kleinberg then used data from authors of publications and grants to test the hypothesis that women were more represented in applied research. To ensure accuracy in her analysis, rather than using tools that match first names to gender, Kleinberg manually examined over 11,000 American academics’ profiles. “I looked up all 11,524,” she shares. “There are tools to do it automatically based on first name, but they’re less accurate for Chinese names and others that are not strongly gendered, so I had to do this manually.”
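The limitation Kleinberg describes can be sketched with a toy first-name lookup (the names and table here are hypothetical; real tools draw on large census-style frequency data): any name that is absent from the reference data, or that is used across genders, cannot be resolved automatically and falls back to manual profile checks.

```python
# Toy sketch of first-name gender inference and why it fails for names
# that are not strongly gendered. The lookup table is hypothetical;
# real tools use large census-derived name-frequency datasets.

NAME_GENDER = {
    "mary": "F",
    "john": "M",
    # Many names, including most romanized Chinese given names, are used
    # across genders and cannot be resolved from the name alone.
    "wei": None,
    "jordan": None,
}

def infer_gender(first_name):
    """Return 'F', 'M', or None when the name is ambiguous or unknown."""
    return NAME_GENDER.get(first_name.lower())

names = ["Mary", "Wei", "Jordan", "Priya"]
unresolved = [n for n in names if infer_gender(n) is None]
print(unresolved)  # these would require a manual profile check
```

Every `None` here is a profile someone has to look up by hand, which is why the manual review of all 11,524 academics was necessary for accuracy.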

Kleinberg found that women are more highly represented in applied research areas than theoretical ones, meaning this bias disproportionately affects their career prospects.

The Recruitment Paradox

The irony is striking: Universities have successfully increased women’s participation in computer science by highlighting its applications. When universities introduced interdisciplinary CS+X programs – combining computing with fields like anthropology, biology, or music – the number of women students grew significantly. These programs appeal to students who want to apply their coding and algorithm-building skills to solving real-world problems rather than pursuing computing for its own sake.

“It’s not clear whether it’s actually their interest or the culture of the field that makes theoretical work unappealing,” Kleinberg says. “It might be that women do want to do theory but feel less welcomed in those spaces.”

The research suggests academia may be pushing women away from theoretical computing into applied fields through cultural barriers, then penalizing them for that work.

Why This Matters Beyond Academia

Computer science benefits from varied perspectives and viewpoints — and suffers when there’s a lack of them. Just as early clinical trials that excluded women as subjects led to treatments that were less effective for women, computing research needs diverse voices to create algorithms and tools that work for everyone.

“I do research in health,” Kleinberg notes. “Ultimately, we want our algorithms and tools to be used by everyone and to be applied to everyone. Science is better when it reflects everybody.”

The study’s implications extend beyond gender equity. As applied computing already transforms healthcare, criminal justice and accessibility technology, systematic devaluation of this work could discourage crucial research that addresses society’s most pressing challenges.

Moving Forward

Kleinberg draws parallels with how academic institutions typically undervalue teaching and service compared to research. “It’s interesting to see the same divide when it comes to theoretical and applied research, where academics believe the work itself is worth doing, but it’s not as rewarded.”

Addressing this bias will require systemic changes in how universities evaluate research impact, train faculty to recognize unconscious bias and structure promotion and tenure decisions to value both theoretical advances and practical applications.

About the Research

The study appears in IEEE Access and was conducted with approval from Stevens Institute of Technology’s IRB. The research involved surveys of 100 faculty members from top-ranked computer science departments and analysis of publication, funding and award data across multiple venues and programs.