Wednesday, October 15, 2025

New AI tool makes medical imaging process 90% more efficient



Rice approach sets standard for brain and other medical imaging



Rice University

Image: Kushal Vyas is an electrical and computer engineering doctoral student at Rice University and first author on a paper presented at the Medical Image Computing and Computer Assisted Intervention Society, or MICCAI. (Photo by Jeff Fitlow/Rice University)




HOUSTON – (Oct. 14, 2025) – When doctors analyze a medical scan of an organ or area in the body, each part of the image has to be assigned an anatomical label. If the brain is under scrutiny, for instance, its different parts have to be labeled as such, pixel by pixel: cerebral cortex, brain stem, cerebellum, etc. The process, called medical image segmentation, guides diagnosis, surgery planning and research.

In the days before artificial intelligence (AI) and machine learning (ML), clinicians performed this crucial yet painstaking and time-consuming task by hand. Over the past decade, U-Nets, a type of AI architecture specifically designed for medical image segmentation, have become the go-to tool instead. However, U-Nets require large amounts of data and computing resources to train.

“For large and/or 3D images, these demands are costly,” said Kushal Vyas, a Rice electrical and computer engineering doctoral student and first author on a paper presented at the Medical Image Computing and Computer Assisted Intervention Society, or MICCAI, the leading conference in the field. “In this study, we proposed MetaSeg, a completely new way of performing image segmentation.”

In experiments using 2D and 3D brain magnetic resonance imaging (MRI) data, MetaSeg achieved the same segmentation performance as U-Nets while needing 90% fewer parameters – the key variables AI/ML models derive from training data and use to identify patterns and make predictions.

The study, titled “Fit Pixels, Get Labels: Meta-learned Implicit Networks for Image Segmentation,” won the best paper award at MICCAI, selected from a pool of more than 1,000 accepted submissions.

“Instead of U-Nets, MetaSeg leverages implicit neural representations – a neural network framework that has hitherto not been thought useful or explored for image segmentation,” Vyas said.

An implicit neural representation (INR) is an AI network that encodes a medical image as a mathematical function, mapping each pixel in a 2D image, or each voxel in a 3D one, to its signal value (color, brightness, etc.).
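
To make the idea concrete, here is a minimal sketch, in Python/PyTorch, of what fitting an INR to a single image involves: a small network is trained so that, given a pixel’s coordinates, it outputs that pixel’s intensity. The architecture, sizes and training loop below are illustrative assumptions, not the model used in the study.

```python
# Illustrative sketch of an implicit neural representation (INR):
# a small network that maps pixel coordinates to signal values.
# Architecture and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Maps (x, y) coordinates in [0, 1]^2 to an image intensity."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted pixel intensity
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(coords)

# Fit the INR to one image: sample every pixel coordinate and regress
# the network output onto that pixel's observed intensity.
H, W = 64, 64
image = torch.rand(H, W)  # stand-in for a single MRI slice
ys, xs = torch.meshgrid(torch.linspace(0, 1, H),
                        torch.linspace(0, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
targets = image.reshape(-1, 1)

model = CoordinateMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(coords) - targets) ** 2).mean()
    loss.backward()
    opt.step()
```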

While INRs offer a detailed yet compact way to represent information, they are also highly specific, meaning they typically work well only for the single signal or image they were trained on: An INR trained on a brain MRI does not learn general rules about what different parts of the brain look like, so if handed a scan of a different brain, it would typically falter.

“INRs have been used in the computer vision and medical imaging communities for tasks such as 3D scene reconstruction and signal compression, which only require modeling one signal at a time,” Vyas said. “However, it was not obvious before MetaSeg how to use them for tasks such as segmentation, which require learning patterns over many signals.”

To make INRs useful for medical image segmentation, the researchers taught them to predict both the signal values and the segmentation labels of a given image. To do so, they used meta-learning, an AI training strategy often described as “learning to learn” that helps models rapidly adapt to new information.

“We prime the INR model parameters in such a way so that they are further optimized on an unseen image at test time, which enables the model to decode the image features into accurate labels,” Vyas said.

This special training allows the INRs not only to quickly adjust themselves to match the pixels or voxels of a previously unseen medical image but also to decode its labels, instantly predicting where the outlines of different anatomical regions should go.
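
The sketch below illustrates this “fit pixels, get labels” idea in schematic form: an INR with two output heads is adapted to a new image by fitting only the observed pixel intensities, after which the label head is read out. It is a simplified illustration under stated assumptions, not the authors’ code.

```python
# Schematic of the "fit pixels, get labels" idea. The network is assumed
# to start from a meta-learned initialization (obtained, in the actual
# method, by training over many labeled scans; not shown here).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 4  # hypothetical number of anatomical labels

class SegINR(nn.Module):
    """Coordinate network with a signal head and a label head."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.signal_head = nn.Linear(hidden, 1)           # pixel intensity
        self.label_head = nn.Linear(hidden, NUM_CLASSES)  # segmentation logits

    def forward(self, coords):
        h = self.body(coords)
        return self.signal_head(h), self.label_head(h)

def adapt_and_segment(model, coords, intensities, steps=50, lr=1e-2):
    """Test-time adaptation: fit the observed pixels, then read out labels."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred_signal, _ = model(coords)
        F.mse_loss(pred_signal, intensities).backward()
        opt.step()
    with torch.no_grad():
        _, logits = model(coords)
    return logits.argmax(dim=-1)  # one predicted label per pixel

# Example usage (with synthetic coords/intensities as in the earlier sketch):
# labels = adapt_and_segment(SegINR(), coords, targets)
```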

“MetaSeg offers a fresh, scalable perspective to the field of medical image segmentation that has been dominated for a decade by U-Nets,” said Guha Balakrishnan, assistant professor of electrical and computer engineering at Rice and a member of the university’s Ken Kennedy Institute. “Our research results promise to make medical image segmentation far more cost-effective while delivering top performance.”

Balakrishnan, the corresponding author on the study, is part of a thriving ecosystem of Rice researchers at the forefront of digital health innovation, which includes the Digital Health Initiative and the joint Rice-Houston Methodist Digital Health Institute. Ashok Veeraraghavan, chair of the Department of Electrical and Computer Engineering and professor of electrical and computer engineering and computer science at Rice, is also an author on the study.

While MetaSeg can be applied to a range of imaging contexts, its demonstrated potential to enhance brain imaging illustrates the kind of research Proposition 14 – on the ballot in Texas Nov. 4 – could help expand statewide.

The research was supported by the U.S. National Institutes of Health (R01DE032051), the Advanced Research Projects Agency for Health (D24AC00296) and the National Science Foundation (2107313, 1648449). The content herein is solely the responsibility of the authors and does not necessarily represent the official views of the funding organizations and institutions.


-30-

This news release can be found online at news.rice.edu.

Follow Rice News and Media Relations via Twitter @RiceUNews.

Peer-reviewed paper:

Fit Pixels, Get Labels: Meta-learned Implicit Networks for Image Segmentation | The Medical Image Computing and Computer Assisted Intervention Society - MICCAI 2025 | DOI: 10.1007/978-3-032-04947-6_19

Authors: Kushal Vyas, Ashok Veeraraghavan, Guha Balakrishnan

https://doi.org/10.1007/978-3-032-04947-6_19

Access associated media files:

https://rice.box.com/s/po3ew9sf4mpgxfhdh2i2k0t7wd0vp2ke
(Photos by Jeff Fitlow/Rice University)


About Rice:

Located on a 300-acre forested campus in Houston, Texas, Rice University is consistently ranked among the nation’s top 20 universities by U.S. News & World Report. Rice has highly respected schools of architecture, business, continuing studies, engineering and computing, humanities, music, natural sciences and social sciences and is home to the Baker Institute for Public Policy. Internationally, the university maintains the Rice Global Paris Center, a hub for innovative collaboration, research and inspired teaching located in the heart of Paris. With 4,776 undergraduates and 4,104 graduate students, Rice’s undergraduate student-to-faculty ratio is just under 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is ranked No. 1 for lots of race/class interaction and No. 7 for best-run colleges by the Princeton Review. Rice is also rated as a best value among private universities by the Wall Street Journal and is included on Forbes’ exclusive list of “New Ivies.”


Generative art enhances virtual shopping experience



Cornell University





ITHACA, N.Y. – Art infusion theory – the idea that displaying art in retail settings can positively impact consumer behavior – can be applied to the metaverse with similar results, a Cornell design researcher has shown.

Employing algorithm-fueled generative art, So-Yeon Yoon, professor of human centered design at Cornell University, found that installing such art in a virtual store enhanced perceptions of exclusivity and aesthetic pleasure for both mass-market and luxury retailers.

“When we think about art, we think it’s more closely aligned with the luxury market,” Yoon said. “But this AI-powered generative art has this capacity to be more practical, affordable and sustainable compared to the expensive artwork that only a luxury market may afford. I think it’s encouraging to see that the mass brand can benefit, as well.”

Yoon is senior author of “Exploring the Impact of Generative Art in Virtual Stores: A Metaverse Study on Consumer Perception and Approach Intention,” which was published recently in the Journal of Retailing and Consumer Services.

“Big events incorporate generative art a lot, but I think it’s more accessible than people might think,” she said. “I see this research as an opportunity to show that it can be adopted in virtual retail stores, and see what effect it actually has on customers.”

The researchers conducted two experiments involving generative art, which can be both dynamic and static, in virtual fashion retailers. For the first, they created pairs of virtual stores (two mass-market and two luxury brands) that were identical except that one contained a generative art display – a video projected onto a white wall, with ever-changing black-and-white patterns. The other featured a plain white wall.

The 120 study participants were all women, with an average age of around 28. The team found that the presence of art was met with positive reactions in both types of stores; responses were overwhelmingly positive to survey statements such as “I perceive this store as luxurious” and “All in all, this store is attractive to me.”

“We actually found that it had more of an effect on the mass brand shoppers versus the luxury brand,” Yoon said. “It worked better for the participants less familiar with art, which was surprising.”

The second study, involving 90 women, sought to determine which form of art, static or dynamic, had a stronger effect on consumer behavior, including the likelihood of spreading the word online, known as electronic word-of-mouth (e-WOM). Perceptions of exclusivity and aesthetic pleasure were greater in the dynamic-art condition than in the static condition, as were e-WOM intentions.

Yoon said that while the addition of fine art in a mass-market retail outlet might not make economic sense, installing some form of computer-generated art – already popular in major sports events, concerts and interactive gallery shows – could be an option. She is exploring other settings that could benefit from this type of dynamic installation.

“I’d like to explore contexts beyond the retail market – like assisted-living or health care facilities, or retirement communities,” she said. “You don’t need artists creating one after another. This is constant, like a living art form. It’s dynamic, and once it’s created, you have an unlimited number of variations.”

For additional information, see this Cornell Chronicle story.

Cornell University has dedicated television and audio studios available for media interviews.

-30-

 

Concordia study links urban heat in Montreal to unequal greenspace access



Neighbourhoods with lower incomes, less access to education, and higher proportions of racialized residents tend to be hotter due to less vegetation




Concordia University

Image: Lingshan Li: “We need to care more about people who are most exposed to excess heat in urban areas.” (Credit: Concordia University)




Trees are essential to cooling down cities. However, a study by Concordia researchers at the Next Generation Cities Institute and the Loyola Sustainability Research Centre shows that the way trees are distributed means some residents benefit more from them than others.

In a paper published in Urban Forestry & Urban Greening, the authors studied the layout of Montreal’s vegetation (its trees, shrubs and grass) and compared it to daytime temperature readings on the ground, or land surface.

Using satellite imagery and light detection and ranging (LiDAR) data, the researchers found that a 10 per cent increase in tree coverage can lower land surface temperature by 1.4°C. A similar increase in shrubs and grass lowers temperatures by about 0.8°C. They also learned that large, continuous patches of trees cool their surroundings better than small, scattered groupings.

The researchers analyzed and compared vegetation coverage using demographic information from the 2021 Canadian Census. The results revealed that neighbourhoods with higher incomes, higher levels of education, and predominantly white populations tended to have access to higher quality green infrastructure. In contrast, poorer, more racially diverse areas received less cooling benefit from green infrastructure.

Underserved areas also had higher populations of vulnerable age groups, meaning those under five years old and those over 65.

“Demand for the cooling provided by urban vegetation is based on the population of vulnerable groups,” says lead author Lingshan Li, a PhD candidate in the Department of Geography, Planning and Environment. “We need to care more about people who are vulnerable and most exposed to excess heat in urban areas.”

Finding the cooling mismatches

The model draws on three key indicators, developed using data from several sources:

  • Heat exposure – Measured using land surface temperature data from Landsat satellite imagery provided by the US Geological Survey;
  • Vegetation coverage – Assessed through LiDAR and aerial imagery from the Communauté métropolitaine de Montréal’s Metropolitan Canopy Index, which maps vegetation coverage across the island of Montreal;
  • Population data – Drawn from the 2021 Canadian Census, including statistics on age, education, income and visible minority status.

Next, the researchers created a statistical model to predict how vegetation affects surface temperatures. They used three variables: percentage of high vegetation (tree canopy), percentage of low vegetation (shrubs and grass) and a “large patch index of high vegetation,” which measured how extensive and uninterrupted the main tree clusters were within each study area.

Their model explained roughly 80 per cent of the variation in surface temperatures across the island. It also showed that temperatures can be reduced by increasing vegetation coverage, and that larger, connected patches of trees amplify cooling.
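
The passage above describes a linear model of land surface temperature with three vegetation predictors. The snippet below sketches what such a fit could look like on synthetic data; the numbers, variable names and model form are assumptions for illustration, not the study’s data or code.

```python
# Illustrative regression of land surface temperature (LST) on tree cover,
# low vegetation cover and a large-patch index, using synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
tree_pct = rng.uniform(0, 60, n)      # per cent of area under tree canopy
low_veg_pct = rng.uniform(0, 50, n)   # per cent under shrubs and grass
large_patch = rng.uniform(0, 1, n)    # large patch index of high vegetation

# Synthetic LST with noise; slopes roughly echo the reported effects
# (about 1.4 C cooler per 10 per cent tree cover, 0.8 C per 10 per cent
# low vegetation) but all values here are made up.
lst = (35.0 - 0.14 * tree_pct - 0.08 * low_veg_pct
       - 2.0 * large_patch + rng.normal(0, 1.0, n))

X = np.column_stack([np.ones(n), tree_pct, low_veg_pct, large_patch])
beta, *_ = np.linalg.lstsq(X, lst, rcond=None)

pred = X @ beta
r2 = 1 - np.sum((lst - pred) ** 2) / np.sum((lst - lst.mean()) ** 2)
print("coefficients:", beta.round(3), "R^2:", round(r2, 3))
```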

Cooling supply and demand

With this information, they developed a “cooling supply index” – which assigned a value between 0 (low cooling) and 1 (high cooling) – and a “cooling demand index,” which reflected the proportion of residents in vulnerable age groups. Neighbourhoods with higher numbers of these residents were determined to have higher demand for cooling.

Comparing these indices showed where mismatches occurred.
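
As a rough illustration of how such a comparison might be computed, the sketch below scales a made-up cooling estimate and a made-up vulnerable-population share to the 0-to-1 range and flags neighbourhoods where demand outstrips supply. All values, and the simple differencing step, are assumptions, not the indices defined in the paper.

```python
# Toy comparison of a cooling supply index and a cooling demand index.
import numpy as np

def minmax(x):
    """Scale values to the 0-1 range."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# Made-up values for four neighbourhoods
predicted_cooling = np.array([3.1, 1.2, 0.6, 2.4])     # modelled cooling, in C
vulnerable_share = np.array([0.18, 0.31, 0.37, 0.22])  # share under 5 or over 65

supply = minmax(predicted_cooling)  # cooling supply index (0 = low, 1 = high)
demand = minmax(vulnerable_share)   # cooling demand index

gap = demand - supply               # positive gap: high demand, low supply
for i, g in enumerate(gap):
    print(f"neighbourhood {i}: supply={supply[i]:.2f} "
          f"demand={demand[i]:.2f} gap={g:+.2f}")
```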

Wealthier and better-educated areas like Outremont and the West Island had more tree cover and thus greater cooling, whereas Saint-Léonard, Montréal-Nord and Anjou, which have higher proportions of visible minorities or lower average household incomes, were found to have fewer trees and more heat-vulnerable residents.

Li says this study can help planners and municipal authorities prioritize where to build parks and greenspaces so they can make their cities more equitable.

“Urban areas have limited space, so we cannot create as many green spaces as we would like,” she says. “We have to better understand how to manage our urban green infrastructure to maximize its benefits.”

Contributors to this study include Angela Kross, associate professor, Geography, Planning and Environment; Carly Ziter, assistant professor, Biology; and Ursula Eicker, professor, Building, Civil and Environmental Engineering.

Financial support for this study was provided by the Trottier Family Foundation and the Natural Sciences and Engineering Research Council of Canada.

Read the cited paper: “Analyzing spatial patterns of urban green infrastructure for urban cooling and social equity.”