Wednesday, June 18, 2025

  

AI-driven personalized pricing may not help consumers




Carnegie Mellon University





The autonomous nature and adaptability of artificial intelligence (AI)-powered pricing algorithms make them attractive for optimizing pricing strategies in dynamic market environments. However, certain pricing algorithms may learn to engage in tacit collusion in competitive scenarios, resulting in supra-competitive prices and potentially harmful consequences for consumer welfare. This has prompted policymakers and scholars to emphasize the importance of designing rules to promote competitive behavior in marketplaces.

In a new study in Marketing Science, researchers at Carnegie Mellon University investigated the role of product ranking systems on e-commerce platforms in influencing the ability of certain pricing algorithms to charge higher prices. The study’s findings suggest that even absent price discrimination, personalized ranking systems may not benefit consumers.

“We examined the effects of personalized and unpersonalized ranking systems on algorithmic pricing outcomes and consumer welfare,” explains Param Vir Singh, Carnegie Bosch Professor of Business Technologies and Marketing and Associate Dean for Research at the Tepper School of Business, who coauthored the study.

Because of the number of options available, online consumers face a difficult, time-consuming search process. This has led to the emergence of online search intermediaries (e.g., Amazon, Expedia, Yelp), which use algorithms to rank and provide consumers with ordered lists of third-party sellers’ products in response to their queries. These intermediaries reduce search costs and boost consumer welfare by helping consumers find suitable products more efficiently.

In this study, researchers examined two extreme scenarios of product ranking systems that differed in how they incorporated consumer information for generating product rankings. The first system, personalized ranking, used detailed consumer information to rank products based on predicted utility for each individual. The second, called unpersonalized ranking, relied solely on aggregate information across the entire population, resulting in an inability to customize the rankings for individual consumers.
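As a rough illustration of the distinction (a minimal sketch with hypothetical utility values, not the authors' implementation), the two ranking rules can be written as follows:

```python
import numpy as np

# Illustrative sketch: two ranking rules over predicted utilities.
# utilities[i, j] = predicted utility of product j for consumer i (hypothetical values).
rng = np.random.default_rng(0)
utilities = rng.normal(size=(5, 4))          # 5 consumers, 4 products

def personalized_ranking(utilities, i):
    """Rank products for consumer i by that consumer's own predicted utility."""
    return np.argsort(-utilities[i])

def unpersonalized_ranking(utilities):
    """Rank products by average predicted utility across the whole population."""
    return np.argsort(-utilities.mean(axis=0))

print(personalized_ranking(utilities, 0))    # order tailored to consumer 0
print(unpersonalized_ranking(utilities))     # one order shown to every consumer
```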

Researchers used a consumer demand model characterized by search behavior, in which consumers searched sequentially to learn about the utilities they could obtain from various products. In this model, the ranking algorithm suggested by the intermediary affects the order in which consumers evaluate products, with each evaluation incurring a search cost. Consumers engage in optimal search and purchase behavior to maximize their utility. Hence, an intermediary's specific ranking system can steer demand in different ways, with important implications for pricing outcomes.
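A simplified sketch of this kind of costly sequential search might look like the following (the parameter values, the product order, and the fixed reservation threshold are assumptions for illustration; the paper's model uses optimal sequential search):

```python
import numpy as np

# Minimal sketch of sequential search with a per-inspection search cost.
def sequential_search(ranked_products, utilities, prices, search_cost,
                      reservation=1.0, outside_utility=0.0):
    """Consumer inspects products in the given order, paying a cost per inspection,
    and stops once the best net utility found exceeds a reservation threshold."""
    best_value, best_choice, total_cost = outside_utility, None, 0.0
    for j in ranked_products:
        total_cost += search_cost                  # cost of evaluating product j
        net_utility = utilities[j] - prices[j]     # utility net of price, learned on inspection
        if net_utility > best_value:
            best_value, best_choice = net_utility, j
        if best_value >= reservation:              # simplified stopping rule
            break
    return best_choice, best_value - total_cost

utilities = np.array([1.2, 0.8, 1.5, 0.4])
prices = np.array([0.5, 0.3, 0.9, 0.2])
order = np.argsort(-(utilities - prices))          # e.g., a personalized order from the platform
print(sequential_search(order, utilities, prices, search_cost=0.05))
```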

“We compared these two systems to provide a clear understanding of the pricing implications of personalization in ranking technologies for AI-powered pricing algorithms, specifically reinforcement learning (RL) algorithms,” says Liying Qiu, a Ph.D. student at Carnegie Mellon’s Tepper School, who led the study.

“Studying RL pricing algorithms in the context of realistic consumer behavior models is challenging due to the complexity of the dynamics they create, but by setting up controlled simulated environments, we were able to examine how these algorithms evolve and interact over time experimentally,” notes Yan Huang, Associate Professor of Business Technologies at Carnegie Mellon’s Tepper School, who coauthored the study.
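For intuition only, a controlled simulation of this sort can be built around standard Q-learning agents that repeatedly set prices and observe profits; the toy environment below is a sketch under assumed demand and hyperparameters, not the study's actual setup:

```python
import numpy as np

# Minimal sketch of two Q-learning pricing agents in a simulated market.
rng = np.random.default_rng(1)
prices = np.linspace(0.5, 2.0, 7)            # discrete price grid
n = len(prices)
Q = [np.zeros((n, n)), np.zeros((n, n))]     # Q[firm][rival_last_price, own_price]
alpha, gamma, eps = 0.1, 0.9, 0.1
state = [0, 0]                               # index of each firm's last price

def profits(p0, p1):
    """Toy logit demand shared between the two firms and an outside option."""
    e = np.exp([-2 * p0, -2 * p1, 0.0])      # last entry = outside option
    share = e / e.sum()
    return p0 * share[0], p1 * share[1]

for t in range(50_000):
    actions = []
    for firm in range(2):
        s = state[1 - firm]                  # condition on the rival's last price
        a = rng.integers(n) if rng.random() < eps else int(Q[firm][s].argmax())
        actions.append(a)
    r = profits(prices[actions[0]], prices[actions[1]])
    for firm in range(2):
        s, s_next = state[1 - firm], actions[1 - firm]
        Q[firm][s, actions[firm]] += alpha * (
            r[firm] + gamma * Q[firm][s_next].max() - Q[firm][s, actions[firm]]
        )
    state = actions

print("learned prices:",
      prices[int(Q[0][state[1]].argmax())],
      prices[int(Q[1][state[0]].argmax())])
```

In an environment like this, the question the study asks is whether the prices the agents settle on differ systematically depending on which ranking rule routes consumers to products.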

Personalized ranking systems, which rank products in decreasing order of consumers’ utilities, may enable pricing algorithms to charge higher prices, especially when consumers search for products sequentially on a third-party platform. This is because personalized ranking significantly reduces the ranking-mediated price elasticity of demand and thus the incentive to lower prices.

Conversely, unpersonalized ranking systems lead to significantly lower prices and greater consumer welfare. These findings suggest that even without price discrimination, personalization may not necessarily benefit consumers since pricing algorithms can undermine consumer welfare through higher prices. Thus, the study highlights the crucial role of ranking systems in shaping algorithmic pricing behaviors and consumer welfare. 
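The elasticity mechanism can be made concrete with a toy calculation (the numbers below are hypothetical, not results from the study): if a small price cut wins little extra demand because the product's position in the ranking barely changes, measured elasticity is low and cutting prices is less attractive.

```python
# Illustrative arc price elasticity of demand from two (price, quantity) observations.
def arc_elasticity(p0, q0, p1, q1):
    """Percent change in quantity divided by percent change in price (midpoint formula)."""
    dq = (q1 - q0) / ((q0 + q1) / 2)
    dp = (p1 - p0) / ((p0 + p1) / 2)
    return dq / dp

# Hypothetical responses to the same 5% price cut under the two ranking regimes.
print(arc_elasticity(1.00, 100, 0.95, 103))   # weak demand response  -> less incentive to cut prices
print(arc_elasticity(1.00, 100, 0.95, 115))   # strong demand response -> more incentive to cut prices
```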

The study’s results held across various values of the RL learning parameters, different values of the outside good, different types of reinforcement learning algorithms, and markets with multiple firms.

“We conclude that the effectiveness of personalized ranking in improving the match between consumers and products needs to be carefully evaluated against its impact on consumer welfare when pricing is delegated to algorithms,” suggests Kannan Srinivasan, Professor of Management, Marketing, and Business Technology at Carnegie Mellon’s Tepper School, who coauthored the study.

The findings offer insights for policymakers and platform operators responsible for regulating the use of pricing algorithms and designing ranking systems. It is essential to consider the design of ranking systems when regulating AI pricing algorithms to promote competition and consumer welfare.

The study also has implications for consumer data sharing. Increased consumer data sharing may not always result in improved outcomes, even in the absence of price discrimination, because personalized ranking, empowered by access to more detailed consumer data, makes it easier for pricing algorithms to charge higher prices. The negative effect of the higher product prices can outweigh the positive impact of improved product fit, leading to a decline in consumer welfare.

AI TechX grant to advance cattle disease detection



UTIA and Enterprise Sensor Systems team to identify infectious diseases using AI and hyperspectral imaging



University of Tennessee Institute of Agriculture

The EnSenSys Sensor Prototype 2 in operation at a commercial veterinarian location, gathering virus signatures from cattle breath. Ted Moore, Chief Technology Manager, left, works with Jerry Dunlap, Senior Vice President for Business Management, to gather the data.

Credit: Photo courtesy of Enterprise Sensor Systems LLC.




The University of Tennessee Institute of Agriculture (UTIA) AgResearch, in partnership with Enterprise Sensor Systems LLC (EnSenSys) of Alamo, Tennessee, has been awarded a grant through the AI TechX Seed Fund to collaborate on “Rapid Identification of Cattle with Infectious Diseases Using AI and Hyperspectral Imaging.” The award, announced on June 11, 2025, will run from July 1, 2025, through June 30, 2026.

AI TechX is an initiative of AI Tennessee, which aims to accelerate the development and real-world application of artificial intelligence through academic-industry collaboration. The AI TechX Seed Fund specifically supports efforts to build high-impact, interdisciplinary research teams that tackle industrial challenges through innovative AI-driven solutions.

The funded project will expand EnSenSys’s patented ESS Protect technology—originally developed to detect viral signatures in human breath—to detect viral signatures in animals. ESS Protect – Animal will offer rapid, non-invasive, and contactless screening for bovine respiratory disease (BRD) using hyperspectral imaging and advanced machine learning.

The award also marks EnSenSys’s entry into the AI TechX consortium, expanding access to research and collaborative opportunities. “This project strengthens our commitment to delivering breakthrough biosensing tools that protect animal health and food security,” said LtGen John ‘Glad’ Castellaw, USMC (Ret.), CEO of EnSenSys. “It also underscores the importance of university-industry collaboration in transforming AI research into scalable, deployable solutions.”

The research team includes UT faculty members and EnSenSys leadership and subject matter experts. Leveraging hyperspectral data from a 2024 field study at the UTIA Middle Tennessee AgResearch and Education Center in Spring Hill, Tennessee, and from other collaborations with UT and private industry, they will train AI models capable of detecting disease-specific spectral signatures in bovine breath.
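As a loose illustration of that modeling step (synthetic data only; the project's actual spectral bands, labels, and model choices are not described in the announcement), training a baseline classifier on hyperspectral breath spectra could look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Sketch: classify synthetic hyperspectral spectra as healthy vs. BRD-positive.
rng = np.random.default_rng(42)
n_samples, n_bands = 400, 200                     # e.g., 200 spectral bands per capture
X = rng.normal(size=(n_samples, n_bands))         # placeholder spectra
y = rng.integers(0, 2, size=n_samples)            # 0 = healthy, 1 = BRD-positive (synthetic labels)
X[y == 1, 50:60] += 0.5                           # pretend disease shifts one band window

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```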

Members of the Enterprise Sensor Systems research team record data from cattle breath image captures at the UT AgResearch and Education Center at Spring Hill, Tennessee. Shown left to right are Chance Weldon, Chief Sensor Operator; Dereck Seaton, Senior Vice President, Logistics; and Wanda Castellaw, Manager, Administrative Operations Support. Photo courtesy of Enterprise Sensor Systems LLC.

Key outcomes include a preliminary design for a field-deployable hyperspectral sensing unit, machine learning models for classifying cattle health status, and a strategy to scale the technology to other livestock and farming environments. The long-term partnership aims to support the establishment of an AgriAI Center of Excellence at UTIA, an innovation hub for integrating AI and data-driven tools into modern farming. The center would equip producers with predictive analytics, automation, and precision technologies to improve productivity, sustainability, and economic resilience across Tennessee agriculture.

AI TechX is the flagship initiative of AI Tennessee, designed to bridge foundational AI research and industrial application through strategic partnerships, interdisciplinary innovation, and workforce development.

The University of Tennessee Institute of Agriculture comprises the Herbert College of Agriculture, UT College of Veterinary Medicine, UT AgResearch and UT Extension. Through its land-grant mission of teaching, research and outreach, the Institute touches lives and provides Real. Life. Solutions. to Tennesseans and beyond. utia.tennessee.edu.

  



A group of UTIA researchers first met with researchers from Enterprise Sensor Systems in May 2024. Shown from left to right are UTIA Vice Chancellor for Advancement Charley Deal; EnSenSys representatives James Buck, Steve Wakham, and John Castellaw; UT AgResearch Dean Hongwei Xin; and UTIA researchers Hao Gan, Yang Zhao, Ashley Morgan and Debra Miller.

Credit: Photo courtesy of Enterprise Sensor Systems LLC.



Machine learning method helps bring diagnostic testing out of the lab




Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign
Graduate student in electrical and computer engineering Han Lee (left) and Professor Brian Cunningham (right).

Credit: Julia Pollack





What if people could detect cancer and other diseases with the same speed and ease of a pregnancy test or blood glucose meter? Researchers at the Carl R. Woese Institute for Genomic Biology are a step closer to realizing this goal by integrating machine learning-based analysis into point-of-care biosensing technologies. 

The new method, dubbed LOCA-PRAM, was reported in the journal Biosensors and Bioelectronics and improves the accessibility of biomarker detection by eliminating the need for technical experts to perform the image analysis.

Traditional medical diagnostic techniques require doctors to send patients’ blood or tissue samples to clinical laboratories where expert scientists carry out the testing procedures and data analysis. 

“Current technologies require patients to visit hospitals to get diagnostics which takes time. A lot of people also have barriers where more appointments may not be financially or spatially feasible,” said Han Lee, first author of the study and a graduate student in the Nanosensors research group. “I think that we can make a difference by developing more point-of-care technologies that are available for people.”

Point-of-care diagnostics are performed and yield results at the site of patient care, whether at home, the doctor’s office, or anywhere in between. This allows for lower cost, easy-to-use, and rapid tests that can help inform next steps. Some examples already adopted in everyday life include urine pregnancy tests, COVID-19 antigen testing kits, and blood glucose meters which allow people with diabetes to respond to dips and spikes in their blood glucose levels throughout the day.

In the point-of-care field, researchers are investigating new ways to integrate these types of technologies into patient care settings, such as appointments with specialists like oncologists or oral surgeons. This would help to reduce the time and financial burden on patients, while improving real-time decision making for physicians.

“Physicians say they would like something similar to when you go in with a bacterial infection. They do a test on you right there and then send you home from your appointment with the right antibiotics that will treat the particular bacteria that you have,” said Brian Cunningham (CGD leader), a professor of electrical and computer engineering. “So why not do a similar thing for choosing the right anti-cancer drug or determining if the drug you've been taking for a couple weeks is starting to work or not.”

Previously, the group reported a new biosensing method called Photonic Resonator Absorption Microscopy, or PRAM, to detect molecular biomarkers—molecules in the body whose presence and levels indicate healthy or disease states. PRAM enables the detection of single biomarker molecules including nucleic acids, antigens, and antibodies; common biosensing techniques instead detect the cumulative signal of hundreds to thousands of molecules. 

Cunningham said, “Basically, what we're doing is shining a red LED light at the bottom of a sensor. Then on the top of the sensor, molecules are landing and getting detected whenever they have a tiny particle made out of gold—which we call gold nanoparticles or AuNPs—attached to it.”

The images generated using PRAM depict a red background with little black spots sprinkled across it. But while these images seem relatively simple, obtaining an accurate count requires a trained eye that can decipher which spots truly correspond to the AuNP-tagged biomarker molecules.

“There are many kinds of artifacts such as dust particles or aggregates of the nanoparticles. If you don’t have a lot of experience, it’s hard to distinguish them,” Lee said. “The conventional counting algorithm that we’ve been using requires adjusting a lot of parameters to get rid of those artifacts.”
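A conventional pipeline of the kind Lee describes might, for example, threshold the image and filter candidate spots by size, with every threshold hand-tuned; the sketch below is illustrative and not the group's actual code:

```python
import numpy as np
from scipy import ndimage

# Sketch of a threshold-and-count approach to PRAM-style images (illustrative parameters).
def count_spots(image, intensity_threshold=0.4, min_area=3, max_area=50):
    """Count dark spots, filtering out artifacts by size; thresholds must be hand-tuned."""
    mask = image < intensity_threshold                 # dark spots on a bright background
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = (sizes >= min_area) & (sizes <= max_area)   # reject dust/aggregates by area
    return int(keep.sum())

img = np.ones((64, 64))
img[10:13, 10:13] = 0.1                                # a plausible nanoparticle spot
img[30:45, 30:45] = 0.1                                # a large artifact, rejected by max_area
print(count_spots(img))                                # -> 1
```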

In order to move this process out of the laboratory and make it better suited for point-of-care environments, Lee proposed integrating machine learning into the image analysis process.

“Han really on his own developed an interest in machine learning after taking a class here at the university just to learn about it,” Cunningham said. “He came to me one day and said that he thought he could make a machine learning algorithm count our black spots more accurately.”

Compared to other biosensing techniques, PRAM lends itself well to incorporating deep learning algorithms because it generates microscope images, rather than just detecting optical signals. But because these algorithms are only as good as the data that trained them, Lee decided to image the same samples using both PRAM and scanning electron microscopy.

The AuNPs, which are 1,000 times smaller than the width of a human hair and show up only as small black spots in the PRAM images, can be visualized more clearly with the electron microscope. In a time-intensive process, Lee cross-referenced every spot in the PRAM images with the electron microscope images to obtain highly accurate data for the machine learning training set.

“Finding the right spot to compare to was actually very challenging because it's like finding a needle in a desert. One way that I devised was to create a reference point, like a lighthouse in a sea. Then from there we can find the exact same spot for registrations,” Lee said.
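Conceptually, such landmark-based cross-referencing can be sketched as a coordinate shift followed by a nearest-neighbor match (a simplified, translation-only illustration, not the registration procedure used in the study):

```python
import numpy as np

# Sketch of landmark ("lighthouse") based registration between PRAM and SEM coordinates.
def register(pram_spots, sem_spots, pram_landmark, sem_landmark, tol=2.0):
    """Shift PRAM spot coordinates into the SEM frame via a shared landmark,
    then keep only spots with an SEM-confirmed particle within `tol` pixels."""
    offset = np.asarray(sem_landmark) - np.asarray(pram_landmark)
    shifted = np.asarray(pram_spots) + offset
    confirmed = []
    for spot in shifted:
        dists = np.linalg.norm(np.asarray(sem_spots) - spot, axis=1)
        if dists.min() <= tol:
            confirmed.append(spot)
    return np.array(confirmed)

pram_spots = [(10, 10), (20, 35)]                  # hypothetical coordinates
sem_spots = [(110, 210), (300, 300)]
print(register(pram_spots, sem_spots, pram_landmark=(0, 0), sem_landmark=(100, 200)))
```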

The resulting deep learning-based method, called Localization with Context Awareness (LOCA), integrated with PRAM, enables real-time, high-precision detection of molecular biomarkers without needing the eyes and experience of a technical expert. When tested, the team found that LOCA-PRAM surpassed conventional techniques in accuracy, detecting lower levels of the biomarkers and minimizing rates of false positives and false negatives.
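For readers unfamiliar with this class of model, a fully convolutional detector that outputs a per-pixel spot probability map gives the general flavor; the sketch below is a generic stand-in, not the LOCA architecture described in the paper:

```python
import torch
import torch.nn as nn

# Minimal sketch of a fully convolutional spot detector producing a per-pixel probability map.
class SpotDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),            # per-pixel spot logit
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))               # probability each pixel is a spot

model = SpotDetector()
image = torch.rand(1, 1, 64, 64)                         # one grayscale PRAM-style frame
prob_map = model(image)
print(prob_map.shape, int((prob_map > 0.5).sum()))       # map size and thresholded spot pixels
```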

“The whole journey of my PhD was started because I wanted to make changes in the point-of-care field,” Lee said. “I just want to do everything in my power to develop more advanced technologies that can be impactful in the future.”

The publication, “Physically grounded deep learning-enabled gold nanoparticle localization and quantification in photonic resonator absorption microscopy for digital resolution molecular diagnostics,” can be found at https://doi.org/10.1016/j.bios.2025.117455 and was supported by the National Institutes of Health, a USDA AFRI Nanotechnology grant, and the National Science Foundation.

Panel discussion: Moderated by Vice Director ISHII Kaori, Chuo University ELSI Center. Associate Professor KUDO Fumiko of The University of Osaka (Information Law and Policy), Associate Professor SAITO Kunifumi of Keio University (Civil Law), President TOKUDA Hideyuki of NICT (Computer Science), Senior Vice President KINOSHITA Shingo, Head of R&D Planning Department at NTT Corporation, and Professor KIMURA Tadamasa of Rikkyo University (Cultural Anthropology).

Picture 2. Keynote speech by Professor SUDO Osamu: Professor SUDO of Chuo University predicted that the current situation of "AI democratization and popularization," brought about by using advanced AI through natural language (prompts), will accelerate further, leading to the development of highly versatile multimodal AI and robots and their social proliferation.

Credit: Photo: Chuo University


AI-powered study shows surge in global rheumatoid arthritis since 1980, revealing local hotspots



A novel analysis published in the Annals of the Rheumatic Diseases details significant socioeconomic disparities and worsening inequalities in disease burden



Elsevier

An AI-powered study in the Annals of the Rheumatic Diseases shows a surge in global rheumatoid arthritis since 1980, revealing local hotspots, socioeconomic disparities, and worsening inequalities in disease burden. This image depicts the geographic distribution of age-standardized incidence rates in 2021.

Credit: Annals of the Rheumatic Diseases / Jin et al.




Philadelphia, June 16, 2025 – The most comprehensive analysis of rheumatoid arthritis data to date reveals that demographic changes and uneven health infrastructure have exacerbated the rheumatoid arthritis burden since 1980 and shows global disparities on a granular level. The AI-powered study in the Annals of the Rheumatic Diseases, published by Elsevier, utilized deep learning techniques and policy simulations to uncover actionable insights for localized interventions that national-level studies have previously missed. Its design yielded highly precise, dynamic projections of further disease burden to 2040.

Principal investigator Queran Lin, MPH, WHO Collaborating Centre for Public Health Education and Training, Faculty of Medicine, Imperial College London; and Clinical Research Design Division, Clinical Research Centre, Sun Yat-Sen Memorial Hospital, Guangzhou, explains, “While previous Global Burden of Disease (GBD) studies have provided important insights, they have largely focused on high-level descriptions and visualizations at global and national scales, failing to capture local disparities or the dynamic interactions between socioeconomic development and disease trends. With access to sufficient computational resources and advanced analytical capabilities, our Global-to-Local Burden of Disease Collaboration aims to unlock the full potential of the GBD dataset (pioneered by the Institute for Health Metrics and Evaluation, University of Washington). By employing cutting-edge approaches such as transformer-based deep learning models, we were able to generate the most granular disease burden estimates to date, offering a new framework for guiding precision public health across diverse populations.”

Using GBD data, the study integrates the largest spatiotemporal rheumatoid arthritis dataset to date, spanning 953 global-to-local locations from 1980 to 2021, with a novel deep learning framework to reveal how demographic ageing, population growth, and uneven healthcare infrastructure exacerbate rheumatoid arthritis burdens differently across regions. The framework also enabled investigators to analyze the prevalence, incidence, mortality, disability-adjusted life years (DALYs), years of life lost (YLLs), and years lived with disability (YLDs) of rheumatoid arthritis, along with their socioeconomic inequalities and the achievable disease control implied by socioeconomic development level (frontiers), and to forecast long-term burdens to 2040 with scenario simulations.
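To give a sense of what a transformer-based forecaster for this kind of annual burden series involves, here is a generic sketch under assumed dimensions; it is not the study's model, features, or training procedure:

```python
import torch
import torch.nn as nn

# Minimal sketch of a transformer-based forecaster for annual burden rates.
class BurdenForecaster(nn.Module):
    def __init__(self, d_model=32, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)                 # one feature: yearly DALY rate
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)                  # predict the next year's rate

    def forward(self, series):                             # series: (batch, years, 1)
        h = self.encoder(self.embed(series))
        return self.head(h[:, -1])                         # forecast from the last time step

model = BurdenForecaster()
history = torch.rand(8, 42, 1)                             # 8 locations, 1980-2021 history
print(model(history).shape)                                # -> torch.Size([8, 1])
```

In practice, forecasts to 2040 would be produced by rolling such one-step predictions forward under alternative policy scenarios.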

The study observed that globally there were significant absolute and relative sociodemographic index (SDI)-related inequalities, with a disproportionately higher burden shouldered by countries with high and high-middle SDI. Among the key findings of the study are:

  • Global rheumatoid arthritis burden increased: From 1980 to 2021, the global rheumatoid arthritis burden rose steadily, spreading to younger age groups and a wider range of geographic locations worldwide; hotspots such as the UK’s West Berkshire (incidence rate: 35.1/100,000) and Mexico’s Zacatecas (DALY rate: 112.6/100,000) bore the highest burdens in 2021 among 652 subnational regions.
  • Widening inequalities: DALY-related inequality surged 62.55% from 1990, with Finland, Ireland, and New Zealand as the most unequal countries in 2021.
  • Failure to meet frontiers: As SDI increased over time, frontier deviations worsened, indicating that the burden of rheumatoid arthritis has been severely neglected.
  • Noneconomic disparities persisted: Economic factors alone are not the sole determinants of rheumatoid arthritis disease burden. High SDI regions such as Japan and the UK exhibited contrasting patterns in disease burden. Japan’s declining DALY rates despite high SDI may reflect nationwide early diagnosis programs, widespread use of biologic therapies, and a diet rich in anti-inflammatory components.
  • Forecasted increases and need for positive policy: By 2040, low-middle SDI regions may see increasing DALYs due to ageing/population growth, while DALYs in high SDI areas may decrease. Controlling smoking may reduce rheumatoid arthritis deaths by 16.8% and DALYs by 20.6% in high-smoking regions (e.g., China), offering significant benefits for medium/high SDI areas.

Co-lead author Baozhen Huang, PhD, Department of Biomedical Sciences, City University of Hong Kong, says, “Japan’s sustained decline in DALYs despite a high SDI proves that socioeconomic status alone doesn’t dictate outcomes; proactive healthcare policies such as early diagnosis programs can reverse trends.”

Many regions around the world still lack the necessary evidence base to inform precision health policy and targeted interventions. These data are intended to support more informed clinical decisions and health policy planning, especially in settings where reliable subnational evidence has historically been scarce.

Co-lead author Wenyi Jin, MD, PhD, Department of Orthopedics, Renmin Hospital of Wuhan University; and Department of Biomedical Sciences, City University of Hong Kong, concludes, “The adoption of this advanced framework quantifies the expected impact of feasible intervention scenarios in public health, supplying policymakers at global, national, and local levels with more reliable, dynamic evidence, redefining the very paradigm of health surveillance.”

From code to commands: prompt training technique helps users speak AI's language



Research from Carnegie Mellon's School of Computer Science suggests prompt engineering could be as important as coding



Carnegie Mellon University




Today's generative artificial intelligence models can create everything from images to computer applications, but the quality of their output depends largely on the prompt a human user provides.

Carnegie Mellon University researchers have proposed a new approach for teaching everyday users how to create these prompts and improving their interactions with generative artificial intelligence models.

The method, called Requirement-Oriented Prompt Engineering (ROPE), shifts the focus of prompt writing from clever tricks and templates to clearly stating what the AI should do. As large language models (LLMs) improve, the importance of coding skills may wane while expertise in prompt engineering could rise.

"You need to be able to tell the model exactly what you want. You can't expect it to guess all your customized needs," said Christina Ma, a Ph.D. student in the Human-Computer Interaction Institute (HCII). "We need to train humans in prompt engineering skills. Most people still struggle to tell the AI exactly what they want. ROPE helps them do that."

Prompt engineering refers to the precise instructions — the prompts — a user gives a generative AI model to produce a desired output. The better a user is at prompt engineering, the more likely an AI model will produce what the user intended.

In "What Should We Engineer in Prompts? Training Humans in Requirement-Driven LLM Use," recently accepted in the Association for Computing Machinery's Transactions on Computer-Human Interaction, the researchers describe their ROPE paradigm and a training module they created to teach and assess the method. ROPE is a human-LLM partnering strategy where humans maintain agency and control of the goals by specifying requirements for LLM prompts. The paradigm focuses on the importance of crafting accurate and complete requirements to achieve better results, especially for complex, customized tasks.

To test ROPE, the researchers asked 30 people to write prompts for an AI model to complete two separate tasks as a pretest: create a tic-tac-toe game and design a tool to help people develop content outlines. Half of the participants then received training through ROPE and the rest watched a YouTube tutorial on prompt engineering. The groups then wrote prompts for a different game and a different chatbot as a posttest.

When researchers compared the results of the exercises, they found that participants who received the ROPE training outperformed the people who watched the YouTube tutorial. Scores from pretest to posttest rose 20% for people who received the ROPE training and only 1% for those who did not.

"We not only proposed a new framework for teaching prompt engineering but also created a training tool to assess how well participants do and how well the paradigm works," said Ken Koedinger, a University Professor in the HCII. "It's not just our opinion that ROPE works. The training module backs it up."

Generative AI models have already altered the content of introductory programming and software engineering courses as traditional programming evolves into natural language programming. Instead of writing software, an engineer can write a prompt directing AI to develop the software. This paradigm shift could create new opportunities for students, allowing them to work on more complex development tasks earlier in their studies and advancing the field.

The researchers did not design ROPE solely for software engineers. As humans continue to integrate AI into daily life, clearly communicating with machines will become an important aspect of digital literacy. Armed with the knowledge of how to write successful prompts and an AI model up to the task, people without coding or software engineering backgrounds can create applications that benefit them.

"We want to empower more end users from the general public to use LLMs to build chatbots and apps," Ma said. "If you have an idea, and you understand how to communicate the requirements, you can write a prompt that will create that idea."

The researchers have open-sourced their training tools and materials, aiming to make prompt engineering more accessible to nonexperts.


