AI-driven personalized pricing may not help consumers
The autonomous nature and adaptability of artificial intelligence (AI)-powered pricing algorithms make them attractive for optimizing pricing strategies in dynamic market environments. However, certain pricing algorithms may learn to engage in tacit collusion in competitive scenarios, resulting in supra-competitive prices and potentially harmful consequences for consumer welfare. This has prompted policymakers and scholars to emphasize the importance of designing rules to promote competitive behavior in marketplaces.
In a new study in Marketing Science, researchers at Carnegie Mellon University investigated the role of product ranking systems on e-commerce platforms in influencing the ability of certain pricing algorithms to charge higher prices. The study’s findings suggest that even absent price discrimination, personalized ranking systems may not benefit consumers.
“We examined the effects of personalized and unpersonalized ranking systems on algorithmic pricing outcomes and consumer welfare,” explains Param Vir Singh, Carnegie Bosch Professor of Business Technologies and Marketing and Associate Dean for Research at the Tepper School of Business, who coauthored the study.
Because of the number of options available, online consumers face a difficult, time-consuming search process. This has led to the emergence of online search intermediaries (e.g., Amazon, Expedia, Yelp), which use algorithms to rank and provide consumers with ordered lists of third-party sellers’ products in response to their queries. These intermediaries reduce search costs and boost consumer welfare by helping consumers find suitable products more efficiently.
In this study, researchers examined two extreme scenarios of product ranking systems that differed in how they incorporated consumer information for generating product rankings. The first system, personalized ranking, used detailed consumer information to rank products based on predicted utility for each individual. The second, called unpersonalized ranking, relied solely on aggregate information across the entire population, resulting in an inability to customize the rankings for individual consumers.
Researchers used a consumer demand model characterized by search behavior, in which consumers searched sequentially to learn about the utilities they could obtain from various products. In this model, the ranking algorithm suggested by the intermediary affects the order in which consumers evaluate products, with each evaluation incurring a search cost. Consumers engage in optimal search and purchase behavior to maximize their utility. Hence, an intermediary's specific ranking system can steer demand in different ways, which have important implications for pricing outcomes.
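To make the search mechanism concrete, here is a minimal, heavily simplified sketch of how a ranking can steer demand in a sequential-search model. It is not the paper's model: the stopping rule below is a crude heuristic rather than an optimal search policy, and all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_search(ranking, mean_utils, search_cost, n_consumers=10_000):
    """Consumers inspect products in the given ranked order; each
    inspection reveals a realized utility (mean plus Gumbel noise) and
    costs `search_cost`. A consumer stops (heuristically) once the best
    utility found is within `search_cost` of the highest mean utility,
    then buys the best inspected product or the outside option (utility 0)."""
    shares = np.zeros(len(mean_utils))
    for _ in range(n_consumers):
        best_u, best_j = 0.0, None               # start with the outside option
        for j in ranking:
            u = mean_utils[j] + rng.gumbel()     # realized utility of product j
            if u > best_u:
                best_u, best_j = u, j
            if best_u > max(mean_utils) - search_cost:
                break                            # crude stopping heuristic
        if best_j is not None:
            shares[best_j] += 1
    return shares / n_consumers                  # demand shares steered by the ranking
```

A personalized ranking would sort products by each individual consumer's predicted utilities, while an unpersonalized one would apply the same population-level order, e.g. `np.argsort(mean_utils)[::-1]`, to everyone.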
“We compared these two systems to provide a clear understanding of the pricing implications of personalization in ranking technologies, specifically reinforcement learning (RL) algorithms, for AI-powered pricing algorithms,” says Liying Qiu, a Ph.D. student at Carnegie Mellon’s Tepper School, who led the study.
“Studying RL pricing algorithms in the context of realistic consumer behavior models is challenging due to the complexity of the dynamics they create, but by setting up controlled simulated environments, we were able to examine how these algorithms evolve and interact over time experimentally,” notes Yan Huang, Associate Professor of Business Technologies at Carnegie Mellon’s Tepper School, who coauthored the study.
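As a flavor of what such a controlled simulation looks like, here is a toy, stateless Q-learning duopoly in which two pricing agents repeatedly choose prices from a grid under logit demand. This is a generic textbook setup, not the study's algorithm: in the paper, demand is mediated by the ranking system and consumer search rather than by a plain logit.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0.5, 2.0, 7)      # discretized price actions
Q = np.zeros((2, len(grid)))         # one value estimate per firm and price (stateless)
eps, alpha, cost = 0.1, 0.1, 0.5     # exploration rate, learning rate, marginal cost

def logit_demand(p0, p1):
    """Market shares of two products under logit demand with an outside option."""
    v = np.exp([1.0 - p0, 1.0 - p1, 0.0])
    return v[0] / v.sum(), v[1] / v.sum()

for t in range(200_000):
    # epsilon-greedy price choice for each firm
    acts = [rng.integers(len(grid)) if rng.random() < eps else int(Q[i].argmax())
            for i in range(2)]
    s0, s1 = logit_demand(grid[acts[0]], grid[acts[1]])
    profits = ((grid[acts[0]] - cost) * s0, (grid[acts[1]] - cost) * s1)
    for i in range(2):               # incremental update toward observed profit
        Q[i, acts[i]] += alpha * (profits[i] - Q[i, acts[i]])

print("learned prices:", grid[Q[0].argmax()], grid[Q[1].argmax()])
```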
Personalized ranking systems, which rank products in decreasing order of consumers’ utilities, may encourage pricing algorithms to charge higher prices, especially when consumers search for products sequentially on a third-party platform. This is because personalized ranking significantly reduces the ranking-mediated price elasticity of demand, and with it the incentive to lower prices.
Conversely, unpersonalized ranking systems lead to significantly lower prices and greater consumer welfare. These findings suggest that even without price discrimination, personalization may not necessarily benefit consumers since pricing algorithms can undermine consumer welfare through higher prices. Thus, the study highlights the crucial role of ranking systems in shaping algorithmic pricing behaviors and consumer welfare.
The study’s results held across various values of the RL learning parameters, different values of the outside good, different types of reinforcement learning algorithms, and markets with multiple firms.
“We conclude that the effectiveness of personalized ranking in improving the match between consumers and products needs to be carefully evaluated against its impact on consumer welfare when pricing is delegated to algorithms,” suggests Kannan Srinivasan, Professor of Management, Marketing, and Business Technology at Carnegie Mellon’s Tepper School, who coauthored the study.
The findings offer insights for policymakers and platform operators responsible for regulating the use of pricing algorithms and designing ranking systems. It is essential to consider the design of ranking systems when regulating AI pricing algorithms to promote competition and consumer welfare.
The study also has implications for consumer data sharing. Increased consumer data sharing may not always result in improved outcomes, even in the absence of price discrimination, since personalized ranking, empowered by access to more detailed consumer data, makes it easier for pricing algorithms to charge higher prices. The negative effect of the higher product prices can outweigh the positive impact of improved product fit, leading to a decline in consumer welfare.
Journal
Marketing Science
Article Title
Personalization, Consumer Search, and Algorithmic Pricing
AI monitors wildlife behavior in the Swiss Alps
Scientists at EPFL have created MammAlps, a multi-view, multi-modal video dataset that captures how wild mammals behave in the Swiss Alps. This new resource could be a game-changer for wildlife monitoring and conservation efforts.
Have you ever wondered how wild animals behave when no one’s watching? Understanding these behaviors is vital for protecting ecosystems—especially as climate change and human expansion alter natural habitats. But collecting this kind of information without interfering has always been tricky.
Traditionally, researchers relied on direct observation or sensors strapped to animals—methods that are either disruptive or limited in scope. Camera traps offer a less invasive alternative, but they generate vast amounts of footage that's hard to analyze.
AI could help, but there's a catch: it needs annotated datasets to learn from. Most current video datasets are either scraped from the internet, missing the authenticity of real wild settings, or are small-scale field recordings lacking detail. And few include the kind of rich context—like multiple camera angles or audio—that’s needed to truly understand complex animal behavior.
Introducing MammAlps
To address this challenge, scientists at EPFL have collected and curated MammAlps, the first richly annotated, multi-view, multimodal wildlife behavior dataset in collaboration with the Swiss National Park. MammAlps is designed to train AI models for species and behavior recognition tasks, and ultimately to help researchers understand animal behavior better. This work could make conservation efforts faster, cheaper, and smarter.
MammAlps was developed by Valentin Gabeff, a PhD student at EPFL under the supervision of Professors Alexander Mathis and Devis Tuia, together with their respective research teams.
How MammAlps was developed
The researchers set up nine camera traps that recorded more than 43 hours of raw footage over the course of several weeks. The team then meticulously processed it, using AI tools to detect and track individual animals, resulting in 8.5 hours of material showing wildlife interaction.
They labeled behaviors using a hierarchical approach, categorizing each moment at two levels: high-level activities like foraging or playing, and finer actions like walking, grooming, or sniffing. This structure allows AI models to interpret behaviors more accurately by linking detailed movements to broader behavioral patterns.
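As an illustration of what such a two-level scheme looks like in practice, the sketch below maps fine-grained action labels to high-level activities. The specific labels here are ours for illustration, not MammAlps' actual taxonomy.

```python
# Hypothetical two-level hierarchy: fine actions roll up to activities.
ACTION_TO_ACTIVITY = {
    "walking":  "locomotion",
    "running":  "locomotion",
    "sniffing": "foraging",
    "grazing":  "foraging",
    "grooming": "maintenance",
}

def to_activities(action_labels):
    """Map per-frame action labels to their high-level activity labels."""
    return [ACTION_TO_ACTIVITY.get(a, "other") for a in action_labels]

print(to_activities(["walking", "sniffing", "grooming"]))
# ['locomotion', 'foraging', 'maintenance']
```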
To provide AI models with richer context, the team supplemented video with audio recordings and captured “reference scene maps” that documented environmental factors like water sources, bushes, and rocks. This additional data enables better interpretation of habitat-specific behaviors. They also cross-referenced weather conditions and counts of individuals per event to create more complete scene descriptions.
“By incorporating other modalities alongside video, we've shown that AI models can better identify animal behavior,” explains Alexander Mathis. “This multi-modal approach gives us a more complete picture of wildlife behavior.”
A new standard for wildlife monitoring
MammAlps sets a new standard for wildlife monitoring: a full sensory snapshot of animal behavior across multiple angles, sounds, and contexts. It also introduces a “long-term event understanding” benchmark, meaning scientists can now study not just isolated behaviors from short clips, but broader ecological scenes over time—like a wolf stalking a deer across several camera views.
Research is still ongoing. The team is currently processing data collected in 2024 and will carry out more fieldwork in 2025. These additional surveys are necessary to expand the set of recordings for rare species such as alpine hares and lynx, and will also help develop methods for the temporal analysis of wildlife behavior across multiple seasons.
Building more datasets like MammAlps could radically scale up current wildlife monitoring efforts by enabling AI models to identify behaviors of interest from hundreds of hours of video. This would provide wildlife conservationists with timely, actionable insights. Over time, this could make it easier to track how climate change, human encroachment, or disease outbreaks impact wildlife behavior, and help protect vulnerable species.
For more information about MammAlps and access to the dataset, visit https://eceo-epfl.github.io/MammAlps/.
MammAlps was selected as a Highlight at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), a top computer vision conference (11-15 June 2025).
Article Title
MammAlps: A multi-view video behavior monitoring dataset of wild mammals in the Swiss Alps
AI TechX grant to advance cattle disease detection
UTIA and Enterprise Sensor Systems team to identify infectious diseases using AI and hyperspectral imaging
University of Tennessee Institute of Agriculture
image:
The EnSenSys Sensor Prototype 2 in operation at a commercial veterinary location, gathering virus signatures from cattle breath. Ted Moore, Chief Technology Manager, left, works with Jerry Dunlap, Senior Vice President for Business Management, to gather the data.
Credit: Photo courtesy of Enterprise Sensor Systems LLC.
The University of Tennessee Institute of Agriculture (UTIA) AgResearch, in partnership with the Enterprise Sensor Systems LLC (EnSenSys) of Alamo, Tennessee, has been awarded a grant through the AI TechX Seed Fund to collaborate on “Rapid Identification of Cattle with Infectious Diseases Using AI and Hyperspectral Imaging.” The award, announced on June 11, 2025, will run from July 1, 2025, through June 30, 2026.
AI TechX is an initiative of AI Tennessee, which aims to accelerate the development and real-world application of artificial intelligence through academic-industry collaboration. The AI TechX Seed Fund specifically supports efforts to build high-impact, interdisciplinary research teams that tackle industrial challenges through innovative AI-driven solutions.
The funded project will expand EnSenSys’s patented ESS Protect technology—originally developed to detect viral signatures in human breath—to detect viral signatures in animals. ESS Protect – Animal will offer rapid, non-invasive, contactless screening for bovine respiratory disease (BRD) using hyperspectral imaging and advanced machine learning.
The award also marks EnSenSys’s entry into the AI TechX consortium, expanding access to research and collaborative opportunities. “This project strengthens our commitment to delivering breakthrough biosensing tools that protect animal health and food security,” said LtGen John ‘Glad’ Castellaw, USMC (Ret.), CEO of EnSenSys. “It also underscores the importance of university-industry collaboration in transforming AI research into scalable, deployable solutions.”
The research team includes UT faculty members and EnSenSys leadership and subject matter experts. Leveraging hyperspectral data from a 2024 field study at the UTIA Middle Tennessee AgResearch and Education Center in Spring Hill, Tennessee, and from other collaborations with UT and private industry, they will train AI models capable of detecting disease-specific spectral signatures in bovine breath.
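To give a sense of the kind of pipeline involved, here is a minimal baseline sketch for classifying labeled breath spectra; the project's actual models are presumably more sophisticated, and the file names and feature layout here are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Hypothetical inputs: one row per scan, columns = reflectance per spectral band;
# labels: 1 = BRD-positive, 0 = healthy (from veterinary diagnosis).
X, y = np.load("spectra.npy"), np.load("labels.npy")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```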
Members of the Enterprise Sensor Systems research team record data from cattle breath image captures at the UT AgResearch and Education Center at Spring Hill, Tennessee. Shown left to right are Chance Weldon, Chief Sensor Operator; Dereck Seaton, Senior Vice President, Logistics; and Wanda Castellaw, Manager, Administrative Operations Support. Photo courtesy of Enterprise Sensor Systems LLC.
Key outcomes include a preliminary design for a field-deployable hyperspectral sensing unit, machine learning models for classifying cattle health status, and a strategy to scale the technology to other livestock and farming environments. The long-term partnership aims to support the establishment of an AgriAI Center of Excellence at UTIA, an innovation hub for integrating AI and data-driven tools into modern farming. The center would equip producers with predictive analytics, automation, and precision technologies to improve productivity, sustainability, and economic resilience across Tennessee agriculture.
AI TechX is the flagship initiative of AI Tennessee, designed to bridge foundational AI research and industrial application through strategic partnerships, interdisciplinary innovation, and workforce development.
The University of Tennessee Institute of Agriculture comprises the Herbert College of Agriculture, the UT College of Veterinary Medicine, UT AgResearch and UT Extension. Through its land-grant mission of teaching, research and outreach, the Institute touches lives and provides Real. Life. Solutions. to Tennesseans and beyond. utia.tennessee.edu.
A group of UTIA researchers first met with researchers with Enterprise Sensor Systems in May 2024. Shown from left to right are UTIA Vice Chancellor for Advancement Charley Deal; EnSenSys representatives James Buck, Steve Wakham, and John Castellaw; UT AgResearch Dean Hongwei Xin; and UTIA researchers Hao Gan, Yang Zhao, Ashley Morgan and Debra Miller.
Credit
Photo courtesy of Enterprise Sensor Systems LLC.
Machine learning method helps bring diagnostic testing out of the lab
image:
Graduate student in electrical and computer engineering Han Lee (left) and Professor Brian Cunningham (right)
Credit: Julia Pollack
What if people could detect cancer and other diseases with the same speed and ease of a pregnancy test or blood glucose meter? Researchers at the Carl R. Woese Institute for Genomic Biology are a step closer to realizing this goal by integrating machine learning-based analysis into point-of-care biosensing technologies.
The new method, dubbed LOCA-PRAM, was reported in the journal Biosensors and Bioelectronics and improves the accessibility of biomarker detection by eliminating the need for technical experts to perform the image analysis.
Traditional medical diagnostic techniques require doctors to send patients’ blood or tissue samples to clinical laboratories where expert scientists carry out the testing procedures and data analysis.
“Current technologies require patients to visit hospitals to get diagnostics, which takes time. A lot of people also have barriers where more appointments may not be financially or spatially feasible,” said Han Lee, first author of the study and a graduate student in the Nanosensors research group. “I think that we can make a difference by developing more point-of-care technologies that are available for people.”
Point-of-care diagnostics are performed and yield results at the site of patient care, whether at home, the doctor’s office, or anywhere in between. This allows for lower cost, easy-to-use, and rapid tests that can help inform next steps. Some examples already adopted in everyday life include urine pregnancy tests, COVID-19 antigen testing kits, and blood glucose meters which allow people with diabetes to respond to dips and spikes in their blood glucose levels throughout the day.
In the point-of-care field, researchers are investigating new ways to integrate these types of technologies into patient care settings, such as appointments with specialists like oncologists or oral surgeons. This would help to reduce the time and financial burden on patients, while improving real-time decision making for physicians.
“Physicians say they would like something similar to when you go in with a bacterial infection. They do a test on you right there and then send you home from your appointment with the right antibiotics that will treat the particular bacteria that you have,” said Brian Cunningham (CGD leader), a professor of electrical and computer engineering. “So why not do a similar thing for choosing the right anti-cancer drug, or for determining whether the drug you've been taking for a couple of weeks is starting to work?”
Previously, the group reported a new biosensing method called Photonic Resonator Absorption Microscopy, or PRAM, to detect molecular biomarkers—molecules in the body whose presence and levels indicate healthy or disease states. PRAM enables the detection of single biomarker molecules including nucleic acids, antigens, and antibodies; common biosensing techniques instead detect the cumulative signal of hundreds to thousands of molecules.
Cunningham said, “Basically, what we're doing is shining a red LED light at the bottom of a sensor. Then on the top of the sensor, molecules are landing and getting detected whenever they have a tiny particle made out of gold—which we call gold nanoparticles or AuNPs—attached to it.”
The images generated using PRAM depict a red background with little black spots sprinkled across it. But while these images seem relatively simple, obtaining an accurate count requires a trained eye that can decipher which spots truly correspond to the AuNP-tagged biomarker molecules.
“There are many kinds of artifacts such as dust particles or aggregates of the nanoparticles. If you don’t have a lot of experience, it’s hard to distinguish them,” Lee said. “The conventional counting algorithm that we’ve been using requires adjusting a lot of parameters to get rid of those artifacts.”
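For readers unfamiliar with such classical pipelines, the sketch below shows what a conventional, parameter-heavy spot counter might look like, using off-the-shelf Laplacian-of-Gaussian blob detection. It is an illustration of the kind of approach being replaced, not the group's actual code.

```python
from skimage.feature import blob_log

def count_spots(image, min_sigma=1, max_sigma=4, threshold=0.05):
    """Classical spot counting via Laplacian-of-Gaussian blob detection.

    PRAM images show dark spots on a bright background, so we invert and
    normalize before detecting. The sigma range and threshold are exactly
    the kind of hand-tuned parameters that make it hard to reject
    artifacts such as dust or nanoparticle aggregates."""
    inverted = image.max() - image.astype(float)
    inverted /= inverted.max() if inverted.max() > 0 else 1.0
    blobs = blob_log(inverted, min_sigma=min_sigma,
                     max_sigma=max_sigma, threshold=threshold)
    return len(blobs)
```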
In order to move this process out of the laboratory and make it better suited for point-of-care environments, Lee proposed integrating machine learning into the image analysis process.
“Han really on his own developed an interest in machine learning after taking a class here at the university just to learn about it,” Cunningham said. “He came to me one day and said that he thought he could make a machine learning algorithm count our black spots more accurately.”
Compared with other biosensing techniques, PRAM lends itself well to incorporating deep learning algorithms because it generates microscope images rather than just detecting optical signals. But because these algorithms are only as good as the data that trained them, Lee decided to image the same samples using both PRAM and scanning electron microscopy.
The AuNPs, which are about 1,000 times smaller than the width of a human hair and show up only as small black spots in the PRAM images, can be visualized much more clearly with the electron microscope. In a time-intensive process, Lee cross-referenced every spot in the PRAM images with the electron microscope images to obtain highly accurate data for the machine learning training set.
“Finding the right spot to compare to was actually very challenging because it's like finding a needle in a desert. One way that I devised was to create a reference point, like a lighthouse in a sea. Then from there we can find the exact same spot for registrations,” Lee said.
The resulting deep learning-based method, called Localization with Context Awareness, integrated with PRAM, enables real-time, high-precision detection of molecular biomarkers without the eyes and experience of a technical expert. When tested, LOCA-PRAM surpassed conventional techniques in accuracy, detecting lower levels of the biomarkers and minimizing rates of both false positives and false negatives.
“The whole journey of my PhD was started because I wanted to make changes in the point-of-care field,” Lee said. “I just want to do everything in my power to develop more advanced technologies that can be impactful in the future.”
The publication, “Physically grounded deep learning-enabled gold nanoparticle localization and quantification in photonic resonator absorption microscopy for digital resolution molecular diagnostics” can be found at https://doi.org/10.1016/j.bios.2025.117455 and was supported by the National Institutes of Health, USDA AFRI Nanotechnology grant, and National Science Foundation.
Journal
Biosensors and Bioelectronics
Article Title
Physically grounded deep learning-enabled gold nanoparticle localization and quantification in photonic resonator absorption microscopy for digital resolution molecular diagnostics
ELSI’s challenges in the era of advanced AI
"ELSI University Summit" in Japan brings together multi-stakeholders
Chuo University
image:
ELSI University Summit's key visual: The cat motif represents generative AI, and the wave represents the singularity.
Credit: Illustration by Kashiwai
The Chuo University ELSI Center and The University of Osaka's Research Center on Ethical, Legal and Social Issues (The University of Osaka ELSI Center) jointly hosted the "ELSI University Summit", a two-day event held on March 15th and 16th, 2025 (Saturday-Sunday) at Chuo University's Korakuen Campus (Bunkyo-ku, Tokyo, Japan).
ELSI (Ethical, Legal and Social Issues) and RRI (Responsible Research and Innovation) approaches are developing rapidly worldwide and are being examined across various fields both domestically and internationally. The ELSI University Summit focused on ELSI and RRI initiatives in academia and industry, concentrating on research areas related to advanced AI and its social challenges. The summit included reports from the multiple stakeholders responsible for development, utilization, and regulation — including the business community, government agencies, educational institutions, science and engineering researchers, and humanities researchers. Through Q&A sessions and panel discussions, participants engaged in intensive discussions about their respective roles and the importance of collaboration. The event attracted a total of 607 participants on-site and online.
The summit opened with keynote speeches by Professor SUDO Osamu, Director of Chuo University ELSI Center (at the time of the event), and Professor KISHIMOTO Atsuo, Director of The University of Osaka ELSI Center, which sparked the discussions.
Professor SUDO of Chuo University predicted that the current "democratization and popularization of AI," brought about by the ability to use advanced AI through natural language (prompts), will accelerate further, leading to the development of highly versatile multimodal AI and robots and their proliferation through society. He stated that such developments would lead to major transformations in all important complex systems, including disaster prevention, administration, healthcare and welfare, finance, education, transportation, and national defense. He also pointed to the emergence of AI that surpasses human abilities: systems that perform advanced reasoning such as "reflection" without being explicitly programmed to do so, exemplified by China's DeepSeek-R1, which made a worldwide impact in January 2025, and systems capable of intuitive responses, such as GPT-4.5. Amid expectations that AI capabilities will expand dramatically as such abilities are combined, Professor SUDO emphasized, referencing various government initiatives, that "what human society and institutions using AI should be like, and the ELSI perspective, are becoming increasingly important." Finally, he expressed his desire to strengthen collaboration among universities nationwide, centered on The University of Osaka ELSI Center and Chuo University ELSI Center, and to create flexible cooperation and collaborative activities with universities worldwide. He also pointed out that universities play an important role in setting evaluation indicators for Responsible AI, which is valued globally, and in facilitating multi-stakeholder discussions.
Professor KISHIMOTO began by describing the establishment of The University of Osaka ELSI Center as a research center that co-creates "social technology," the knowledge that bridges the gap between technology and society. He explained that the absence of social technology leads to stumbling blocks during the social implementation of new technologies, or to situations where implementation cannot progress because potential issues arising from the use of developed technologies remain unresolved. Professor KISHIMOTO reported numerous achievements as examples of initiatives to co-create social technology, including industry-academia collaboration in the humanities and social sciences based on agile research styles, and the establishment of The University of Osaka's first humanities and social sciences collaborative research institute. In addressing past ELSI challenges, practitioners have sought solutions by exploring whether the causes of gaps between what is technically possible and what is socially acceptable lie in the ethical, legal, or social issues that constitute ELSI. However, legal regulation, which has been heavily emphasized, not only lags behind the pace of technological innovation but also tends to create such gaps in unstable societies. He argued that in the uncertain times ahead, the relative importance of ethics will grow. As examples of this trend, Professor KISHIMOTO noted that research ethics review is already being introduced in corporate AI development, in university research beyond medicine, and in education.
Subsequently, current challenges, future prospects, and advanced case studies were shared through seven special lectures and 14 general presentations. From academia, experts in law, sociology, policy studies, economics, philosophy, cultural anthropology, and other fields reported on ELSI initiatives at 12 universities from multiple perspectives. From the government standpoint, IIDA Yoichi, Special Negotiator for International Information and Communications Strategy, International Strategy Bureau, Ministry of Internal Affairs and Communications, shared the latest international trends in AI governance based on the Hiroshima AI Process led by Japan. HIRAMOTO Kenji, Deputy Director of the Japan AI Safety Institute (AISI), introduced his organization's efforts to support public-private initiatives through collaboration among 10 ministries and agencies and 5 government-related organizations centered on the Cabinet Office, clarifying Japan's position on AI utilization. TORISAWA Kentaro, Associate Director General of the Universal Communication Research Institute at the National Institute of Information and Communications Technology (NICT), discussed the risks of generative AI from a developer's perspective and advocated using generative AI itself to prevent its misuse. From the industrial sector, case reports were presented by NTT Corporation, IBM Japan, Ltd., and Microsoft Japan Co., Ltd. (MSJ), which are engaged in AI development and utilization. Additionally, TSUNODA Katsu, President and COO of The Asahi Shimbun Company, argued that an important role of journalism in the AI era is to connect cutting-edge discussions among industry, government, and academia with society.
The panel discussion titled "The Future of AI and Human Imagination," held at the end of the second day, featured panelists Specially Appointed Associate Professor KUDO Fumiko of The University of Osaka (Information Law and Policy), Associate Professor SAITO Kunifumi of Keio University (Civil Law), President TOKUDA Hideyuki of NICT (Computer Science), Senior Vice President KINOSHITA Shingo, Head of R&D Planning Department at NTT Corporation, and Professor KIMURA Tadamasa of Rikkyo University (Cultural Anthropology). Moderated by Vice Director ISHII Kaori, Chuo University ELSI Center, discussions were conducted from a wide range of fields and standpoints. Panelists made comments expressing their expectation that ELSI would serve not only as brakes and guardrails for technology, but also play directional roles like a steering wheel and headlights. There were also comments envisioning a future where co-creation between development teams and ELSI teams in technology development, not limited to AI, becomes commonplace. These discussions explored future scenarios where ELSI/RRI will be needed more extensively and deeply.
Participants provided feedback such as "The event's multi-layered structure was impressive," "I took away many learnings," and "I could hear various expert opinions, both overlapping and differing, which makes me even more excited about future discussions," indicating that the summit achieved its goal of connecting multiple stakeholders. With many voices hoping for the continuation of such events, deepening these multi-stakeholder discussions and further promoting ELSI/RRI in Japan can be expected to help accelerate research, development, and innovation in advanced science and technology, to inform sophisticated responses to social challenges, and to support the envisioning of better future societies.
This event aligns with the goals of Rome Call for AI Ethics (https://www.romecall.org/the-call/), of which Chuo University is the only signatory institution among Japanese universities. Rome Call advocates "to establish an ethical approach to AI design, development, and deployment, make commitments related to ethics, rights, and education, and create the future with a shared sense of responsibility among international organizations, governments, researchers, academia and private companies." The two days were also dedicated to promoting the six principles of AI ethics: Transparency, Inclusion, Accountability, Impartiality, Reliability, Security and Privacy.
Panel discussion: Moderated by Vice Director ISHII Kaori, Chuo University ELSI Center. Associate Professor KUDO Fumiko of The University of Osaka (Information Law and Policy), Associate Professor SAITO Kunifumi of Keio University (Civil Law), President TOKUDA Hideyuki of NICT (Computer Science), Senior Vice President KINOSHITA Shingo, Head of R&D Planning Department at NTT Corporation, and Professor KIMURA Tadamasa of Rikkyo University (Cultural Anthropology).
Keynote speech by Professor SUDO Osamu: Professor SUDO of Chuo University predicted that the current "democratization and popularization of AI" will accelerate further, leading to highly versatile multimodal AI and robots and their proliferation through society.
Credit
Photo: Chuo University
Credit
Photo: Chuo University
AI-powered study shows surge in global rheumatoid arthritis since 1980, revealing local hotspots
A novel analysis published in the Annals of the Rheumatic Diseases details significant socioeconomic disparities and worsening inequalities in disease burden
Elsevier
image:
An AI-powered study in the Annals of the Rheumatic Diseases shows a surge in global rheumatoid arthritis since 1980, revealing local hotspots, socioeconomic disparities, and worsening inequalities in disease burden. This image depicts the geographic distribution of age-standardized incidence rates in 2021.
Credit: Annals of the Rheumatic Diseases / Jin et al.
Philadelphia, June 16, 2025 – The most comprehensive analysis of rheumatoid arthritis data to date reveals that demographic changes and uneven health infrastructure have exacerbated the rheumatoid arthritis burden since 1980 and shows global disparities on a granular level. The AI-powered study in the Annals of the Rheumatic Diseases, published by Elsevier, utilized deep learning techniques and policy simulations to uncover actionable insights for localized interventions that national-level studies have previously missed. Its design yielded highly precise, dynamic projections of further disease burden to 2040.
Principal investigator Queran Lin, MPH, WHO Collaborating Centre for Public Health Education and Training, Faculty of Medicine, Imperial College London; and Clinical Research Design Division, Clinical Research Centre, Sun Yat-Sen Memorial Hospital, Guangzhou, explains, “While previous Global Burden of Disease (GBD) studies have provided important insights, they have largely focused on high-level descriptions and visualizations at global and national scales, failing to capture local disparities or the dynamic interactions between socioeconomic development and disease trends. With access to sufficient computational resources and advanced analytical capabilities, our Global-to-Local Burden of Disease Collaboration aims to unlock the full potential of the GBD dataset (pioneered by the Institute for Health Metrics and Evaluation, University of Washington). By employing cutting-edge approaches such as transformer-based deep learning models, we were able to generate the most granular disease burden estimates to date, offering a new framework for guiding precision public health across diverse populations.”
Using GBD data, the study integrates the largest spatiotemporal rheumatoid arthritis dataset to date, spanning 953 global-to-local locations from 1980 to 2021, with a novel deep learning framework to reveal how demographic ageing, population growth, and uneven healthcare infrastructure exacerbate rheumatoid arthritis burdens differently across regions. The framework also enabled investigators to analyze the prevalence, incidence, mortality, disability-adjusted life years (DALYs), years of life lost (YLLs), and years lived with disability (YLDs) of rheumatoid arthritis; to quantify their socioeconomic inequalities and the disease control achievable at each socioeconomic development level (frontiers); and to forecast long-term burdens to 2040 with scenario simulations.
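For readers new to these metrics, the standard GBD relationship is that DALYs combine fatal and non-fatal burden; a simplified version of the arithmetic (ignoring age structure and standardization) is sketched below.

```python
def dalys(deaths, life_expectancy_at_death, prevalent_cases, disability_weight):
    """Simplified GBD burden arithmetic: DALY = YLL + YLD.

    YLL = deaths x remaining standard life expectancy at age of death
    YLD = prevalent cases x disability weight of the condition
    """
    yll = deaths * life_expectancy_at_death
    yld = prevalent_cases * disability_weight
    return yll + yld
```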
The study observed that globally there were significant absolute and relative sociodemographic index (SDI)-related inequalities, with a disproportionately higher burden shouldered by countries with high and high-middle SDI. Among the key findings of the study are:
- Global rheumatoid arthritis burden increased: From 1980 to 2021, the global rheumatoid arthritis burden kept rising, showing an increase among younger age groups and a wider range of geographic locations worldwide, with hotspots like the UK’s West Berkshire (incidence rate: 35.1/100,000) and Mexico’s Zacatecas (DALY rate: 112.6/100,000) bearing the highest burdens in 2021 among 652 subnational regions.
- Widening inequalities: DALY-related inequality surged 62.55% from 1990, with Finland, Ireland, and New Zealand the most unequal countries in 2021 (a simplified sketch of one such inequality measure appears after this list).
- Failure to meet frontiers: As SDI increased over time, deviations from the frontier worsened, indicating that the burden of rheumatoid arthritis has been severely neglected.
- Noneconomic disparities persisted: Economic factors alone are not the sole determinants of rheumatoid arthritis disease burden. High SDI regions such as Japan and the UK exhibited contrasting patterns in disease burden. Japan’s declining DALY rates despite high SDI may reflect nationwide early diagnosis programs, widespread use of biologic therapies, and a diet rich in anti-inflammatory components.
- Forecasted increases and need for positive policy: By 2040, low-middle SDI regions may see increasing DALYs due to ageing/population growth, while DALYs in high SDI areas may decrease. Controlling smoking may reduce rheumatoid arthritis deaths by 16.8% and DALYs by 20.6% in high-smoking regions (e.g., China), offering significant benefits for medium/high SDI areas.
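GBD-style analyses typically summarize absolute inequality with the slope index of inequality (alongside a relative concentration index). The sketch below, a simplified illustration rather than the paper's code, regresses burden rates on the population-weighted SDI rank of locations.

```python
import numpy as np

def slope_index_of_inequality(rates, populations, sdi):
    """Fit a weighted line of burden rate against the cumulative
    population rank of locations ordered by SDI; the slope estimates the
    absolute gap in burden between the bottom and top of the SDI scale."""
    order = np.argsort(sdi)
    r = np.asarray(rates, float)[order]
    p = np.asarray(populations, float)[order]
    share = p / p.sum()
    midrank = np.cumsum(share) - share / 2   # relative rank midpoints in [0, 1]
    slope, _intercept = np.polyfit(midrank, r, 1, w=p)
    return slope
```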
Co-lead author Baozhen Huang, PhD, Department of Biomedical Sciences, City University of Hong Kong, says, “Japan’s sustained decline in DALYs despite a high SDI proves that socioeconomic status alone doesn’t dictate outcomes; proactive healthcare policies such as early diagnosis programs can reverse trends.”
Many regions around the world still lack the necessary evidence base to inform precision health policy and targeted interventions. These data are intended to support more informed clinical decisions and health policy planning, especially in settings where reliable subnational evidence has historically been scarce.
Co-lead author Wenyi Jin, MD, PhD, Department of Orthopedics, Renmin Hospital of Wuhan University; and Department of Biomedical Sciences, City University of Hong Kong, concludes, “The adoption of this advanced framework quantifies the expected impact of feasible intervention scenarios in public health, supplying policymakers at global, national, and local levels with more reliable, dynamic evidence, redefining the very paradigm of health surveillance.”
Journal
Annals of the Rheumatic Diseases
Method of Research
Data/statistical analysis
Subject of Research
People
Article Title
Spatiotemporal distributions and regional disparities of rheumatoid arthritis in 953 global to local locations, 1980-2040, with deep learning-empowered forecasts and evaluation of interventional policies' benefits
Article Publication Date
16-Jun-2025
From code to commands: prompt training technique helps users speak AI's language
Research from Carnegie Mellon's School of Computer Science suggests prompt engineering could be as important as coding
Carnegie Mellon University
Today's generative artificial intelligence models can create everything from images to computer applications, but the quality of their output depends largely on the prompt a human user provides.
Carnegie Mellon University researchers have proposed a new approach for teaching everyday users how to create these prompts and improving their interactions with generative artificial intelligence models.
The method, called Requirement-Oriented Prompt Engineering (ROPE), shifts the focus of prompt writing from clever tricks and templates to clearly stating what the AI should do. As large language models (LLMs) improve, the importance of coding skills may wane while expertise in prompt engineering could rise.
"You need to be able to tell the model exactly what you want. You can't expect it to guess all your customized needs," said Christina Ma, a Ph.D. student in the Human-Computer Interaction Institute (HCII). "We need to train humans in prompt engineering skills. Most people still struggle to tell the AI exactly what they want. ROPE helps them do that."
Prompt engineering refers to the precise instructions — the prompts — a user gives a generative AI model to produce a desired output. The better a user is at prompt engineering, the more likely an AI model will produce what the user intended.
In "What Should We Engineer in Prompts? Training Humans in Requirement-Driven LLM Use," recently accepted in the Association for Computing Machinery's Transactions on Computer-Human Interaction, the researchers describe their ROPE paradigm and a training module they created to teach and assess the method. ROPE is a human-LLM partnering strategy where humans maintain agency and control of the goals by specifying requirements for LLM prompts. The paradigm focuses on the importance of crafting accurate and complete requirements to achieve better results, especially for complex, customized tasks.
To test ROPE, the researchers asked 30 people to write prompts for an AI model to complete two separate tasks as a pretest: create a tic-tac-toe game and design a tool to help people develop content outlines. Half of the participants then received training through ROPE and the rest watched a YouTube tutorial on prompt engineering. The groups then wrote prompts for a different game and a different chatbot as a posttest.
When researchers compared the results of the exercises, they found that participants who received the ROPE training outperformed the people who watched the YouTube tutorial. Scores from pretest to posttest rose 20% for people who received the ROPE training and only 1% for those who did not.
"We not only proposed a new framework for teaching prompt engineering but also created a training tool to assess how well participants do and how well the paradigm works," said Ken Koedinger, a University Professor in the HCII. "It's not just our opinion that ROPE works. The training module backs it up."
Generative AI models have already altered the content of introductory programming and software engineering courses as traditional programming evolves into natural language programming. Instead of writing software, an engineer can write a prompt directing AI to develop the software. This paradigm shift could create new opportunities for students, allowing them to work on more complex development tasks earlier in their studies and advancing the field.
The researchers did not design ROPE solely for software engineers. As humans continue to integrate AI into daily life, clearly communicating with machines will become an important aspect of digital literacy. Armed with the ability to write successful prompts and an AI model up to the task, people without coding or software engineering backgrounds can create applications that will benefit them.
"We want to empower more end users from the general public to use LLMs to build chatbots and apps," Ma said. "If you have an idea, and you understand how to communicate the requirements, you can write a prompt that will create that idea."
The researchers have open-sourced their training tools and materials, aiming to make prompt engineering more accessible to nonexperts.
Journal
ACM Transactions on Computer-Human Interaction
Article Title
What Should We Engineer in Prompts? Training Humans in Requirement-Driven LLM Use