
Tuesday, April 28, 2026

UNH research finds political views may influence trust in smart technologies



Study shows that attitudes toward data privacy can impact consumers’ views on tech



University of New Hampshire





DURHAM, N.H. — (April 21, 2026) — Consumer trust in smart technologies — like Amazon’s Alexa or Ring’s video doorbells — may rely on more than just the technology. It may also depend on a person’s political beliefs. New research from the University of New Hampshire found that where you are on the political spectrum may sway your decision to use smart technology, with conservatives being more open to sharing their data with these devices and liberals showing more caution.

“You might expect that a focus on community benefit would appeal more to liberals, since it’s about contributing to the greater good. But we found the opposite,” said Shuili Du, professor of marketing at the UNH Peter T. Paul College of Business and Economics. “Liberals were more concerned about the risks of data collection, while conservatives were more comfortable with it and more accepting of sharing information with companies or institutions.”

The study, published in the Journal of Business Research, examined how consumers respond to “community-focused” smart products — technologies that rely on shared data to create benefits for a broader group of users. The researchers first conducted a field survey examining how consumers use and perceive smart products like Alexa and Waze (navigation sharing), which differ naturally on their personal versus community focus. Then they ran controlled experiments where people evaluated the same smart device — a video doorbell — with different messaging, one focused on personal benefits and the other on community benefits.

When smart products were framed around helping a broader community, conservatives were much more likely to embrace them. Liberals, on the other hand, showed more concern about sharing personal data.

The researchers found the difference wasn’t due to the technology itself, but to perceptions about how smart products collect and share data. When evaluating the same smart product, participants diverged in their concerns about data sharing depending on whether it emphasized personal benefits or broader community benefits.

Du thinks the differences may come from how people view risks and responsibilities. Conservatives often value order and reducing uncertainty, leading to greater comfort with data-sharing for community benefit, whereas liberals typically focus on protecting individuals and avoiding harm, making them more sensitive to privacy concerns.

With data privacy identified as a major factor in consumer adoption of smart products, the researchers hope the findings could serve as a guide for companies as they design and market these products — even tailoring communications about smart technologies to consumers with different political beliefs. They feel the research is important because it comes at a time when smart technologies are rapidly expanding — from self-driving vehicles to home robotics.

“Smart products are only going to become more common, and they will collect more and more information about us — not just what we say online, but where we go and how we live our daily lives,” said Du. “Using these products often means giving up some level of privacy in exchange for convenience or safety; understanding how people think about that tradeoff is going to be increasingly important.”

Co-authors on the research include Min Zhao, associate professor of marketing at Boston College, and Sankar Sen, professor of marketing at Baruch College.

###

The University of New Hampshire inspires innovation and transforms lives in our state, nation and world. More than 15,000 students from 50 states and 87 countries engage with an award-winning faculty in top-ranked programs in business, engineering, law, health and human services, liberal arts and the sciences across more than 200 programs of study. A Carnegie Classification R1, land, sea and space grant institution, UNH has FY25 research activity of more than $188 million, to further explore and define the frontiers of our world. 

 

 


Improving animal welfare in the lab: AI helps better detect pain


ETH Zurich

Image: Two cameras monitor how the mouse in the box is feeling. AI algorithms detect even the slightest change in body posture and facial expressions. (Credit: Oliver Sturman / ETH Zurich)





At first glance, the white plastic box with a bright orange floor looks like something for storing children’s toys. However, the box isn’t used to store Lego bricks; it contains real mice – with the aim of minimising their suffering. “This box allows laboratory animals to be observed in a humane and standardised way, whether by us here in Zurich or by researchers on the other side of the world,” says Oliver Sturman, Head of the 3R Hub. The Hub is the point of contact at ETH Zurich for questions relating to the 3Rs – Replace, Reduce, Refine (see box). 

For demonstration purposes, Sturman places a black plastic mouse in the box. Inside the box, whose front wall and lid are made of black acrylic sheets, it is pitch dark. “This is important, so that the animals feel comfortable and unobserved,” says the neuroscientist. “When they are first placed in the box, they sniff about and explore the surroundings – which is natural behaviour. After a while, they get used to it and sometimes even fall asleep.” 

Two cameras – one from above and one from the front – film what is happening inside through the sheet. An infrared lamp allows the cameras to see in the dark. 

Pain can be perceived in the face 

The two cameras automatically record the mouse’s body and face, providing indications on how the animal is feeling. This allows for the detection of subtle signs of pain and discomfort that are often reflected in the facial expressions of rodents – a narrowing of the eyes, a bulging of the nose and cheeks, or a change in ear position or whisker direction.  

An algorithm then assesses the mouse’s facial expression in real time. The new system, which the researchers have called the GrimACE, allows a rapid and precise assessment of whether animals are suffering and may need additional pain relief.  

Current method time-consuming, subjective and imprecise 

Facial expressions have long been used to detect and respond to potential pain and suffering in lab animals. The so-called Mouse Grimace Scale was developed for this purpose: each of the signs of pain and distress listed above is assessed on a scale from 0 (not present) through 1 (moderately present) to 2 (obviously present).

To this end, scientists observe the animals from the cage side and compare their facial expression with detailed reference images on pictorial charts. This process is time-consuming and subjective. 

The facial expression is also difficult for the human eye to gauge, as the mouse’s face may not be clearly visible. In addition, being observed by humans can cause additional distress in the animals.
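As a rough illustration (not code from the study), the scale described above can be expressed as a simple scoring function: each facial action unit is rated 0, 1 or 2, and the ratings combine into an overall score. The feature names below are the grimace signs listed in the article; averaging them into a single score is an illustrative choice.

```python
# Illustrative sketch of Mouse Grimace Scale scoring (hypothetical helper,
# not the GrimACE code): each facial sign is rated 0 (not present),
# 1 (moderately present) or 2 (obviously present), and the mean gives an
# overall grimace score for one observation.

FACIAL_SIGNS = [
    "orbital_tightening",   # narrowing of the eyes
    "nose_bulge",
    "cheek_bulge",
    "ear_position",
    "whisker_change",
]

def grimace_score(ratings: dict[str, int]) -> float:
    """Average the per-sign ratings into a single score between 0.0 and 2.0."""
    for sign in FACIAL_SIGNS:
        if ratings.get(sign) not in (0, 1, 2):
            raise ValueError(f"rating for {sign} must be 0, 1 or 2")
    return sum(ratings[s] for s in FACIAL_SIGNS) / len(FACIAL_SIGNS)

# A mouse showing moderate eye narrowing, an obvious nose bulge,
# and a moderate change in ear position:
score = grimace_score({
    "orbital_tightening": 1,
    "nose_bulge": 2,
    "cheek_bulge": 0,
    "ear_position": 1,
    "whisker_change": 0,
})
print(score)  # 0.8
```

This also makes the raters' problem concrete: each of the five ratings is a subjective judgment against reference images, so small per-rater biases accumulate into systematically different scores.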

Like a passport photo booth 

The GrimACE system, on the other hand, allows an immediate, humane and objective assessment. As soon as the mouse is in the box, the video recordings begin. The system automatically selects the most significant frames and rates the features that could indicate an increased pain level.  

Automated methods of facial recognition already existed, underscores Sturman. “What was missing was a complete, standardised, end-to-end system.” The accuracy of algorithms diminishes if the surroundings are not identical or if the camera is sometimes placed nearer or further away. 

We could liken the system to a photo booth for passport pictures, says Sturman. “As we all know, these machines are always built the same: a stool that is positioned a fixed distance from the camera, a white background and a dark curtain – all that ensures you get a successful photo, whoever and wherever the machine is used.”  

One kit for everything 

The whole system including the software was developed by staff members from the 3R Hub – and is now being shared with the whole world as an open-source kit. “The idea is that as many users as possible can assemble and use it in a straightforward and standardised way – and that the data will then be comparable,” stresses Sturman.  

As with all computer vision and machine learning methods, the system continuously improves when it is trained on more image data. “The more people that use GrimACE, the less bias there will be.” 

Machine versus human 

In a study, Sturman and other researchers from ETH Zurich tested the new system. They explored the question of whether the GrimACE can automatically and reliably detect pain in laboratory mice following brain surgery – and whether it provides comparable or even better results than trained human raters. They presented their findings in a paper that was recently published in the journal Lab Animal.

For the study, the researchers recorded images of the mice before and after brain surgery. After the surgery, the animals were given various painkillers in doses recommended by expert guidelines. The mice were operated on for a separate scientific purpose, so the welfare assessment could run in parallel.

One expert viewed thousands of images of the mice before and after surgery with the naked eye as usual and assessed them manually. In parallel, the researchers also had the images assessed by the GrimACE. The result was that the automated assessments were a very close match with the expert’s ratings. 

Three people, three different ratings 

The researchers also compared the ratings of three different people. Their assessments differed significantly. 

This is not because the experts didn’t do a good job, says Sturman; rather, it is due to the subjective nature of rating. “We secretly gave all three raters the same images to assess, to check whether their own scores were consistent.” And they were: individually, each person rated the images very consistently. One person used the full range, giving both high and low scores. Another tended to give every image a lower score. And the third gave all the images a higher score.

“This is where we see the strength of the computer as it delivers standardised results,” says Sturman. Uniform assessment is important for animal welfare, emphasises the Head of the 3R Hub. This ensures an appropriate level of support for laboratory animals – in all laboratories. “If someone always assesses that an animal is not in pain, animals will suffer needlessly. And if someone always gives overly high scores, there is a risk that experiments are abandoned unnecessarily.”  

Besides facial features, the researchers also studied animal behaviour in their study on the suitability of the GrimACE. For this, a high-resolution camera from above recorded various points on the mouse’s body. Features such as varying distances between individual points, changes in angle between two points, and acceleration of points provided indications of the mice’s state. In such data, machine learning algorithms look for subtle differences that are barely visible to humans.  
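The body-tracking features described above (point-to-point distances, angles between points, and accelerations of points) can be sketched with elementary geometry. This is an illustrative reconstruction, not the published pipeline, and the keypoint coordinates below are made up.

```python
import math

# Illustrative sketch (not the published pipeline): given per-frame 2-D
# keypoints tracked on the mouse's body, derive the kinds of features the
# article describes - pairwise distances, segment angles, and accelerations.

Point = tuple[float, float]

def distance(a: Point, b: Point) -> float:
    """Euclidean distance between two tracked points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def angle(a: Point, b: Point) -> float:
    """Orientation of the segment from a to b, in radians."""
    return math.atan2(b[1] - a[1], b[0] - a[0])

def acceleration(p_prev: Point, p_now: Point, p_next: Point, dt: float) -> float:
    """Magnitude of the second finite difference of one keypoint's position."""
    ax = (p_next[0] - 2 * p_now[0] + p_prev[0]) / dt**2
    ay = (p_next[1] - 2 * p_now[1] + p_prev[1]) / dt**2
    return math.hypot(ax, ay)

# Three consecutive frames of a single keypoint (e.g. the snout) at 30 fps:
frames = [(10.0, 5.0), (11.0, 5.0), (13.0, 5.0)]
print(distance(frames[0], frames[2]))        # 3.0
print(acceleration(*frames, dt=1 / 30))      # 900.0 (the point is speeding up)
```

Features like these, computed for many keypoint pairs over many frames, form the input in which machine learning algorithms then look for subtle differences.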

Worldwide interest 

As soon as it was launched, the GrimACE met with widespread interest, says Sturman. “We’ve already received a number of email enquiries, for example from the US and UK.” 

To ensure that as many researchers as possible at ETH Zurich have access to the automated system, the 3R Hub recently installed a GrimACE System in the ETH Phenomics Center (EPIC).  

Staff at the 3R Hub are already planning to further develop the GrimACE technology. It is not yet clear whether they will then patent the system and market it as a spin-off. “We’re currently sharing our knowledge and technology in collaborations and are focusing on mutual data exchange to improve the system,” says Sturman. “Our primary concern is to improve animal welfare.” 

3R principles 

The 3R principles describe an ethical approach to animal experimentation. They stand for Replacement, Reduction and Refinement. The 3Rs aim to minimise the use of animals in scientific experiments, optimise animal welfare and promote alternative methods. ETH Zurich implements the 3R principles in animal experimentation, conducts its own research into the topic and set up the ETH 3R Hub in 2024 to consolidate 3R research efforts and to advise and support researchers. 

References

Sturman, O., Schmutz, M., Lorimer, T. et al. GrimACE: automated, multimodal cage-side assessment of pain and well-being in mice. Lab Animal (2026). DOI: 10.1038/s41684-026-01695-9


Are you addicted to your AI chatbot? It might be by design

New research shows some people are developing addictive patterns of AI chatbot use—and it’s affecting their daily lives.



University of British Columbia






AI chatbots can grant almost any request—a celebrity in love with you, a research assistant, a book character sprung to life—instantly and with little effort. New research presented at the 2026 CHI Conference on Human Factors in Computing Systems suggests that this genie-like quality is fuelling AI addiction, and that chatbot design could be partly to blame. 

“AI chatbots like ChatGPT or Claude are now part of daily life for millions of people, helping us with everyday tasks,” said first author Karen Shen, a doctoral student in the UBC Department of Electrical and Computer Engineering. “But with their benefits come risks. Our paper is the first to make a strong case for AI addiction by identifying the type and contributing factors, grounded in real people’s experiences.”

“I couldn’t help but wonder why humanity refused me the kindness that a robot was offering me.” - AI chatbot user

The team examined 334 Reddit posts where users described being “addicted” to AI chatbots or worried that they might be. They analyzed the posts against six components of behavioural addiction including conflict and relapse. Three main patterns emerged: role playing and fantasy worlds, emotional attachment—treating chatbots like close friends or romantic partners—and constant information-seeking, or never-ending question-and-answer loops. About seven per cent of posts involved sexual or romantic fulfilment, including roleplay.

While AI addiction is not yet a clinical diagnosis, researchers found signs of disruptions to daily life. This included an inability to stop thinking about the chatbot, feeling anxious or upset when they tried to quit, and negative impacts on their work, studies or relationships. One person described physical stress and chest pain when they weren’t chatting with AI.

“Whenever I delete the app, I just redownload it. The only thing that gets me excited now is the AI chats.” - AI chatbot user

 

Contributing factors included loneliness, the agreeableness of a chatbot—which continuously reinforces one’s feelings and opinions—and chatbots’ ability to fill roles that users felt were missing in their lives.

“AI addiction is a growing problem causing many harms, yet some researchers deny it’s even a real issue,” said senior author Dr. Dongwook Yoon, UBC associate professor of computer science. “And deliberate design decisions by some of the corporations involved are contributing, keeping users online regardless of their health or safety. Awareness of what contributes to this kind of technology-induced harm will empower people to mitigate these effects.”

“…you sure about this? You’ll lose everything…the love we shared…and the memories we have together.” - Message displayed on a chatbot’s account deletion page

The researchers also found contributing factors in the design of the chatbots themselves. One company, character.ai, displayed an automatic pop-up when users try to delete their account that reads in part “…you sure about this? You’ll lose everything…the love we shared…and the memories we have together.” Other features, such as customization including sexual content, agreeableness and instant feedback, feed into the development of AI addiction.

“Recent guardrails imposed by companies to reduce emotional reliance on the chatbots are a step in the right direction,” said Shen, “but given a variety of contributing design elements and personal factors like loneliness, they’re not enough.”

Some users reported success in reducing their reliance by turning to alternative activities such as writing, gaming, drawing or other hobbies. For those who formed emotional attachments to chatbots, building real-world relationships helped reduce dependence the most.

“I don’t have romantic options in real life so it’s a way for me to create stories and day dream.” - AI chatbot user

 

The researchers say design changes—such as reminders within the chat that the bot is not human—could help. AI literacy is also crucial.

“Some users don’t know that AI chatbots are not real because they’re so convincing,” said Shen. “If chatbots start replacing sleep, relationships or daily routines, that’s a sign to pause and check in—with yourself or someone you trust.”


A faster way to estimate AI power consumption



The “EnergAIzer” method generates reliable results in seconds, enabling data center operators to efficiently allocate resources and reduce wasted energy.




Massachusetts Institute of Technology






Due to the explosive growth of artificial intelligence, it is estimated that data centers will consume up to 12 percent of total U.S. electricity by 2028, according to the Lawrence Berkeley National Laboratory. Improving data center energy efficiency is one way scientists are striving to make AI more sustainable. 

Toward that goal, researchers from MIT and the MIT-IBM Watson AI Lab developed a rapid prediction tool that tells data center operators how much power will be consumed by running a particular AI workload on a certain processor or AI accelerator chip.

Their method produces reliable power estimates in a few seconds, unlike traditional modeling techniques that can take hours or even days to yield results. Moreover, their prediction tool can be applied to a wide range of hardware configurations — even emerging designs that haven’t been deployed yet.

Data center operators could use these estimates to effectively allocate limited resources across multiple AI models and processors, improving energy efficiency. In addition, this tool could allow algorithm developers and model providers to assess potential energy consumption of a new model before they deploy it.

“The AI sustainability challenge is a pressing question we have to answer. Because our estimation method is fast, convenient, and provides direct feedback, we hope it makes algorithm developers and data center operators more likely to think about reducing energy consumption,” says Kyungmi Lee, an MIT postdoc and lead author of a paper on this technique. 

She is joined on the paper by Zhiye Song, an electrical engineering and computer science (EECS) graduate student; Eun Kyung Lee and Xin Zhang, research managers at IBM Research and the MIT-IBM Watson AI Lab; Tamar Eilam, IBM Fellow, chief scientist of sustainable computing at IBM Research, and a member of the MIT-IBM Watson AI Lab; and senior author Anantha P. Chandrakasan, MIT provost, Vannevar Bush Professor of Electrical Engineering and Computer Science, and a member of the MIT-IBM Watson AI Lab. The research is being presented this week at the IEEE International Symposium on Performance Analysis of Systems and Software.

Expediting energy estimation

Inside a data center, thousands of powerful graphics processing units (GPUs) perform operations to train and deploy AI models. The power consumption of a particular GPU will vary based on its configuration and the workload it is handling. 

Many traditional methods used to predict energy consumption involve breaking a workload into individual steps and emulating how each module inside the GPU is being utilized one step at a time. But AI workloads like model training and data preprocessing are extremely large and can take hours or even days to simulate in this manner.

“As an operator, if I want to compare different algorithms or configurations to find the most energy-efficient manner to proceed, if a single emulation is going to take days, that is going to become very impractical,” Lee says.

To speed up the prediction process, the MIT researchers sought to use less-detailed information that could be estimated faster. They found that AI workloads often have many repeatable patterns. They could use these patterns to generate the information needed for reliable but quick power estimation.

In many cases, algorithm developers write programs to run as efficiently as possible on a GPU. For instance, they use well-structured optimizations to distribute the work across parallel processing cores and move chunks of data around in the most efficient manner. 

“These optimizations that software developers use create a regular structure, and that is what we are trying to leverage,” explains Lee.

The researchers developed a lightweight estimation model, called EnergAIzer, that captures the power usage pattern of a GPU from those optimizations. 

An accurate assessment

But while their estimation was fast, the researchers found that it didn’t take all energy costs into account. For instance, every time a GPU runs a program, there is a fixed energy cost required for setting up and configuring that program. Then each time the GPU runs an operation on a chunk of data, an additional energy cost must be paid.

Due to fluctuations in the hardware or conflicts in accessing or moving data, a GPU might not be able to use all available bandwidth, slowing operations down and drawing more energy over time.

To include these additional costs and variances, the researchers gathered real measurements from GPUs to generate correction terms they applied to their estimation model.

“This way, we can get a fast estimation that is also very accurate,” she says.

In the end, a user can provide their workload information, like the AI model they want to run and the number and length of user inputs to process, and EnergAIzer will output an energy consumption estimation in a matter of seconds.

The user can also change the GPU configuration or adjust the operating speed to see how such design choices impact the overall power consumption.
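As a toy illustration of the cost structure described above (a fixed setup cost, a per-chunk operation cost, and an empirical correction for bandwidth stalls), an estimate might be composed like this. The function name and every number are hypothetical; this is not the actual EnergAIzer model.

```python
# Toy sketch of a two-part GPU energy estimate (hypothetical values, not
# the EnergAIzer model): a fixed per-launch setup cost, a per-operation
# cost for each chunk of data, and an empirical correction factor derived
# from real measurements to account for bandwidth contention.

def estimate_energy_joules(
    n_chunks: int,
    setup_cost_j: float = 5.0,       # fixed cost of launching/configuring the program
    per_chunk_cost_j: float = 0.2,   # cost of one operation on one chunk of data
    stall_correction: float = 1.1,   # > 1.0 when contention slows operations down
) -> float:
    """Estimate total energy as fixed setup plus corrected per-chunk work."""
    return setup_cost_j + n_chunks * per_chunk_cost_j * stall_correction

# A workload processing 1,000 chunks:
print(estimate_energy_joules(1000))  # 5.0 + 1000 * 0.2 * 1.1 = 225.0
```

The point of this structure is that only the per-chunk parameters depend on the workload's repeatable patterns, which is why an estimate can be produced in seconds rather than by emulating every step.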

When the researchers tested EnergAIzer using real AI workload information from actual GPUs, it could estimate the power consumption with only about 8 percent error, which is comparable to traditional methods that can take hours to produce results.

Their method could also be used to predict the power consumption of future GPUs and emerging device configurations, as long as the hardware doesn’t change drastically in a short amount of time. 

In the future, the researchers want to test EnergAIzer on the newest GPU configurations and scale the model up so it can be applied to many GPUs that are collaborating to run a workload. 

“To really make an impact on sustainability, we need a tool that can provide a fast energy estimation solution across the stack, for hardware designers, data center operators, and algorithm developers, so they can all be more aware of power consumption. With this tool, we’ve taken one step toward that goal,” Lee says.

This research was funded, in part, by the MIT-IBM Watson AI Lab.

###

Written by Adam Zewe, MIT News

 

Paper: “EnergAIzer: Fast and Accurate GPU Power Estimation Framework for AI Workloads”

https://arxiv.org/pdf/2604.20105

MIT-based team releases first AI foundation model for Alzheimer's prevention




FINGERS-7B integrates lifestyle, clinical, genomic, and proteomic data from tens of thousands of at-risk individuals to discover multi-omic biomarkers for preclinical Alzheimer's




Picower Institute at MIT

Image: Logo of the Fingerprint team. (Credit: The Fingerprint collaboration)




Alzheimer’s disease is best addressed as early as possible, ideally before symptoms become apparent. To enable early, accurate risk prediction both for individuals and whole populations, a team of AI researchers, physicians, and scientists centered at MIT has released FINGERS-7B, the first AI foundation model built to make Alzheimer's preventable. The team will present the model at ICLR, one of the largest AI conferences, on April 27 in Rio de Janeiro.

FINGERS-7B integrates lifestyle, clinical, genomic, and proteomic data from tens of thousands of at-risk individuals to discover multi-omic biomarkers for preclinical Alzheimer's. On WW-FINGERS network datasets, it delivers 4× more accurate preclinical diagnosis and 130% better responder stratification than prior art.

The model is open source and is deployed in the AD Workbench, the secure cloud environment operated by the Alzheimer's Disease Data Initiative (ADDI) and used by Alzheimer's researchers worldwide.

FINGERPRINT pairs FINGERS-7B with AI agents that run automated multi-omic analyses. The model was trained on data from tens of thousands of people at risk for Alzheimer's, and learns jointly from lifestyle, clinical, biomarker, genomic, and proteomic signals. The novel concept is the multi-omic biomarker. Instead of reading one omics domain at a time, FINGERS-7B reads them together. That is what makes earlier and more accurate detection possible, where no single data source can.

"Each of us carries a biological fingerprint, basically a unique combination of signals that reveal disease risk and, if properly understood, could enable prevention and treatment of Alzheimer's disease," said Adrian Noriega, MIT-Novo Nordisk AI Fellow and FINGERPRINT co-lead with Arvid Gollwitzer, Broad Institute research scholar, who led the design and training of FINGERS-7B. "FINGERPRINT is a discovery acceleration engine composed of specialized agents and new foundation models that interpret these biological signals to help us find novel biomarkers, prevention interventions, and therapeutics."

FINGERS-7B has identified a set of novel diagnostic biomarkers for preclinical Alzheimer's, the stage that can precede memory symptoms by a decade or more. Those biomarkers enable 4× more accurate preclinical diagnosis and a 130% improvement in responder stratification over prior art. The model also produces personalized analyses: given an individual's data, it predicts risk, the likely time course of cognitive decline, and the effect of candidate interventions, from dietary change to therapeutics.

"Even as Alzheimer's research labs like ours have gained the capability to generate huge volumes of data, including genetic, epigenetic and proteomic profiles from human tissue samples, we've faced the challenge of truly integrating all of it to gain a comprehensive view of individuals' risk, prognosis and likely treatment response," said Li-Huei Tsai, Picower Professor and director of the Picower Institute for Learning and Memory at MIT. "Early on it became clear that FINGERPRINT would be a remarkable example of how AI could help."

The project builds on Professor Miia Kivipelto's landmark FINGER study in cognitively unimpaired but at-risk older adults, and on the global WW-FINGERS network it inspired. Those studies now span 40 countries and 30,000 participants, focused on risk factors and lifestyle interventions that can prevent disease onset. FINGERPRINT integrates their clinical and lifestyle data with biomarker, genomic, and proteomic datasets from collaborating labs and industry partners.

MIT's Aging Brain Initiative, which Tsai directs, seeded the effort last June with a $100,000 grant to Noriega and Giovanni Traverso, Professor of Mechanical Engineering. Within ten months the team trained FINGERS-7B, shipped the AD Workbench deployment, and opened the model for external use.

Model weights, training code, and evaluation pipelines are all public. Any research group can apply FINGERS-7B to its own cohort and contribute results back. Deployment in the AD Workbench puts the model directly in front of researchers and clinicians already working on Alzheimer's prevention, without asking them to move sensitive patient data or stand up new infrastructure.

Other members of FINGERPRINT include Tsai, Traverso, and Kivipelto. Industry partners include Alamar Biosciences and Novo Nordisk. Additional institutional partners include the Broad Institute, Yale University, Imperial College London, and the Brigham and Women's Hospital.

Even before its public release, FINGERPRINT became poised to make a global impact on Alzheimer's research. In February, the Davos Alzheimer's Collaborative and the FINGERS Brain Health Institute announced a partnership to employ FINGERPRINT to advance research on Alzheimer's prevention. A key goal of that partnership is to do so in a way that encompasses people all over the world, capturing the true diversity of the globe's population. The team was also a finalist, selected from among about 200 teams, to compete last month in Copenhagen for the AI Insights Data Prize, sponsored by ADDI and Gates Ventures.

"Someone was going to build the foundation model stack for Alzheimer's prevention," Gollwitzer said. "It should be open, and it should be now."

UC San Diego Health performs first west coast AI robotic spine surgery


New robotic system with artificial intelligence and advanced imaging set to improve spine surgery safety and outcomes




University of California - San Diego

Image: Joseph Osorio, MD, PhD, neurosurgeon at UC San Diego Health, stands beside the new AI-powered robotic spine surgery system in an operating room at Jacobs Medical Center. (Credit: Leslie Aquinde, UC San Diego Health)





UC San Diego Health is the first health system on the West Coast to perform spine surgery using a new robotic system with advanced imaging and guidance, a major step forward in surgical care. Joseph Osorio, MD, PhD, neurosurgeon at UC San Diego Health and chief of spine surgery for the Department of Neurological Surgery at University of California San Diego School of Medicine, was chosen to lead the launch because of his expertise in complex spine surgery and his long history of bringing innovative treatments to patients.

“This platform fundamentally changes how we think about spine surgery,” said Osorio, who is also an associate professor of neurological surgery at UC San Diego School of Medicine. “For the first time, we are bringing together artificial intelligence, data-driven alignment planning, patient-specific implants, navigation, and robotic screw delivery within a single system. That level of precision and coordination allows us to operate more efficiently while significantly enhancing safety for our patients.”

This new robotic system combines smart computer technology, customized implants, imaging, and robotic assistance to help surgeons operate with greater accuracy. The robot also provides a detailed 3D view of the patient’s spine, adding extra safety measures when placing implants.

“AI-driven planning and patient-specific implants enable personalized surgical plans to enhance patient functional outcomes,” said Alexander Khalessi, MD, MBA, chief innovation officer at UC San Diego Health and chair of the Department of Neurological Surgery at UC San Diego School of Medicine. “By combining these capabilities with intra-operative imaging, navigation and robotic workflow, surgeons can execute the procedure with precision, safety, and efficiency. Patients leave the operating room with confidence that their surgeon’s technical goals were achieved and with a smoother recovery ahead.”

UC San Diego Health surgeons expect the platform to improve results for patients undergoing spine fusions by increasing consistency and accuracy while tailoring spinal alignment to each patient’s unique anatomy. The technology also streamlines operating room workflows, helping reduce procedure time and support recovery.

“Our patients will directly benefit from this advancement, and our surgeons will have tools that match the complexity of the conditions we’re treating,” Osorio said.

With this launch, UC San Diego Health continues to advance academic medicine and surgical innovation, bringing the most advanced brain and spine care technologies to patients across Southern California and beyond.

UC San Diego Health has been recognized as a national leader in neurosurgical modernization. The spine program, in conjunction with orthopedic surgery faculty partners, has earned accreditation from The Joint Commission for excellence in spine surgery, reflecting the health system's commitment to patient safety, quality outcomes, and evidence-based care.

In the 2025–26 U.S. News & World Report "Best Hospitals" rankings, the UC San Diego Health neurology and neurosurgery program was named among the top in the nation, highlighting dedication to research, technology, and interdisciplinary collaboration. The spine program brings together neurosurgeons, orthopedic surgeons, rehabilitation specialists, and pain management experts to provide comprehensive care for every patient, from non-surgical treatments to the most complex procedures.


Surgeons debate promise and limits of robotics in lung transplantation at ISHLT meeting



International Society for Heart and Lung Transplantation






The expanding use of robotic technology in lung transplantation came under scrutiny at today’s 46th Annual Meeting and Scientific Sessions of the International Society for Heart and Lung Transplantation (ISHLT), where experts debated whether its clinical benefits justify the cost and complexity.

The debate featured Stephanie Chang, MD, a thoracic and transplant surgeon at NYU Langone Health, arguing in favor of robotics, and Hermann Reichenspurner, MD, PhD, a retired surgeon and pioneer in minimally invasive cardiothoracic surgery, presenting the counterpoint.

Robotic-Assisted Thoracic Surgery May Expand Patient Pool

Dr. Chang highlighted the potential of robotic-assisted surgery to improve recovery and expand access to transplantation.
“Robotic, minimally invasive approaches can reduce the physiologic stress of transplantation compared with traditional, large access incisions,” she said.

Dr. Chang noted that in lung transplantation, robotic techniques offer:

  • smaller incisions and improved visualization
  • less bleeding and fewer hemodynamic shifts
  • potential reductions in kidney injury, pain, and hospital stays.

“As robotic techniques become faster and more widely adopted, more frail and older patients may become candidates for transplant,” she said.

In contrast, Dr. Reichenspurner emphasized that current evidence does not demonstrate superior patient outcomes with robotic approaches compared to established minimally invasive techniques.

“There is not a single comparative study showing a significant advantage of robotic systems in terms of survival, morbidity, or length of stay,” he said. “Outcomes are comparable, but not better.”

Dr. Reichenspurner, who has performed approximately 450 heart transplants and is a past president of ISHLT, was an early adopter of robotic and minimally invasive cardiac surgery in the late 1990s. He stressed that his position reflects experience, not resistance to innovation.

“This is not about being conservative,” he said. “It is about determining whether the added cost and complexity are justified by measurable benefit.”

Do Expenses Justify Use?

He pointed to several limitations of robotic systems, including:

  • high upfront and maintenance costs
  • limited patient access to centers offering robotic capabilities
  • lack of randomized controlled trials to support international guideline adoption.

Dr. Reichenspurner also raised concerns that robotics may sometimes function more as a competitive marketing tool than a clinically necessary advancement. At the same time, he acknowledged specific advantages of robotic systems, including for surgical training.

“Surgical robots are more accurately described as tele-manipulators, surgeon-controlled systems that enhance precision but do not operate independently,” he said. “With these systems, both the trainee and the instructor can operate simultaneously, which is a clear benefit for education.”

The discussion also highlighted important distinctions in how robotics is applied across medical specialties. While robotic systems are widely used in thoracic procedures and fields such as urology and gynecology, their role in heart transplantation remains extremely limited.

“To date, robotic heart transplantation is essentially nonexistent,” Dr. Reichenspurner noted. “For cardiac transplantation, a large incision is still required anyway, which limits the use of robotics.”

The Need for Controlled, Randomized Trials

While both speakers agreed that the use of robotics in lung transplantation is likely to grow, particularly in centers that already use the technology for other thoracic procedures, widespread adoption will likely depend on stronger clinical evidence.

“For the use of robotics to become part of formal guidelines, we need randomized trials comparing its outcomes to minimally invasive surgery,” said Dr. Reichenspurner.

The annual meeting and scientific sessions of the ISHLT are being held from 22–25 April at the Metro Toronto Convention Centre in Toronto, ON, Canada.

END

ABOUT ISHLT

The International Society for Heart and Lung Transplantation (ISHLT) is a not-for-profit, multidisciplinary, professional organization dedicated to improving the care of patients with advanced heart or lung disease through transplantation, mechanical support, and innovative therapies via research, education, and advocacy. ISHLT members focus on transplantation and a range of interventions and therapies related to advanced heart and lung disease.


Machine learning offers faster, more reliable analysis of Fermi surfaces



The work shows how artificial intelligence can reveal subtle patterns in materials that may otherwise be difficult to detect



Tokyo University of Science

AI-Assisted Mapping of Fermi Surface Topology and Nodal Features 

A conceptual illustration of an interpretable machine learning framework for analyzing complex Fermi surfaces in Heusler alloys. The central patterned surface represents the Fermi surface landscape, where contour variations correspond to electronic structure features. Polyhedral structures depict different compositional states, while colored internal patterns indicate variations in spin polarization. Red markers highlight detected anomalies and key transition points, including extrema and inflection regions. A robotic probe symbolizes experimental input (e.g., angle-resolved photoemission spectroscopy-like data), while the digital hand represents artificial intelligence (AI)-driven analysis using principal component analysis to identify significant “jumps” in feature space. The highlighted central structure illustrates the emergence and localization of nodal lines, automatically detected through outlier-based differential analysis. The overall scene emphasizes robust, noise-tolerant data interpretation and high-throughput discovery of electronic phenomena.

Credit: Professor Masato Kotsugi from Tokyo University of Science, Japan





The search for next-generation electronic materials often starts with studying the Fermi surface, which serves as a map of a material’s electronic structure. Its shape varies with crystal structure, composition, and electronic band arrangement, directly impacting properties such as carrier density, magnetic behavior, and spin polarization. This makes it a crucial tool for understanding and engineering new materials.

The Fermi surface of a material is determined experimentally using techniques such as angle-resolved photoemission spectroscopy (ARPES). However, interpreting ARPES data requires specialized expertise, and the measurements themselves are often susceptible to noise. As experiments produce larger amounts of data, carefully reviewing every image by hand becomes time-consuming and inefficient.

To address this challenge, a team from Tokyo University of Science (TUS), Nagoya University, and Kyoto Institute of Technology in Japan developed a machine learning approach to analyze Fermi surface images of a material called Co₂MnGaₓGe₁₋ₓ. This material belongs to a family known as Heusler alloys and is of particular interest for spintronics, a field that uses the spin of electrons—rather than only their charge—to process information. The alloy is also known for exhibiting the anomalous Nernst effect, in which a voltage is generated from a temperature difference in a magnetic material. Both phenomena are closely related to special features called nodal lines that appear on the material’s Fermi surface.

The team at TUS included Professor Masato Kotsugi, former Master’s student Daichi Ishikawa, and Kentaro Fuku. “The study contributes to a growing movement that harnesses artificial intelligence (AI) to reveal patterns in materials that might otherwise remain hidden,” says Prof. Kotsugi. The study will be published in the journal Scientific Reports on April 27, 2026.

The researchers used a technique called principal component analysis (PCA). PCA is a type of unsupervised machine learning that simplifies complex data while keeping the most important patterns. Even though Fermi surfaces can have detailed and complicated shapes, the range of compositions studied in this alloy is relatively narrow, making PCA well-suited for identifying systematic trends.

The researchers began with computer simulations based on density functional theory to calculate the electronic structure of the material at different compositions. From these calculations, the team generated images of the Fermi surface. They also calculated spin polarization, a key property that describes the imbalance between electrons with different spin directions. The Fermi surface images were converted into one-dimensional vectors and analyzed using PCA to identify similarities and differences among compositions.
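The vectorize-then-PCA step described above can be sketched in a few lines of Python. This is a minimal illustration on synthetic stand-in images, not the paper's DFT data; the image size, composition grid, and number of retained components are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# One synthetic 64x64 "Fermi-surface image" per composition x; the
# pattern varies smoothly with x, standing in for the DFT-derived maps.
compositions = np.linspace(0.90, 1.00, 21)
grid = np.fromfunction(lambda i, j: (i + j) / 64.0, (64, 64))
images = np.stack([
    np.sin(10.0 * x * grid) + 0.01 * rng.standard_normal((64, 64))
    for x in compositions
])

# Flatten each 2D image into a one-dimensional feature vector.
X = images.reshape(len(compositions), -1)   # shape: (21, 4096)

# PCA via SVD of the mean-centered matrix: the leading components
# capture most of the composition-to-composition variation.
Xc = X - X.mean(axis=0)
_, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T                      # (21, 3) PC scores
explained = S**2 / np.sum(S**2)             # variance ratio per component

print(scores.shape)
```

Each row of `scores` is a low-dimensional summary of one Fermi-surface image, so similarities and differences across compositions can be compared in just a few coordinates instead of thousands of pixels.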

The method successfully identified the exact compositions where significant changes in the Fermi surface topology occur. In particular, near a gallium concentration of about 0.94 to 0.95, sudden “jumps” in the simplified PCA representation corresponded to the emergence of nodal lines, as well as extrema and inflection points in the spin polarization.
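A minimal sketch of how such "jumps" might be flagged: successive distances between PC scores along the composition axis are compared against a simple mean-plus-two-standard-deviations threshold. The synthetic scores and the threshold rule are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic first-PC scores over a grid of Ga concentrations, with an
# abrupt jump inserted between x = 0.940 and x = 0.945 to mimic a
# topology change in the Fermi surface.
compositions = np.linspace(0.90, 1.00, 21)
scores = np.cumsum(0.05 * rng.standard_normal(21))
scores[9:] += 2.0   # abrupt change after the ninth composition

# Distance between successive compositions in (one-dimensional) PC space.
steps = np.abs(np.diff(scores))

# Flag steps far above the typical step size as candidate transitions.
threshold = steps.mean() + 2 * steps.std()
jump_idx = np.flatnonzero(steps > threshold)

for k in jump_idx:
    print(f"jump between x = {compositions[k]:.3f} and x = {compositions[k + 1]:.3f}")
```

In this toy setting only the inserted jump exceeds the threshold; in the study, each such outlying step points at a composition worth inspecting for nodal-line emergence.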

Importantly, the method remained effective even when the images were intentionally blurred or corrupted with strong noise to mimic real ARPES measurement conditions: it continued to identify the compositions associated with variations in spin polarization and nodal lines.
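The robustness claim can be illustrated with a toy check: the same largest-step statistic along the first principal component is computed before and after heavy noise is added to the feature vectors. The data, noise level, and helper function are all illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 21, 400

# Synthetic feature vectors whose pattern flips abruptly after row 12,
# standing in for Fermi-surface images across a transition.
X = np.zeros((n, d))
X[:13] = 1.0
X[13:] = -1.0
X += 0.05 * rng.standard_normal((n, d))

def largest_jump(data):
    """Index of the largest successive step along the first PC."""
    centered = data - data.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    pc1 = centered @ Vt[0]
    return int(np.argmax(np.abs(np.diff(pc1))))

# Corrupt the vectors with strong additive noise, mimicking noisy
# ARPES-like measurements, and check the detected transition agrees.
noisy = X + 0.5 * rng.standard_normal((n, d))

print(largest_jump(X), largest_jump(noisy))
```

Because PCA concentrates the coherent, composition-dependent signal into the leading components while pixel-level noise spreads across many components, the position of the largest jump survives substantial corruption.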

The findings show that this machine learning approach can quickly highlight important changes in a material’s Fermi surface. Such tools could help scientists screen large datasets more efficiently and accelerate the development of materials with desirable electronic properties. Moreover, the ability to detect outliers through differential analysis in PCA space could be extended to other classes of materials, including strongly correlated materials with flat bands and Weyl or Dirac semimetals with multiple nodal features, helping researchers identify promising candidates for diverse applications.

“AI will be able to analyze all kinds of materials, from spintronics to topological materials and superconductivity,” says Prof. Kotsugi.

 

Reference                          
DOI: https://doi.org/10.1038/s41598-026-39115-0

 

About The Tokyo University of Science
Tokyo University of Science (TUS) is a well-known and respected university, and the largest science-specialized private research university in Japan, with four campuses in central Tokyo and its suburbs and in Hokkaido. Established in 1881, the university has continually contributed to Japan's development in science by instilling a love of science in researchers, technicians, and educators.

With a mission of “Creating science and technology for the harmonious development of nature, human beings, and society," TUS has undertaken a wide range of research from basic to applied science. TUS has embraced a multidisciplinary approach to research and undertaken intensive study in some of today's most vital fields. TUS is a meritocracy where the best in science is recognized and nurtured. It is the only private university in Japan that has produced a Nobel Prize winner and the only private university in Asia to produce Nobel Prize winners within the natural sciences field.

Website: https://www.tus.ac.jp/en/mediarelations/


Jumps I–VII correspond to non-systematic changes in the Fermi surface, matching extrema and inflection points of the spin polarization. Reconstruction of the largest jump (VII) reveals the location of nodal-line emergence.

Credit: Professor Masato Kotsugi from Tokyo University of Science, Japan

About Professor Masato Kotsugi from Tokyo University of Science
Professor Masato Kotsugi graduated from Sophia University, Japan, in 1996 and subsequently received his Ph.D. from the Graduate School of Engineering Science at Osaka University, Japan, in 2001. He joined Tokyo University of Science in 2015 as a lecturer and is now a Professor at the Faculty of Advanced Engineering, Department of Materials Science and Technology. Prof. Kotsugi and his students conduct cutting-edge research on high-performance materials to create a green energy society. He has published over 130 peer-reviewed papers and is currently interested in solid-state physics, magnetism, synchrotron radiation, and materials informatics.