Tuesday, April 14, 2026

New research finds workers are leveraging AI for career mobility as employers struggle to keep pace



New University of Phoenix Career Institute® Career Optimism Index® study points to an emerging shift in workforce power dynamics




University of Phoenix

AI Is Quietly Putting Power Back in Workers’ Hands 

Image: An infographic depicting key findings of the 2026 Career Optimism Index® study: While today’s workforce appears to be staying put, a quiet shift is underway. AI is helping workers build confidence, develop skills and prepare for future career moves – potentially away from their current employer. (Credit: University of Phoenix)






University of Phoenix Career Institute® today released its sixth annual Career Optimism Index® study, a recurring national workforce research survey of 5,000 U.S. working adults and 1,000 employers fielded January 21–February 6, 2026. The study found that while workers appear to be “job hugging” in a stabilizing labor market where mobility remains limited, many are quietly using AI to build their skills, boost confidence, and position themselves for greater career mobility – potentially preparing for their next move, which could be away from their current employer.

On the surface, the landscape favors employers: companies are deploying AI to increase productivity, reshape teams, and find efficiencies, according to the World Economic Forum’s latest AI at Work report. But the 2026 Index points to a new dynamic underway: half of workers (50%) say AI makes them more confident about pivoting to a new role – signaling an impending shift from “job hugging” to “job hopping” that puts power back in workers’ hands. The last time workplace power was firmly in employees’ hands was in 2022, when employers saw a mass exodus of talent seeking greater mobility and opportunity, as highlighted in the 2022 Career Optimism Index® study.

This year’s Index shows workers are increasingly turning to AI independently to strengthen their readiness in a business environment characterized by historically low turnover rates, as illustrated in the U.S. Bureau of Labor Statistics’ January JOLTS report. More than half of workers (53%) say AI advancements boost confidence in building their skills, while 75% say AI increases their confidence at work, and 81% say it helps them identify new ways to apply their skills for future growth.

This AI-driven confidence is translating into optimism: 63% of workers say they feel positive about job opportunities available to them, rising to 75% among workers who have become comfortable and knowledgeable about AI. As job growth shows signs of strengthening, according to the U.S. Bureau of Labor Statistics’ March Employment Situation report, this may mark the moment many workers have been quietly preparing for – when rising confidence and AI-driven skill building begin to translate into increased career movement. At the same time, nearly half of employers (48%) worry they cannot retain AI-fluent talent, highlighting AI capability as both a competitive advantage and a looming retention risk.

Key Findings from the 2026 Career Optimism Index®

  • AI is increasing workers’ confidence in career mobility: 50% of workers say AI makes them more confident about pivoting into a new role, and workers who are knowledgeable about AI report even greater optimism about available job opportunities than workers overall (75% vs. 63%).
  • Workers are learning AI independently: Half of workers (50%) say they are learning to use AI independently, pointing to strong employee demand for AI skill-building even without formal employer support.
  • Employees are looking for more AI guidance: Many workers say employer support has not kept pace with their needs, with 47% saying their employer should be doing more to incorporate AI into their work and 60% wanting more guidance in learning AI tools.
  • Retention concerns are rising: Nearly half of employers (48%) worry they may be unable to retain AI-fluent talent as demand for those skills continues to grow, and 62% say employees are developing AI skills faster than the organization can adapt.
  • Clear AI strategy improves job satisfaction: Workers whose employer has a clear plan for AI-enabled growth are significantly more likely to be satisfied in their current job than those whose employer does not (87% vs. 72%).

Why This Matters Now

As organizations accelerate AI adoption, the 2026 Index identifies that workforce implications extend beyond productivity and efficiency. For workers, AI is becoming a tool for career growth, confidence, and mobility. For employers, that creates a new challenge: the same capabilities that help employees become more effective in their current roles may also make them feel more prepared to plan their exit.

“AI is changing the workforce conversation in real time,” said John Woods, Provost and Chief Academic Officer at University of Phoenix. “While many organizations are focused on how AI can improve efficiency, our 2026 Career Optimism Index® study shows workers are focused on how to use AI to help them grow and advance their careers. For employers, this is an important moment to lead with AI clarity, because organizations that make AI part of a broader growth strategy for their people may be better positioned to support engagement, satisfaction, and retention – particularly as hiring shows signs of strengthening and workers gain more confidence to explore new opportunities.”

The findings suggest employers have an opportunity to move from AI experimentation to workforce strategy by defining clear AI career pathways and standards, establishing skills assessment systems that support talent management and internal mobility, expanding workforce training and structured enablement, and building AI capability among managers to foster a stronger culture of AI support.

View and download the complete study at https://www.phoenix.edu/career-institute.html.

ABOUT THE CAREER OPTIMISM INDEX®

The Career Optimism Index® study is one of the most comprehensive studies of Americans' personal career perceptions to date. The University of Phoenix Career Institute® conducts this research annually to provide insights on current workforce trends and to help identify solutions to support and advance American careers.

The sixth annual study, fielded between January 21, 2026-February 6, 2026, surveyed 5,000 U.S. adults who either currently work or wish to be working on how they feel about their careers at this moment in time, including their concerns, their challenges, and the degree to which they are optimistic about their careers. The study was conducted among a nationally representative sample of U.S. adults (ages 18 and up). The study also explores insights from 1,000 U.S. employers who are influential or play a critical role in hiring and workplace decisions within a range of departments, company sizes, and industries.

ABOUT UNIVERSITY OF PHOENIX CAREER INSTITUTE®

Housed within the university's College of Doctoral Studies, the Career Institute conducts impactful research and collaborates with leading organizations to explore broad and persistent barriers to career growth. Through the Career Optimism Index® annual studies and targeted reports, the Institute shares actionable insights to inform solutions. For more information, visit www.phoenix.edu/career-institute.

ABOUT UNIVERSITY OF PHOENIX

University of Phoenix is Built for Real Life. 50 Years Strong. The University innovates to help working adults enhance their careers and develop skills in a rapidly changing world through flexible online learning, relevant courses, academic AI pillars, and skills-mapped curriculum for associate, bachelor’s and master’s degree programs. Active students and alumni have access to Career Services for Life® resources including career guidance and tools. For more information, visit phoenix.edu.

Managed misalignment of AI and the impossibility of full AI-human agreement


PNAS Nexus

Hector Zenil 

Image: Dr. Zenil shows on screen a simulation of AI agents interacting and trying to influence one another, along with the various metrics associated with each agent in the arena. (Credit: OIA)





Perfect AI alignment with human values and interests is mathematically impossible, according to a study, but behavioral diversity among AI agents offers the promise of some control. Hector Zenil and colleagues used Gödel’s incompleteness theorem and Turing’s undecidability result for the Halting Problem to show that any LLM complex enough to exhibit general intelligence or superintelligence will also be computationally irreducible and produce unpredictable behavior, making forced alignment impossible. As an alternative, the authors propose a strategy of “managed misalignment,” in which competing AI agents with different cognitive styles and partially overlapping goals operate in distinct roles to check one another.

As each agent attempts to fulfill its own goals with its own modes of reasoning and ethical frameworks—what the authors dub “artificial agentic neurodivergence”—the agents will dynamically aid or thwart one another, preempting ultimate dominance by any single system. The authors simulated a “cognitive ecosystem” by prompting interacting AI agents to represent fully aligned behaviors such as optimizing human utility, partially aligned behaviors such as prioritizing the environment, or unaligned behaviors that pursue arbitrary objectives.

The authors trialed this approach in ethical debates between a range of LLMs in which humans or prompted LLMs tried to disrupt emerging consensus. In these debates, open models showed a wider spectrum of perspectives than proprietary models, creating what the authors characterize as a more resilient AI ecosystem, one that is less likely to converge on a single opinion—which could be harmful in cases where that opinion is not aligned with human interests.


Research uses AI to examine social exchanges and interactions



Carnegie Mellon University





Psychologists have long known that social situations profoundly influence human behavior, yet they have lacked a unified, empirically grounded way to describe them. A new study addresses this problem by using generative artificial intelligence (AI) to systematically classify thousands of everyday social interactions. The researchers analyzed thousands of textual descriptions of two-person social interactions, used generative AI to code the exchanges by their features, and derived a taxonomy of categories of social interactions. They then related these categories to variables such as conflict, power, and duty, producing a comprehensive, data-driven framework for quantifying the structure of interactions.

The study, “The Structure of Social Situations: Insights From the Large-Scale Automated Coding of Text,” by researchers at Carnegie Mellon University and the University of Pennsylvania (Penn), is published in Psychological Science. “Researchers have proposed many frameworks for representing social situations, but due to the diversity and complexity of real-life situations, many of these are partial, non-integrated, and not mapped onto situations encountered in everyday life,” says Taya R. Cohen, Professor of Organizational Behavior and Business Ethics at Carnegie Mellon’s Tepper School of Business, who coauthored the study. “Our work advances the study of social cognition and behavior by using AI to create a more comprehensive framework for the structure of social situations.”

Because social situations exert a profound influence on human behavior and mental life, understanding the structure and dimensions of such situations has been a major topic of psychology research for decades. But gaps remain, leaving the field without a rigorous understanding of how the characteristics that matter most relate to commonly encountered social interactions.

In this study, researchers analyzed more than 20,000 detailed textual descriptions of two-person social interactions. They used a large data set of short stories describing social interactions in daily life (e.g., family situations, workplace interactions, animal interactions, pet mishaps) written by online participants, as well as short situational descriptions from other sources (e.g., blogs, novels, fiction published on social media, reading-comprehension exams).

The study used a combination of large language model (LLM) techniques to extract high-level situational characteristics from the data sets and core situational cues like relationships, activities, locations, and goals (who, what, where, and why) that make up the observable dimensions of each situation.

“A core challenge in psychology is understanding the structure of social situations—the patterns and psychological features that shape how people think, feel, and behave in social contexts,” explains Sudeep Bhatia, Associate Professor of Psychology at Penn, who led the study. “Our work provides a rigorous and integrative framework for mapping out everyday social situations and relating them to key theoretical dimensions in psychology.”
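The coding step the study describes can be sketched as a prompt-and-parse loop. The sketch below is a generic illustration, not the authors’ pipeline: the feature names are invented for the example, and the model call is passed in as a plain function so the snippet runs offline with a stub.

```python
import json

# Illustrative feature labels only; the study's actual coding scheme is broader.
FEATURES = ["conflict", "power_difference", "duty"]

def build_prompt(description):
    """Ask the model to rate each feature of a situation from 0 to 1, as JSON."""
    return (
        "Rate the following social situation on these features, "
        f"each from 0.0 to 1.0: {', '.join(FEATURES)}.\n"
        "Reply with a JSON object only.\n\n"
        f"Situation: {description}"
    )

def code_situation(description, llm):
    """Code one situation description. `llm` is any callable prompt -> text."""
    reply = llm(build_prompt(description))
    scores = json.loads(reply)
    return {f: float(scores[f]) for f in FEATURES}

# A stand-in "model" returning fixed scores, so the loop is runnable offline.
def fake_llm(prompt):
    return '{"conflict": 0.8, "power_difference": 0.3, "duty": 0.6}'

coded = code_situation("Two coworkers argue over who presents the project.", fake_llm)
print(coded["conflict"])  # 0.8
```

At scale, the same loop would be run over tens of thousands of descriptions, with the resulting feature vectors feeding whatever clustering or taxonomy-building step follows.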

The study found systematic associations between situational characteristics proposed by existing taxonomies as well as between situational characteristics and observable cues, replicating and extending findings from earlier studies, but at a much larger scale. In particular, the study drew on a broader and more representative group of typical exchanges experienced by adults.

“Our study offers researchers a rich descriptive catalogue of dozens of classes of situations with which they can test and refine their theories,” Bhatia added. “It can be used to model the distributional structure of situations, as we did, as well as to formally study the effect of situations on interpersonal behavior, perceptions of situations, pursuit of goals, and the interplay between situations and personality.”

Among the study’s limitations, the authors note that their analysis relied on short stories, which resemble the brief autobiographical narratives used in prior research but likely exclude more complex and nuanced situations. In addition, their findings depended on analyses conducted with current-generation LLMs, which have biases and constraints. Finally, the work examined only English-language narratives, which limits the cultural scope of the conclusions.

Widespread AI use narrows society’s creative space



Commercial LLMs challenged with tests of originality and creativity generate results that are more similar to one another than people’s responses.




Duke University





There are already hundreds of thousands of large language models (LLMs) in existence, with a few dozen commercial systems dominating the market. Among options such as GPT-4, Claude and Gemini, many people have their favorite, especially when it comes to creative tasks such as writing.

Those preferences, however, are likely entirely in the eye of the beholder. According to new research from Duke University, the creative outputs of commercial LLMs are more similar to each other than users might hope. When challenged with three standard tasks assessing creativity, answers from commercial LLMs were much more alike than those from their human counterparts.

The results appeared online March 24 in the journal PNAS Nexus.

“People might wonder if different LLMs will take them in different directions with the same prompts for creative projects,” said Emily Wenger, the Cue Family Assistant Professor of Electrical and Computer Engineering at Duke. “This paper basically says no. LLMs are less creative as a population than humans.”

According to a 2024 survey by Adobe, over half of Americans have already used LLMs as creative partners for brainstorming, writing, creating images or writing code. Because an overwhelming majority of those users trust LLMs to help them be more creative, researchers have been trying to find out whether that trust is misplaced.

One seminal paper in this emerging field conducted by Anil Doshi and Oliver Hauser found that writers who used GPT-4 produced more creative stories than humans working alone. However, the same study showed that those LLM-aided stories were more similar to each other than were stories from human writers working solo.

This research, and other papers like it, only looked at people using one specific LLM. Wenger, who studies how data gets into AI models, was curious how these types of results would translate between different LLMs.

“Commercial LLMs have all been trained on the same dataset—the entirety of the internet—and they all have the same goal,” Wenger said. “It seemed likely to me that this would limit the amount of diversity we’d see in their creativity, so I decided to find out.”

To explore her hunch, Wenger turned to Yoed Kenett, a cognitive neuroscientist and associate professor of data and decision sciences at the Technion – Israel Institute of Technology. Together, they settled on three standard tasks used to assess creativity levels and put 22 LLMs to the test against over 100 people.

One test, called the Alternative Uses Test (AUT), challenges participants to name ways an object could be used other than its intended use. For example, using a book as a doorstop, fly swatter or kindling for a fire. The second test, called the Divergent Association Task (DAT), asks participants to name 10 different words, each as different as possible from the others in every sense. Lastly, the Forward Flow (FF) test provides a starting prompt word and asks participants to write down the next word that follows in their mind from the previous word, for up to 20 words. For example: fire, candle, wax, hair, comb, honey, bee, stripes, zebra, etc.

Together, these tests seek to measure the divergent and dissociative thinking abilities that facilitate creativity.
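The population-level homogeneity these tests probe can be illustrated with a toy metric: treat each responder’s AUT answers as a set of words and average pairwise Jaccard similarity across responders. This is purely illustrative and is not the scoring method used in the paper; the toy data below is invented for the example.

```python
from itertools import combinations

def response_words(answers):
    """Flatten one responder's answers into a set of lowercase words."""
    return {w for a in answers for w in a.lower().split()}

def mean_pairwise_jaccard(responders):
    """Average Jaccard similarity over all pairs of responders.

    Higher values mean the population's answers overlap more,
    i.e. the group is more homogeneous.
    """
    sets = [response_words(r) for r in responders]
    sims = [len(a & b) / len(a | b) for a, b in combinations(sets, 2)]
    return sum(sims) / len(sims)

# Invented toy data: three "LLMs" giving near-identical AUT answers for a
# book, versus three "humans" giving more varied ones.
llms = [
    ["doorstop", "paperweight", "fly swatter"],
    ["doorstop", "paperweight", "kindling"],
    ["doorstop", "fly swatter", "kindling"],
]
humans = [
    ["pillow", "secret hiding place", "balance trainer"],
    ["doorstop", "pressing flowers", "monitor stand"],
    ["kindling", "insulation", "fashion accessory"],
]

print(mean_pairwise_jaccard(llms) > mean_pairwise_jaccard(humans))  # True
```

In the study itself, the comparison between LLM and human populations was made with the established scoring procedures for each creativity task, not a word-overlap measure like this one.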

“Significant empirical research over the past few decades highlights how much human creativity depends on variability,” said Yoed Kenett. “The problem, as we and others are increasingly showing, is that while LLMs appear to generate extremely original outputs, they are overly homogenized and not variable in their responses. This could have a detrimental long-term impact on human creative thinking and thus must be addressed.”

The results, which aimed to measure the variability and originality in responses between LLMs and people, were clear. While individual LLMs might outperform individual people in levels of creativity, as a whole, the algorithms’ responses were much more similar to each other than the people’s. Importantly, altering the LLM system prompt to encourage higher creativity only slightly increased their variability—and human responses still won out.

“This work has broad implications as people continue adopting and integrating LLMs into their daily life,” Wenger said. “Over-reliance on these tools will push the world’s work toward the same underlying set of words and grammar, tending to make all writing look the same.”

“If you’re trying to come up with an original concept or product to stand out from the crowd,” Wenger continued, “this work strongly suggests you should bring together a diverse group of people to brainstorm rather than relying on AI.”

CITATION: “Large language models are homogeneously creative.” Emily Wenger and Yoed N. Kenett. PNAS Nexus, 2026, 5, pgag042. DOI: 10.1093/pnasnexus/pgag042

AI with locality awareness



Marc Rußwurm is training AI to be geospatially aware in a project conducted by a new Emmy Noether Group at the University of Bonn.



University of Bonn

Image: Junior Professor Dr Marc Rußwurm heads a new Emmy Noether Group for AI methods at the University of Bonn. (Credit: Gregor Hübl / University of Bonn)





The University of Bonn is hosting a new Emmy Noether Group devoted to AI methods. Junior Professor Marc Rußwurm is developing AI methods for fusing different types of geodata to arrive at a uniform geospatial representation. The German Research Foundation (DFG) will be providing up to 1.4 million euros in funding for the research group over the next six years. The Emmy Noether Program is a framework designed to enable selected postdocs and assistant professors on fixed-term contracts to obtain the qualifications necessary to hold a university professorship.

Places can be described based on various characteristics, such as whether a given place is forested or barren, its height above sea level, what animals are found there and whether there are buildings, roads or parks. Such information is generally stored in classic geodatabases of maps, satellite images, elevation models, etc. This practice tends to create problems, however, because “the data exist in differing formats, resolutions and grid sizes, so it takes major effort to utilize them in combination,” as Junior Professor Marc Rußwurm of the University of Bonn Institute for Food and Resource Economics explains. “Harmonizing such geodata to make it usable in modern AI methods is a lot of work.” This can mean combining animal photos from camera traps with vegetation, altitude, climate and human infrastructure data in order to predict whether certain species will find suitable habitats there.

AI is learning to better “understand” places

The new Emmy Noether Group will investigate how geodata can be represented within the parameters of artificial neural networks. Rußwurm and his team are developing AI methods to synthesize such different data types to derive a uniform geospatial representation. The goal is for AI to achieve a better “understanding” of places than it currently has. “People often rely on pictures and maps to get a sense of what a place is like without actually being there themselves—whether warm or cold, green or intensively developed, crowded or deserted. Our work is aimed at enabling AI to use this kind of data to know more about places in similar fashion.”

The new AI methods developed have diverse application potential, such as allowing more precise urban quality-of-life analysis by correlating location characteristics with resident satisfaction data or real estate prices. “By drawing on different kinds of geoinformation, AI could also project what coastlines and beaches are subject to elevated plastic waste levels,” Rußwurm observes. Global mapping of vegetation and settled areas could also be made more precise, as AI would be more aware of regional differences.

Transdisciplinary AI research

The breadth of possible applications indicates the transdisciplinary nature of this work, which is why the University of Bonn is an ideal research center for it. Rußwurm, who moved here from the Netherlands at the start of the year, will be collaborating with colleagues from different disciplines within the framework of the University of Bonn’s Transdisciplinary Research Areas (TRAs) Modelling and Sustainable Futures. The collaborative purpose is to study how AI methods can be employed to more effectively evaluate local biodiversity, gauge microplastic soil content over large areas, represent global Earth gravity in AI models and reveal how biodiversity and other environmental changes correlate with political and societal decision-making processes. “What makes the University of Bonn so attractive to me is how fundamental research and applied research really go hand in hand here.”

Bio

Marc Rußwurm has been a junior research group leader at the University of Bonn’s Machine Learning in Earth Observation (MEO) Lab since February 2026. His previous position was Assistant Professor of Machine Learning and Remote Sensing at Wageningen University, and he has a background in geodesy and geoinformation. Starting in September 2026, Rußwurm will head the Emmy Noether Group “Earth Embeddings: Learning Concept Maps in Neural Nets,” backed by roughly 800,000 euros in initial funding from the German Research Foundation (DFG) for a three-year period. Around 600,000 euros in follow-on funding may be granted for a three-year extension after a successful interim evaluation. The funding is provided as part of the DFG’s involvement in the Global Minds Initiative Germany by the Federal Ministry of Research, Technology and Space.

Tired of swiping? Now an AI simulation helps us understand why


Screen logging tells us where smartphone users tap and swipe, but now researchers have developed a musculoskeletal model that helps understand the physical effort that goes into these motions.




Aalto University

Video (Log2Motion): The researchers hope that human simulations will be adopted to help design interactions that are more ergonomic and pleasant for users. (Credit: Antti Oulasvirta / Aalto University)





Prolonged scrolling is bad for your well-being, but is it also physically tiring? Until now, we haven’t really been able to say. This is why researchers from Aalto and Leipzig Universities created a new AI model that simulates muscle activations and energy use to work out how physically effortful smartphone interactions are for users.

‘It’s the first time anyone has developed a tool that can help designers and developers quickly assess how physically tiring a real mobile user interface could be,’ says Antti Oulasvirta, Professor at Aalto University and ELLIS Institute Finland. ‘So far, smartphone logs have only told us where a finger has touched the screen – not whether or not it’s felt comfortable.’

To bridge this gap, Oulasvirta and his colleagues at Leipzig University developed Log2Motion, an AI model that translates smartphone logs into simulated human motion. Movement of this musculoskeletal simulation is based on data from previous motion capture studies.

In the simulation, a human model consisting of digital bones and muscles moves its index finger to interact with a smartphone lying on a desk. Through a software emulator, the model can use real mobile apps in real time. It can re-enact logs collected from users to illuminate what happened during the interaction. The Log2Motion model then estimates the motion, speed, accuracy and effort of these biomechanical movements.

The model provides entirely new horizons for smartphone use research – as well as design.

'We found that some gestures are harder to perform – in this case, up-down and down-up swipes,' explains Oulasvirta. 'Small icons and locations toward the corners of the display also require additional effort.'

Using such simulation early in the process could help designers create user-friendly interfaces. It can also provide insight into accessibility needs for users with tremors, reduced strength or prosthetics.

‘It is possible to scale the Log2Motion model to simulate other scenarios, such as the classic one of lying on the couch, holding the phone in one hand and scrolling with the thumb,’ Oulasvirta says.

The researchers hope that human simulations will be adopted to help design interactions that are more ergonomic and pleasant for users. In the future, these simulations could be combined with other AI methods to optimise user interfaces to a user’s needs.

The paper, 'Log2Motion: Biomechanical Motion Synthesis from Touch Logs', will be presented on April 17 at CHI 2026, the leading conference on human–computer interaction. 



More resources

Contact: Antti Oulasvirta, Professor, Aalto University, antti.oulasvirta@aalto.fi

About Aalto University

Aalto University is where science and art meet technology and business. We shape a sustainable future by making research breakthroughs in and across our disciplines, sparking the game changers of tomorrow and creating novel solutions to major global challenges. Our community is made up of 16,000 students and 5,200 employees, including 446 professors. Our campus is in Espoo, Greater Helsinki, Finland.  



Language-model-guided robotic boxes advance perovskite solar cell discovery



Higher Education Press
Image: Conceptual scheme of materials intelligence. (Credit: Zijian Chen, Wenjin Yu, Chuang Wu et al.)





Perovskite solar cells have emerged as one of the most promising next-generation photovoltaic technologies, but their development still depends heavily on time-consuming trial-and-error synthesis and labor-intensive device fabrication. Researchers have already explored more than one hundred thousand recipes to improve device performance, yet the formulas remain complex, additives are highly diverse, and crystallization is extremely sensitive to environmental conditions. As a result, fabrication remains difficult to control, while the related physical and chemical mechanisms are still not fully understood. Although high-throughput robotic systems can accelerate data collection, they often struggle to analyze rapidly growing numerical datasets effectively or to provide timely feedback for semantic recipe optimization and mechanistic reasoning at the device scale.

Researchers from the Hong Kong Polytechnic University and collaborating institutions report an agentic robotics system for perovskite solar cell research in Engineering in 2026. The work combines a language agent, a domain-specific recipe language model (RLM), and 11 interconnected robotic boxes within a unified framework for synthesis, fabrication, characterization, and feedback-driven optimization. Using this system, the team carried out 50,764 perovskite solar cell device experiments, achieved a champion power conversion efficiency of 27.0%, with a certified value of 26.5%, and generated more than 578 million tokens to strengthen recipe recommendation and mechanistic reasoning.

At the core of the study is the idea that robotic experimentation should do more than automate repeated operations. The researchers designed a seven-layer artificial intelligence (AI) architecture covering learning, generating, RecipeQA, fine-tuning, reasoning, evaluation, and optimization. Within this framework, both numerical and semantic recipes can be continuously learned from literature corpora and robot-generated corpora, enabling iterative refinement of the RLM. Formulas and parameters are encoded into machine-readable recipes, translated into robot-executable commands, and returned as structured feedback after fabrication and characterization. In this way, the system establishes a closed-loop workflow linking recommendation, execution, validation, and model improvement.
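The closed-loop workflow described above can be sketched in a few lines. This is a schematic illustration only: the recipe fields, the toy "response surface," and all function names are invented for the example, since the system's actual recipe format and optimization logic are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Recipe:
    precursors: dict        # e.g. {"PbI2": 1.5} -- illustrative only
    additives: dict
    anneal_temp_c: float
    anneal_time_s: float

@dataclass
class Result:
    recipe: Recipe
    efficiency_pct: float   # measured power conversion efficiency

def closed_loop(propose, execute, rounds=3):
    """Sketch of the recommend -> execute -> validate -> refine loop.

    `propose(history)` plays the role of the recipe language model and
    `execute(recipe)` stands in for the robotic boxes; both are supplied
    by the caller.
    """
    history = []
    for _ in range(rounds):
        recipe = propose(history)   # model recommends a recipe
        result = execute(recipe)    # robots fabricate and characterize it
        history.append(result)      # structured feedback for refinement
    return max(history, key=lambda r: r.efficiency_pct)

# Toy stand-ins: the proposer sweeps anneal temperature, and the "lab"
# returns an efficiency that peaks at 110 degrees C.
def propose(history):
    temp = 100.0 + 10.0 * len(history)
    return Recipe({"PbI2": 1.5}, {}, temp, 1800.0)

def execute(recipe):
    eff = 25.0 - abs(recipe.anneal_temp_c - 110.0) * 0.1
    return Result(recipe, eff)

best = closed_loop(propose, execute)
print(best.recipe.anneal_temp_c)  # 110.0
```

In the real system, the proposer is the trained RLM, execution spans 11 robotic boxes with thousands of controllable parameters, and the feedback also includes characterization data used to retrain the model, not just a single efficiency number.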

The hardware system upgrades an earlier robotic synthesis system into a full-device fabrication system for perovskite solar cells. A digital twin serves as a real-time software–hardware interface, translating model-generated recipes into executable robotic instructions while synchronizing experimental states and feedback. The 11 robotic boxes form an enclosed and interconnected environment for synthesis, fabrication, and characterization. Altogether, the system includes 101 functional modules, more than 1,500 components, and 4,300 controllable parameters, reconstructing traditionally fragmented glovebox-based manual operations into coupled robotic execution.

According to the researchers, the key advance is the integration of three capabilities within one closed-loop AI–robotics framework: controllable fabrication of full perovskite solar cell devices by robotic boxes, robotic characterization that converts high-throughput experimental outputs into structured mechanism-related evidence, and a domain-specific RLM that is continuously trained to improve recipe recommendation, mechanistic reasoning, and subsequent robotic execution.

The significance of the work extends beyond perovskite photovoltaics. By integrating a language agent, an RLM, robotic fabrication, robotic characterization, and feedback-driven optimization into one research framework, the study provides a practical route toward next-generation materials research tools. More broadly, this work highlights a paradigm shift away from manual discovery, providing a scalable architectural foundation for materials intelligence. In the longer term, such AI and robotics systems could be deployed in extreme environments to support on-site intelligent materials manufacturing.

The article, titled “Agentic Robotic Boxes for Perovskite Solar Cell Fabrication with Recipe Language Model,” was authored by Zijian Chen, Wenjin Yu, Chuang Wu, Feibei Chen, Zixuan Wang, Chao Zhou, Yimeng You, Shaojie Li, Qiyuan Zhu, Ning Ma, Yao Sun, Donghui Li, Billy Fanady, Shengchou Jiang, Zhongliang Yan, Shumin Zhou, Liang Li, Chang-Yu Hsieh, Yang Bai, Lixin Xiao, Chi-yung Chung, Ching-chuen Chan, Zhanfeng Cui, Michael Grätzel, Haitao Zhao. It was published in the journal Engineering. Full text of the open access paper: https://doi.org/10.1016/j.eng.2026.04.002. For more information about Engineering, visit the website at https://www.sciencedirect.com/journal/engineering.
