Can Artificial Intelligence manage humans without dehumanizing them?
Algorithms that threaten worker dignity, autonomy and discretion are quietly reshaping how people are managed at work, warns new research from the University of Surrey
University of Surrey
The study, published in the Annals of Tourism Research, finds that Artificial Intelligence (AI)–driven management systems can be made more human – if organisations reintroduce human judgement, transparency and flexibility into how algorithms are designed and used.
Drawing on interviews with 30 hospitality professionals and developers, and an analysis of 61 algorithmic management systems used across hotels, restaurants and call centres, the research details how AI does not automatically replace managers but quietly redistributes authority. Algorithms make decisions about tasks, performance and scheduling, but the human managers who interpret, adapt or challenge these outputs determine whether workplaces become more empowering or more oppressive.
Dr Brana Jianu, Research Fellow at the University of Surrey and co-author of the study, said:
“Algorithmic management doesn’t have to strip work of its humanity. When managers use algorithms as tools for collaboration rather than control, they can protect employee dignity while still improving efficiency. The key is to keep people in the loop – explaining how systems work, encouraging discretion, and giving staff the power to question automated decisions.”
The research introduces the concept of Modalities of (In)Visibility to describe how algorithms shape what is seen, measured and valued at work – and whose interests are prioritised. When algorithms highlight context and allow for human interpretation, staff feel empowered and respected. When the logic behind the system is hidden, workers are more likely to feel surveilled and powerless.
Professor Iis Tussyadiah, Dean of Surrey Business School and co-author of the study, said:
“We need to design dashboards that show not just individual productivity but team collaboration; allow employees to challenge or amend automated allocations; and hold transparency sessions that explain how data is used to make scheduling or evaluation decisions.
“Humanising AI at work depends less on the technology itself and more on how organisations use it. As the hospitality sector becomes a testing ground for AI management, the lessons learned could reshape workplaces far beyond hotels and restaurants.”
[ENDS]
Note to editors:
- The full study has been published in the Annals of Tourism Research
Journal
Annals of Tourism Research
Method of Research
Observational study
Subject of Research
People
Article Title
Humanising algorithmic management systems
New Wiley guidelines give researchers clear path forward in responsible AI use
Informed by community feedback, the detailed guidance addresses research methodology and peer review, while setting standards for disclosure and reproducibility
Wiley
Wiley (NYSE: WLY), a global leader in authoritative content and research intelligence, has set new standards for responsible and intentional AI use, delivering comprehensive guidelines specifically designed with and for research authors, journal editors, and peer reviewers.
As AI usage among researchers surges to 84%, Wiley is responding directly to the pressing need for publisher guidance articulated by 73% of respondents in the most recent ExplanAItions study. Building on similar guidelines for book authors published in March 2025, and shaped by ExplanAItions findings, Wiley’s new guidance draws from more than 40 in-depth interviews with research authors and editors across various disciplines, as well as the company’s experts in AI, research integrity, copyright and permissions.
It offers the following research-specific provisions:
- Disclosure Standards: Detailed disclosure requirements with practical examples show researchers exactly when and how to disclose AI use—covering drafting and editing, study design, data collection, literature review, data analysis, and visuals. This guidance treats disclosure as an enabling practice, not a barrier, helping researchers use AI confidently and responsibly.
- Peer Review Confidentiality Protections: Clear prohibitions on uploading unpublished manuscripts to AI tools, while providing guidance on responsible AI applications for reviewers and editors. This outlines areas where AI use is and is not appropriate in the peer review process.
- Image Integrity Rules: Explicit prohibition of AI-edited photographs in journals, with clear distinctions between permitted conceptual illustrations and factual/evidential images that require verifiable accuracy, providing clarity on AI use for image generation in various contexts.
- Reproducibility Framework: Comprehensive advice as to which AI uses require disclosure, helping researchers understand when transparency is necessary for scientific evaluation.
"Researchers need clear frameworks for responsible AI use. We've worked directly with the community to create them, setting new standards that will benefit everyone involved in the creation and consumption of scientific content,” said Jay Flynn, Executive Vice President and General Manager, Research & Learning at Wiley. “By partnering with the research community from the start, we're ensuring these AI guidelines are grounded in the realities researchers navigate every day while continuing to protect the integrity of the scientific record."
As the research publishing industry experiences rapid AI adoption, these guidelines will serve as a model for responsible AI integration across the sector. They emphasize that AI use should not result in automatic manuscript rejection. Instead, editorial evaluation should focus on research quality, integrity, and transparency, using disclosure as a routine, intentional practice. Beyond establishing standards, the guidelines provide practical examples, workflow integration tips, and decision-making frameworks.
This guidance is a key component of Wiley's comprehensive, coordinated effort to support researchers as AI transforms scientific discovery. The Wiley AI Gateway, launched earlier this month, allows scholars to access peer-reviewed research directly within their AI workflows, while the ongoing ExplanAItions study provides continuous benchmarks on researcher perspectives and needs. The company has also established core AI principles that guide its journey as it continues to integrate AI features into its products and platforms. Together, these initiatives showcase Wiley’s commitment to serving as a partner to the research community as it navigates technological change responsibly.
###
About Wiley
Wiley (NYSE: WLY) is a global leader in authoritative content and research intelligence for the advancement of scientific discovery, innovation, and learning. With more than 200 years at the center of the scholarly ecosystem, Wiley combines trusted publishing heritage with AI-powered platforms to transform how knowledge is discovered, accessed, and applied. From individual researchers and students to Fortune 500 R&D teams, Wiley enables the transformation of scientific breakthroughs into real-world impact. From knowledge to impact—Wiley is redefining what's possible in science and learning. Visit us at Wiley.com and Investors.Wiley.com. Follow us on Facebook, X, LinkedIn and Instagram.
Chapters in new book focus on ‘cone of automation’ for GenAI
Analyses have implications for managers
Carnegie Mellon University
Technological anxiety is at least as old as the industrial revolution, so it is no surprise that the rapid development of generative artificial intelligence (genAI) products has spurred research and analysis on the impact this technology will have on labor markets. In chapters in a new book, researchers examine how the structure of tasks can facilitate or impede the adoption of genAI, how workers of different types choose to use genAI, and where workers are likely to look for jobs if they are displaced from their work due to genAI. GenAI will likely widen the “cone of automation” by substituting for labor in more complex work or in work that occurs less frequently, the authors conclude.
The chapters, written by researchers at Carnegie Mellon University, the University of Southern California, and the University of Pennsylvania, appear in The Oxford Handbook of the Foundations and Regulation of Generative AI.
“Our conceptualization of a cone of automation provides a simple visual representation of where automation is expected to occur, given the characteristics of a technology,” explains Ramayya Krishnan, professor of management science and information systems at, and emeritus dean of, Carnegie Mellon’s Heinz College, who coauthored the chapter. “Relevant dimensions are the overall output, or frequency at which a step needs to be completed, and the length of the step as currently configured in production.”
The cone of automation highlights several facts: 1) Automation is more likely to occur in steps performed at a high frequency; this is intuitive since the benefits of a machine are more likely to be realized when the machine is working at high capacity. 2) Automation is more likely to occur for “middle-length” steps; only when output grows does it become more likely to automate easy steps. 3) People are more likely to hold an economic advantage when dealing with particularly long, complex steps.
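To make the cone concrete, the toy sketch below scores a process step on the two dimensions Krishnan describes, frequency and step length. The thresholds, the ProcessStep fields, and the scoring rule are illustrative assumptions, not figures taken from the book chapter.

```python
# Illustrative sketch only: a toy decision rule for the "cone of automation"
# idea described above. Thresholds and step representation are invented for
# illustration; they are not taken from the chapter itself.
from dataclasses import dataclass

@dataclass
class ProcessStep:
    name: str
    frequency_per_day: float   # how often the step is performed
    length_minutes: float      # duration of the step as currently configured

def automation_candidate(step: ProcessStep,
                         min_frequency: float = 100.0,
                         min_length: float = 5.0,
                         max_length: float = 60.0) -> bool:
    """A step falls inside the toy 'cone' when it is performed frequently and
    is of middle length; very short steps only pay off at high output, and
    very long, complex steps are where people retain an economic advantage."""
    frequent_enough = step.frequency_per_day >= min_frequency
    middle_length = min_length <= step.length_minutes <= max_length
    return frequent_enough and middle_length

# Example: a frequent, mid-length task sits inside the cone; a rare, very
# long task does not.
print(automation_candidate(ProcessStep("invoice triage", 400, 12)))   # True
print(automation_candidate(ProcessStep("contract negotiation", 2, 240)))  # False
```

Widening the cone, as the authors expect genAI to do, would correspond to relaxing the frequency and length thresholds in a sketch like this one.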
GenAI will likely widen the cone of automation by substituting for labor in more complex work or in work that occurs less frequently, the authors suggest. When the costs of failure are high, businesses will probably adopt less genAI due to its randomness. In this case, the cone of automation would narrow and genAI would play an explicitly complementary role that involves having a person oversee its work.
“GenAI differs considerably from classical machines in that it is more general and more useful but also more prone to errors,” notes Laurence Ales, professor of economics at Carnegie Mellon’s Tepper School of Business, who coauthored the chapter. “These features inform the potential patterns businesses will use in adopting genAI, including whether it will substitute for or complement existing workers.”
The technical feasibility of automation using genAI is not enough to explain these adoption patterns, according to the authors. The economic conditions for adoption depend on the interaction of technical features with process structure. The cost and benefit of dividing tasks drive how firms currently organize work and define jobs, and measures of occupational exposure to genAI or other technologies must consider the relative frequency and separability of tasks.
In the long run, the use of genAI will influence the quality of data available for training future models, the authors note. GenAI is often mediated by human users with different levels of skill: The more genAI is used by workers with less ability to identify and correct errors in output (while increasing the quantity of this low-quality output), the more the quality of future training corpuses is likely to degrade, they predict.
“We may expect a divergence in genAI quality, in which lower data quality further reduces the complementarity of the technology with high skill, whereas contexts with high error standards will see narrower and perhaps slower diffusion of genAI but higher long-run complementarity with high skill and high data quality,” says Christophe Combemale, research professor of engineering and public policy at Carnegie Mellon, who coauthored the chapter.
Finally, the authors consider the potential shape of occupational disruption due to genAI. A network view of occupations is needed to anticipate outcomes for disrupted workers, they suggest. Even occupations not directly disrupted by genAI may experience competition and wage losses if they become targets for workforce transitions out of disrupted occupations. Conversely, the resilience of labor markets in providing employment for disrupted workers will depend on having a sufficient density of alternative, less AI-substitutable occupations into which workers can transition.
SCALE: a novel framework of educational objectives in the AI era
The authentic intelligence system covers scientific literacy, computational thinking, AI literacy, metacognition, and engineering and design thinking
East China Normal University, TAO
Image caption: The researcher innovatively redefines educational objectives in the age of artificial intelligence in this novel study.
Credit: Weipeng Yang from The Education University of Hong Kong. Image source link: https://doi.org/10.1016/j.tao.2025.100018
Artificial intelligence (AI) has become ubiquitous in the 21st century. Its widespread popularity and utility have redefined the skills necessary for personal and professional success. Clearly, it is crucial to align modern education with these rapidly changing needs. While conventional educational taxonomies, including Bloom’s Taxonomy, are foundational, they often fail to emphasize aspects required in a world where AI systems serve as tools and collaborators. These aspects include technological fluency, adaptive learning, and computational reasoning. Therefore, it is urgent to redefine the taxonomy of human intelligence to balance digital intelligence with human-focused qualities, paving the way for authentic intelligence.
In a significant development, Dr. Weipeng Yang from the Faculty of Education and Human Development and the AI, Brain and Child Research Centre at The Education University of Hong Kong, Hong Kong, China, has proposed an innovative taxonomy of authentic intelligence for AI natives—a generation of individuals who grow up surrounded by an AI-driven world—to determine the key dimensions of valuable intelligence in facilitating deep learning and all-around development in the era of AI. His novel insights were made available online on 19 September 2025 and will be published in Volume 1, Issue 2 of the journal TAO, a newly launched interdisciplinary journal, in November 2025.
In this study, the researcher introduces the SCALE taxonomy of 21st-century authentic intelligence—a novel framework of new educational objectives that align human learning with the emerging needs of the AI age. It encompasses: Scientific literacy (ability to find evidence); Computational thinking (complex problem-solving); AI literacy (navigating human-AI collaboration); Learning to learn/metacognition (adaptive learning); and Engineering and design thinking (creating solutions through iteration).
“The SCALE taxonomy of educational objectives emerges from the recognition that education must shift from knowledge transmission to creative projects in the AI age. Its core vision is to cultivate learners who can apply scientific principles to understand the world and the society one lives in, think computationally to solve problems effectively, navigate AI systems ethically and efficiently, adapt their learning strategies in response to evolving technologies, and engage in engineering practices to design solutions for real-world challenges,” explains Dr. Yang.
In these myriad ways, SCALE can comprehensively equip AI natives to engage in meaningful experiences in diverse contexts such as big-group activities, learning centers, redesigned classroom environments, and evidence-based assessment. This learning through creating, via activities such as robotics projects, AI-driven art, and simulations, can effectively connect theoretical knowledge with practical application, facilitating engagement and metacognition in the age of AI.
Dr. Yang says: “As shared vocabulary, SCALE enables educators and researchers to collaborate across disciplines to design authentic learning curricula, advocate for policy changes that prioritize creative, tech-embedded learning, and contribute to a global dialogue on preparing learners for the AI-assisted workforce. As educators and researchers continue to refine SCALE’s applications, the taxonomy’s true value lies in its capacity to nurture resilient, creative thinkers who can shape the AI age rather than be shaped by it.”
Overall, the present framework not only delivers a shared vision for educational innovation but also acts as a practical tool for curriculum development.
***
Reference
DOI: https://doi.org/10.1016/j.tao.2025.100018
About ECNU TAO
TAO is an international, comprehensive, and innovative journal from East China Normal University (ECNU) that explores the "Way" of the world's science, technology, and civilization through quantum thinking and Lao Tzu's philosophy, and focuses on reflecting the revolutionary shifts in thinking systems, technological progress, and social innovation in the era of AI. Its current focus is on the following five disciplines or areas: philosophy, physics, chemistry, education, and AI. As an open-access, continuous-publishing journal, TAO aims to facilitate the circulation of novel knowledge and the communication of top researchers from around the globe.
Website: https://www.sciencedirect.com/journal/tao
About Weipeng Yang from The Education University of Hong Kong
Dr. Weipeng Yang is an Associate Professor and a recipient of the President's Outstanding Performance in Research Award at The Education University of Hong Kong. He is the Principal Investigator of the Early Childhood Learning Sciences (ECLS) Lab and an Associate Director of the AI, Brain and Child Research Centre (ABC-RC). As one of the world's top 2% most-cited scientists (Career-long and Single-year Impact ranked by Stanford University and Elsevier) and the Editor-in-Chief of the Journal of Research in Childhood Education, his research on digital technologies and young children's computational thinking has been funded by Hong Kong's Research Grants Council and has shaped early learning experiences in the 21st century.
Funding information
This research was funded by the Hong Kong Research Grants Council General Research Fund (RGC/GRF) (Ref. No. 18604423).
Journal
TAO
Method of Research
Systematic review
Subject of Research
Not applicable
Article Title
Redefining educational objectives in the age of artificial intelligence: The SCALE taxonomy
Integrating professional intellectual property education with curriculum-based ideological and political education in the era of AI
Higher Education Press
This paper addresses the integration of professional intellectual property (IP) education with curriculum-based ideological and political education in the context of digital transformation and AI. It identifies existing challenges in IP teaching, including monotonous content, insufficient practical integration, and limited international perspective. To overcome these, the authors propose a novel pedagogical framework centered on lifelong learning, practice-driven instruction, and an international outlook, enhanced by AI technologies.
The proposed approach leverages AI to create intelligent, personalized learning platforms that facilitate the synergistic development of IP knowledge and ideological-political values. The study argues that this integration not only improves teaching quality but also cultivates students’ legal awareness, innovation capacity, social responsibility, and practical skills. By aligning professional education with ideological goals through technology-enhanced methods, the paper provides a comprehensive theoretical foundation and practical model for advancing high-quality talent development in professional degree programs in the new era.
The work, titled “Integrating Professional Intellectual Property Education with Curriculum-Based Ideological and Political Education in the Era of AI”, was published in Frontiers of Digital Education on July 4, 2025.
Reference:
Yingxue Ren, Jingwen Ren, Yun Chen, Quanwei Liu. Integrating Professional Intellectual Property Education with Curriculum-Based Ideological and Political Education in the Era of AI. Frontiers of Digital Education, 2025, 2(3): 28
https://doi.org/10.1007/s44366-025-0065-8
Journal
Frontiers of Digital Education
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
Integrating Professional Intellectual Property Education with Curriculum-Based Ideological and Political Education in the Era of AI
Researchers pose five guiding questions to improve the use of artificial intelligence in physicians’ clinical decision-making
While able to catch what doctors may miss, artificial intelligence can also make mistakes and lead to incorrect diagnoses; it should support clinical judgement, not replace it
Artificial Intelligence (AI) can be a powerful tool to help physicians diagnose their patients, with great potential to improve accuracy, efficiency and patient safety, but it has its drawbacks. It may distract doctors, give them too much confidence in the answers it provides, and even lead them to lose confidence in their own diagnostic judgement.
To ensure that AI is properly integrated into healthcare practice, a research team has provided a framework comprising five guiding questions aimed at supporting doctors in their patient care while not undermining their expertise through an over-reliance on AI. The framework was recently published in the peer-reviewed Journal of the American Medical Informatics Association.
“This paper moves the discussion from how well the AI algorithm performs to how physicians actually interact with AI during diagnosis,” said senior author Dr. Joann G. Elmore, professor of medicine in the division of general internal medicine and health services research and Director of the National Clinician Scholars Program at the David Geffen School of Medicine at UCLA. “This paper provides a framework that pushes the field beyond ‘Can AI detect disease?’ to ‘How should AI support doctors without undermining their expertise?’ This reframing is an essential step toward safer and more effective adoption of AI in clinical practice.”
AI-related errors do happen, yet it remains unclear why these tools can fail to improve diagnostic decision-making when implemented in clinical practice.
To find out why, the researchers propose five questions to guide research and development to prevent AI-linked diagnostic errors. The questions to ask are: What type and format of information should AI present? Should it provide that information immediately, after the physician’s initial review, or only when toggled on by the physician? How does the AI system show how it arrives at its decisions? How does it affect bias and complacency? And finally, what are the risks of long-term reliance on it?
These questions are important to ask because:
- Format affects doctors’ attention, diagnostic accuracy, and possible interpretive biases
- Immediate information can lead to a biased interpretation while delayed cues may help maintain diagnostic skills by allowing physicians to more fully engage in a diagnosis
- How the AI system arrives at a decision can highlight features that were ruled in or out, provide “what-if” types of explanations, and more effectively align with doctors’ clinical reasoning
- When physicians lean too much on AI, they may rely less on their own critical thinking, letting an accurate diagnosis slip by
- Long-term reliance on AI may erode a doctor’s learned diagnostic abilities
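As a concrete illustration of the timing question above, the short sketch below encodes the three presentation modes named in the framework and gates AI findings accordingly. It is a hypothetical design sketch, not a description of the authors' framework or of any existing clinical system; the names and logic are assumptions.

```python
# Hedged sketch: one way a decision-support tool could encode the timing
# question raised above - whether AI output is shown immediately, only after
# the physician records an initial impression, or only on demand.
# All names and logic here are illustrative assumptions.
from enum import Enum, auto

class AiTiming(Enum):
    IMMEDIATE = auto()             # show AI findings alongside the case from the start
    AFTER_INITIAL_REVIEW = auto()  # withhold until an initial impression is logged
    ON_DEMAND = auto()             # physician toggles the AI overlay on and off

def should_show_ai_output(timing: AiTiming,
                          initial_impression_recorded: bool,
                          physician_requested: bool) -> bool:
    """Gate AI findings so that delayed or on-demand modes let the physician
    engage with the case first, as the second bullet above suggests."""
    if timing is AiTiming.IMMEDIATE:
        return True
    if timing is AiTiming.AFTER_INITIAL_REVIEW:
        return initial_impression_recorded
    return physician_requested  # ON_DEMAND

# Example: in the delayed mode, nothing is shown until the physician has
# committed to an initial impression.
print(should_show_ai_output(AiTiming.AFTER_INITIAL_REVIEW, False, True))  # False
print(should_show_ai_output(AiTiming.AFTER_INITIAL_REVIEW, True, False))  # True
```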
The next steps toward improving AI for diagnostic purposes are to evaluate different designs in clinical settings, study how AI affects trust and decision-making, observe doctors’ skill development when AI is used in training and clinical practice, and develop systems that self-adjust how they assist physicians.
“AI has huge potential to improve diagnostic accuracy, efficiency, and patient safety, but poor integration could make healthcare worse instead of better,” Elmore said. “By highlighting the human factors like timing, trust, over-reliance, and skill erosion, our work emphasizes that AI must be designed to work with doctors, not replace them. This balance is crucial if we want AI to enhance care without introducing new risks.”
Co-authors are Tad Brunyé of Tufts University and Stephen Mitroff of George Washington University.
The research was supported by the National Cancer Institute of the National Institutes of Health (R01 CA288824, R01 CA225585, R01 CA172343, and R01 CA140560).
Journal
Journal of the American Medical Informatics Association
Method of Research
Commentary/editorial
Subject of Research
People
Article Title
Artificial intelligence and computer-aided diagnosis in diagnostic decisions: 5 questions for medical informatics and human-computer interface research
Article Publication Date
27-Oct-2025
AI models for drug design fail in physics
University of Basel
Proteins play a key role not only in the body, but also in medicine: they either serve as active ingredients, such as enzymes or antibodies, or they are target structures for drugs. The first step in developing new therapies is therefore usually to decipher the three-dimensional structure of proteins.
For a long time, elucidating protein structures was a highly complex endeavor, until machine learning found its way into protein research. AI models with names such as AlphaFold or RosettaFold have ushered in a new era: they calculate how the chain of protein building blocks, known as amino acids, folds into a three-dimensional structure. In 2024, the developers of these programs received the Nobel Prize in Chemistry.
Suspiciously high success rate
The latest versions of these programs go one step further: they calculate how the protein in question interacts with another molecule – a docking partner or “ligand”, as experts call it. This could be an active pharmaceutical ingredient, for example.
“This possibility of predicting the structure of proteins together with a ligand is invaluable for drug development,” says Professor Markus Lill from the University of Basel. Together with his team at the Department of Pharmaceutical Sciences, he researches methods for designing active pharmaceutical ingredients.
However, the apparently high success rates for structural prediction puzzled Lill and his team, especially as only around 100,000 already elucidated protein structures together with their ligands are available for training the AI models – relatively few compared with other AI training data sets. “We wanted to find out whether these AI models really learn the basics of physical chemistry using the training data and apply them correctly,” says Lill.
Same prediction for significantly altered binding sites
The researchers modified the amino acid sequence of hundreds of sample proteins in such a way that the binding sites for their ligands exhibited a completely different charge distribution or were even blocked entirely. Nevertheless, the AI models predicted the same complex structure – as if binding were still possible. The researchers pursued a similar approach with the ligands: they modified them in such a way that they would no longer be able to dock to the protein in question. This did not bother the AI models either.
In more than half of the cases, the models predicted the structure as if the interferences in the amino acid sequence had never occurred. “This shows us that even the most advanced AI models do not really understand why a drug binds to a protein; they only recognize patterns that they have seen before,” says Lill.
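The logic of this stress test can be sketched in a few lines of code. The snippet below is an illustrative reconstruction, not the authors' published code: predict_complex stands in for whichever co-folding model is being tested, and the RMSD threshold is an assumed value.

```python
# Hedged sketch of a perturbation-style sanity check for co-folding models,
# as described in the press release. `predict_complex` is a hypothetical
# stand-in for any structure-prediction model; it is assumed to return
# ligand-atom coordinates as an (N, 3) array with matched atom ordering.
import numpy as np

def ligand_rmsd(pose_a: np.ndarray, pose_b: np.ndarray) -> float:
    """Root-mean-square deviation between two ligand poses."""
    return float(np.sqrt(np.mean(np.sum((pose_a - pose_b) ** 2, axis=1))))

def physics_sensitivity_check(predict_complex, wt_sequence: str, ligand: str,
                              mutant_sequences: list[str],
                              threshold: float = 2.0) -> list[str]:
    """Flag mutants where binding-site-disrupting changes leave the predicted
    ligand pose essentially unchanged (RMSD below `threshold`, in angstroms)."""
    wt_pose = predict_complex(wt_sequence, ligand)
    insensitive = []
    for mut_seq in mutant_sequences:
        mut_pose = predict_complex(mut_seq, ligand)
        if ligand_rmsd(wt_pose, mut_pose) < threshold:
            # The model reproduced the original pose despite a disrupted site,
            # suggesting pattern recall rather than physical reasoning.
            insensitive.append(mut_seq)
    return insensitive
```

A model that had learned the underlying physical chemistry would be expected to produce clearly different poses, or no confident pose at all, for the disrupted variants.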
Unknown proteins are particularly difficult
The AI models faced particular difficulties if the proteins did not show any similarity to the training data sets. “When they see something completely new, they quickly fall short, but that is precisely where the key to new drugs lies,” emphasizes Markus Lill.
AI models should therefore be viewed with caution when it comes to drug development. It is important to validate the predictions of the models using experiments or computer-aided analyses that actually take the physicochemical properties into account. The researchers also used these methods to examine the results of the AI models in the course of their study.
“The better solution would be to integrate the physicochemical laws into future AI models,” says Lill. With their more realistic structural predictions, these could then provide a better basis for the development of new drugs, especially for protein structures that have so far been difficult to elucidate, and would open up the possibility of completely new therapeutic approaches.
Journal
Nature Communications
Article Title
Investigating whether deep learning models for co-folding learn the physics of protein-ligand interactions
Article Publication Date
6-Oct-2025

