Friday, October 31, 2025

  

Can Artificial Intelligence manage humans without dehumanizing them?


Algorithms that threaten worker dignity, autonomy and discretion are quietly reshaping how people are managed at work, warns new research from the University of Surrey


University of Surrey






The study, published in the Annals of Tourism Research, finds that Artificial Intelligence (AI)–driven management systems can be made more human – if organisations reintroduce human judgement, transparency and flexibility into how algorithms are designed and used. 

Drawing on interviews with 30 hospitality professionals and developers, and an analysis of 61 algorithmic management systems used across hotels, restaurants and call centres, the research details how AI does not automatically replace managers but quietly redistributes authority. Algorithms make decisions about tasks, performance and scheduling, but the human managers who interpret, adapt or challenge these outputs determine whether workplaces become more empowering or more oppressive. 

Dr Brana Jianu, Research Fellow at the University of Surrey and co-author of the study, said: 

“Algorithmic management doesn’t have to strip work of its humanity. When managers use algorithms as tools for collaboration rather than control, they can protect employee dignity while still improving efficiency. The key is to keep people in the loop – explaining how systems work, encouraging discretion, and giving staff the power to question automated decisions.” 

The research introduces the concept of Modalities of (In)Visibility to describe how algorithms shape what is seen, measured and valued at work – and whose interests are prioritised. When algorithms highlight context and allow for human interpretation, staff feel empowered and respected. When the logic behind the system is hidden, workers are more likely to feel surveilled and powerless. 

Professor Iis Tussyadiah, Dean of Surrey Business School and co-author of the study, said: 

“We need to design dashboards that show not just individual productivity but also team collaboration; to allow employees to challenge or amend automated allocations; and to hold transparency sessions that explain how data is used to make scheduling or evaluation decisions. 

“Humanising AI at work depends less on the technology itself and more on how organisations use it. As the hospitality sector becomes a testing ground for AI management, the lessons learned could reshape workplaces far beyond hotels and restaurants.” 
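As an editorial illustration of what “keeping people in the loop” could look like in software (a minimal sketch, not taken from the study; all names and fields are hypothetical), an allocation record can carry the algorithm's rationale and remain open to a logged human override:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ShiftAssignment:
    """One algorithmic scheduling decision, kept open to human challenge."""
    employee: str
    shift: str
    day: date
    rationale: str                    # why the algorithm proposed this; shown to staff
    overridden_by: str | None = None  # who challenged the allocation, if anyone
    override_reason: str | None = None

    def challenge(self, who: str, reason: str, new_shift: str) -> None:
        """Let a manager or employee amend the automated allocation, keeping an audit trail."""
        self.overridden_by = who
        self.override_reason = reason
        self.shift = new_shift

# The algorithm proposes and explains; people keep the power to question it.
a = ShiftAssignment("Sam", "late", date(2025, 11, 3),
                    rationale="Forecast demand peak 18:00-22:00; Sam is under weekly hours")
a.challenge(who="Sam", reason="Childcare commitment on Mondays", new_shift="early")
```

Because the rationale and any override are stored together, the same record could feed both a team dashboard and the transparency sessions described above.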

[ENDS] 



New Wiley guidelines give researchers clear path forward in responsible AI use



Informed by community feedback, the detailed guidance addresses research methodology and peer review, while setting standards for disclosure and reproducibility



Wiley





Wiley (NYSE: WLY), a global leader in authoritative content and research intelligence, has set new standards for responsible and intentional AI use, delivering comprehensive guidelines specifically designed with and for research authors, journal editors, and peer reviewers.

As AI usage among researchers surges to 84%, Wiley is responding directly to the pressing need for publisher guidance articulated by 73% of respondents in the most recent ExplanAItions study. Building on similar advisement for book authors published in March 2025, and shaped by ExplanAItions findings, Wiley’s new guidance draws from more than 40 in-depth interviews with research authors and editors across various disciplines, as well as the company’s experts in AI, research integrity, copyright and permissions.

It offers the following research-specific provisions:

  • Disclosure Standards: Detailed disclosure requirements with practical examples show researchers exactly when and how to disclose AI use—covering drafting and editing, study design, data collection, literature review, data analysis, and visuals. This guidance treats disclosure as an enabling practice, not a barrier, helping researchers use AI confidently and responsibly.
  • Peer Review Confidentiality Protections: Clear prohibitions on uploading unpublished manuscripts to AI tools, while providing guidance on responsible AI applications for reviewers and editors. This outlines areas where AI use is and is not appropriate in the peer review process.
  • Image Integrity Rules: Explicit prohibition of AI-edited photographs in journals, with clear distinctions between permitted conceptual illustrations and factual/evidential images that require verifiable accuracy, providing clarity on AI use for image generation in various contexts.
  • Reproducibility Framework: Comprehensive advice as to which AI uses require disclosure, helping researchers understand when transparency is necessary for scientific evaluation.

"Researchers need clear frameworks for responsible AI use. We've worked directly with the community to create them, setting new standards that will benefit everyone involved in the creation and consumption of scientific content,” said Jay Flynn, Executive Vice President and General Manager, Research & Learning at Wiley. “By partnering with the research community from the start, we're ensuring these AI guidelines are grounded in the realities researchers navigate every day while continuing to protect the integrity of the scientific record."

As the research publishing industry experiences rapid AI adoption, these guidelines will serve as a model for responsible AI integration across the sector. They emphasize that AI use should not result in automatic manuscript rejection. Instead, editorial evaluation should focus on research quality, integrity, and transparency, using disclosure as a routine, intentional practice. Beyond establishing standards, the guidelines provide practical examples, workflow integration tips, and decision-making frameworks.

This advisement is a key component of Wiley's comprehensive, coordinated effort to support researchers as AI transforms scientific discovery. The Wiley AI Gateway, launched earlier this month, allows scholars to access peer-reviewed research directly within their AI workflows, while the ongoing ExplanAItions study provides continuous benchmarks on researcher perspectives and needs. The company has also established core AI principles that guide its journey as it continues to integrate AI features into its products and platforms. Together, these initiatives showcase Wiley’s commitment to serving as a partner to the research community as it navigates technological change responsibly.

###

About Wiley
Wiley (NYSE: WLY) is a global leader in authoritative content and research intelligence for the advancement of scientific discovery, innovation, and learning. With more than 200 years at the center of the scholarly ecosystem, Wiley combines trusted publishing heritage with AI-powered platforms to transform how knowledge is discovered, accessed, and applied. From individual researchers and students to Fortune 500 R&D teams, Wiley enables the transformation of scientific breakthroughs into real-world impact. From knowledge to impact—Wiley is redefining what's possible in science and learning. Visit us at Wiley.com and Investors.Wiley.com. Follow us on Facebook, X, LinkedIn, and Instagram.


Chapters in new book focus on ‘cone of automation’ for GenAI




Analyses have implications for managers



Carnegie Mellon University






Technological anxiety is at least as old as the industrial revolution, so it is no surprise that the rapid development of generative artificial intelligence (genAI) products has spurred research and analysis on the impact this technology will have on labor markets. In chapters in a new book, researchers examine how the structure of tasks can facilitate or impede the adoption of genAI, how workers of different types choose to use genAI, and where workers are likely to look for jobs if they are displaced from their work by genAI. GenAI will likely widen the “cone of automation” by substituting for labor in more complex work or in work that occurs less frequently, the authors conclude.

The chapters, written by researchers at Carnegie Mellon University, the University of Southern California, and the University of Pennsylvania, appear in The Oxford Handbook of the Foundations and Regulation of Generative AI.

“Our conceptualization of a cone of automation provides a simple visual representation of where automation is expected to occur, given the characteristics of a technology,” explains Ramayya Krishnan, professor of management science and information systems at, and emeritus dean of, Carnegie Mellon’s Heinz College, who coauthored the chapter. “Relevant dimensions are the overall output, or frequency at which a step needs to be completed, and the length of the step as currently configured in production.”

The cone of automation highlights several facts: 1) Automation is more likely to occur in steps performed at high frequency; this is intuitive, since the benefits of a machine are more likely to be realized when the machine is working at high capacity. 2) Automation is more likely to occur for “middle-length” steps; only when output grows does it become more likely that easy, short steps will be automated. 3) People are more likely to be at an economic advantage when dealing with particularly long, complex steps.
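As a rough illustration of this decision rule (our sketch, not the authors’ model; the thresholds are invented), the cone can be expressed as a simple test over the two dimensions, step frequency and step length:

```python
def inside_cone(frequency_per_day: float, step_minutes: float,
                min_freq: float = 50.0,
                min_len: float = 1.0, max_len: float = 30.0) -> bool:
    """Hypothetical cone-of-automation test over two dimensions.

    A step is a likely automation target when it runs often enough to keep
    a machine busy and is of 'middle' length: very short steps only pay off
    at high output, and very long, complex steps remain a human advantage.
    """
    return frequency_per_day >= min_freq and min_len <= step_minutes <= max_len

inside_cone(200, 5)    # True: frequent, middle-length step
inside_cone(200, 120)  # False: long, complex step stays with people
inside_cone(3, 5)      # False: too infrequent to keep a machine busy
```

On this stylized reading, a technology that relaxes max_len or min_freq widens the cone, while high failure costs tighten it again.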

The authors suggest that genAI will widen the cone of automation in this way because it can substitute for labor in more complex or less frequent work. When the costs of failure are high, however, businesses will probably adopt genAI more sparingly because of the randomness of its outputs. In that case, the cone of automation would narrow, and genAI would play an explicitly complementary role in which a person oversees its work.

“GenAI differs considerably from classical machines in that it is more general and more useful but also more prone to errors,” notes Laurence Ales, professor of economics at Carnegie Mellon’s Tepper School of Business, who coauthored the chapter. “These features inform the potential patterns businesses will use in adopting genAI, including whether it will substitute for or complement existing workers.”

The technical feasibility of automation using genAI is not, on its own, enough to explain these adoption patterns, according to the authors. The economic conditions for adoption depend on the interaction of technical features with process structure. The cost and benefit of dividing tasks drive how firms currently organize work and define jobs, and measures of occupational exposure to genAI or other technologies must consider the relative frequency and separability of tasks, as the sketch below illustrates.
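One way to picture such a measure (a sketch under our own assumptions, not the chapter’s method): weight each task’s technical exposure by the share of the job it occupies, discounted by how separable it is from the rest of the work.

```python
def occupation_exposure(tasks: list[dict]) -> float:
    """Hypothetical exposure score: share-weighted task exposure,
    discounted when a task cannot be split off from the rest of the job.

    Each task dict: {'share': fraction of work time,
                     'exposure': 0-1 technical feasibility of genAI substitution,
                     'separability': 0-1 ease of carving the task out}.
    """
    return sum(t["share"] * t["exposure"] * t["separability"] for t in tasks)

clerk = [
    {"share": 0.6, "exposure": 0.9, "separability": 0.8},  # routine drafting
    {"share": 0.4, "exposure": 0.7, "separability": 0.2},  # judgement woven into other duties
]
occupation_exposure(clerk)  # ~0.49, well below the naive share-weighted 0.82
```

The gap between the naive and the discounted score shows why technical feasibility alone overstates exposure when tasks are hard to separate.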

In the long run, the use of genAI will influence the quality of data available for training future models, the authors note. GenAI is often mediated by human users with different levels of skill: the more genAI is used by workers who are less able to identify and correct errors in its output (increasing the quantity of low-quality output in circulation), the more the quality of future training corpora is likely to degrade, they predict.

“We may expect a divergence in genAI quality, in which lower data quality further reduces the complementarity of the technology with high skill, whereas contexts with high error standards will see narrower and perhaps slower diffusion of genAI but higher long-run complementarity with high skill and high data quality,” says Christophe Combemale, research professor of engineering and public policy at Carnegie Mellon, who coauthored the chapter.

Finally, the authors consider the potential shape of occupational disruption due to genAI. A network view of occupations is needed to anticipate outcomes for disrupted workers, they suggest. Even occupations not directly disrupted by genAI may experience competition and wage losses if they become targets for workforce transitions out of disrupted occupations. Conversely, the resilience of labor markets in providing employment for disrupted workers will depend on having a sufficient density of alternative, less AI-substitutable occupations into which workers can transition.
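A toy version of that network view (illustrative only; the occupations, exposure values, and transitions are invented) treats occupations as nodes and feasible worker transitions as edges, then asks how many low-exposure escape routes a disrupted occupation has:

```python
# Occupation -> (genAI exposure, feasible transition targets); all values invented.
occupations = {
    "copywriter":    (0.8, ["editor", "pr_officer"]),
    "editor":        (0.5, ["pr_officer", "teacher"]),
    "pr_officer":    (0.4, ["editor", "event_manager"]),
    "teacher":       (0.2, ["event_manager"]),
    "event_manager": (0.3, ["teacher"]),
}

def escape_routes(job: str, max_exposure: float = 0.5) -> list[str]:
    """Neighbouring occupations a disrupted worker could move into
    that are themselves not highly genAI-substitutable."""
    _, neighbours = occupations[job]
    return [n for n in neighbours if occupations[n][0] <= max_exposure]

escape_routes("copywriter")  # ['editor', 'pr_officer']: some resilience
```

Occupations that absorb many such transitions may in turn face the indirect competition and wage pressure the authors describe.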


SCALE: a novel framework of educational objectives in the AI era




The authentic intelligence system covers scientific literacy, computational thinking, AI literacy, metacognition, and engineering and design thinking



East China Normal University, TAO

[Image: SCALE Taxonomy of Educational Objectives. Caption: The researcher redefines educational objectives in the age of artificial intelligence in this novel study. Credit: Weipeng Yang, The Education University of Hong Kong. Source: https://doi.org/10.1016/j.tao.2025.100018]




Artificial intelligence (AI) has become ubiquitous in the 21st century. Its widespread popularity and utility have redefined the skills necessary for personal and professional success. Clearly, it is crucial to align modern education with these rapidly changing needs. While conventional educational taxonomies, including Bloom’s Taxonomy, are foundational, they often fail to emphasize capabilities required in a world where AI systems serve as tools and collaborators. These include technological fluency, adaptive learning, and computational reasoning. Therefore, it is urgent to redefine the taxonomy of human intelligence to balance digital intelligence with human-focused qualities, paving the way for authentic intelligence.

In a significant development, Dr. Weipeng Yang from the Faculty of Education and Human Development and the AI, Brain and Child Research Centre at The Education University of Hong Kong, Hong Kong, China, has proposed an innovative taxonomy of authentic intelligence for AI natives—a generation of individuals who grow up surrounded by an AI-driven world—to identify the key dimensions of intelligence that facilitate deep learning and all-around development in the era of AI. His insights were made available online on 19 September 2025 and will be published in November 2025 in Volume 1, Issue 2 of TAO, a newly launched interdisciplinary journal.

In this study, the researcher introduces the SCALE taxonomy of 21st-century authentic intelligence—a novel framework of educational objectives that aligns human learning with the emerging needs of the AI age. It encompasses: Scientific literacy (the ability to find evidence); Computational thinking (complex problem-solving); AI literacy (navigating human-AI collaboration); Learning to learn/metacognition (adaptive learning); and Engineering and design thinking (creating solutions through iteration).

“The SCALE taxonomy of educational objectives emerges from the recognition that education must shift from knowledge transmission to creative projects in the AI age. Its core vision is to cultivate learners who can apply scientific principles to understand the world and the society one lives in, think computationally to solve problems effectively, navigate AI systems ethically and efficiently, adapt their learning strategies in response to evolving technologies, and engage in engineering practices to design solutions for real-world challenges,” explains Dr. Yang.

In these myriad ways, SCALE can comprehensively equip AI natives to engage in meaningful experiences in diverse contexts such as big-group activities, learning centers, redesigned classroom environments, and evidence-based assessment. This learning through creating, via activities such as robotics projects, AI-driven art, and simulations, can effectively connect theoretical knowledge with practical application, facilitating engagement and metacognition in the age of AI.

Dr. Yang says: “As shared vocabulary, SCALE enables educators and researchers to collaborate across disciplines to design authentic learning curricula, advocate for policy changes that prioritize creative, tech-embedded learning, and contribute to a global dialogue on preparing learners for the AI-assisted workforce. As educators and researchers continue to refine SCALE’s applications, the taxonomy’s true value lies in its capacity to nurture resilient, creative thinkers who can shape the AI age rather than be shaped by it.”

Overall, the present framework not only delivers a shared vision for educational innovation but also acts as a practical tool for curriculum development.

 

***

 

Reference

DOI: https://doi.org/10.1016/j.tao.2025.100018

 

About ECNU TAO

TAO is an international, comprehensive, and innovative journal from East China Normal University (ECNU) that explores the “Way” of the world's science, technology, and civilization through quantum thinking and Lao Tzu's philosophy, and focuses on reflecting the revolutionary shifts in thinking systems, technological progress, and social innovation in the era of AI. Its current focus is on the following five disciplines or areas: philosophy, physics, chemistry, education, and AI. As an open-access, continuous-publishing journal, TAO aims to facilitate the circulation of novel knowledge and the communication among top researchers from around the globe.

Website: https://www.sciencedirect.com/journal/tao

 

About Weipeng Yang from The Education University of Hong Kong

Dr. Weipeng Yang is an Associate Professor and a recipient of the President's Outstanding Performance in Research Award at The Education University of Hong Kong. He is the Principal Investigator of the Early Childhood Learning Sciences (ECLS) Lab and an Associate Director of the AI, Brain and Child Research Centre (ABC-RC). As one of the world's top 2% most-cited scientists (Career-long and Single-year Impact ranked by Stanford University and Elsevier) and the Editor-in-Chief of the Journal of Research in Childhood Education, his research on digital technologies and young children's computational thinking has been funded by Hong Kong's Research Grants Council and has shaped early learning experiences in the 21st century.

 

Funding information

This research was funded by the Hong Kong Research Grants Council General Research Fund (RGC/GRF) (Ref. No. 18604423).
