Saturday, January 24, 2026

ALIENATION

Why a crowded office can be the loneliest place on earth





Portland State University





A comprehensive new review published in the Journal of Management synthesizes decades of research to understand the epidemic of workplace loneliness. By analyzing 233 empirical studies, researchers from Portland State University have identified how workplace conditions contribute to isolation and offer evidence-based paths to reconnection.

The research emphasizes that loneliness is distinct from social isolation. While isolation is about being alone, loneliness is the subjective feeling that one’s social relationships are deficient—meaning employees can feel deeply lonely even in a crowded office.

"Given the connection between workplace characteristics and loneliness, organizations should consider that loneliness is not a personal issue, and instead is a business issue," said Berrin Erdogan, professor of management at Portland State. "Businesses have an opportunity to design jobs and organizations in a way that will prioritize employee relational well being."

Key Findings:

  • The "Hunger" Signal: Like hunger signals a need for food, temporary loneliness is a biological signal encouraging us to seek connection. However, when loneliness becomes chronic, it harms emotional and cognitive well-being.

  • The Employment Paradox: Generally, having a job keeps loneliness at bay; unemployed and retired individuals report higher levels of loneliness than the employed. However, the quality of the job matters. Roles with high stress, low autonomy, and poor support from managers are major risk factors.

  • The Ripple Effect: Loneliness is contagious in leadership. The study found that lonely managers are not only less effective but can harm the well-being of their employees.

"Work can be a sanctuary from loneliness, but it can also be the source," the researchers note.

The review identifies several promising interventions to combat chronic loneliness. Organizations can help by offering training on stress management and social skills, while individuals can find relief through mindfulness practices and volunteering.

Article Title

All the Lonely People: An Integrated Review and Research Agenda on Work and Loneliness

Why some messages are more convincing than others



UC San Diego research shows how marketers can choose specific words to boost confidence in a brand’s claim




University of California - San Diego




What kinds of marketing messages are effective — and what makes people believe certain political slogans more than others? New research from the University of California San Diego Rady School of Management explores how people constantly evaluate whether messages are true or false and finds that a surprisingly small ingredient — whether a word has an easy opposite — can shape how confident people feel when deciding whether a message is true.

“Effective messaging isn’t just about whether people agree with a claim — it’s about how confident they feel in that judgment,” said Giulia Maimone, who conducted the research while a doctoral student at UC San Diego’s Rady School of Management. “Understanding how language shapes that sense of certainty helps explain why some messages resonate more than others.”

Confidence — not just agreement — shapes how persuasive a message is

The study, forthcoming in the Journal of the Association for Consumer Research, reveals that the persuasiveness of a message can hinge on the type of words it uses — specifically, whether those words have clear opposites. The research shows that when companies frame a message with words that are “reversible,” meaning they have an easily retrievable opposite (such as intense/mild or guilty/innocent), people who disagree with the claim tend to mentally flip it to the opposite meaning (for example, “The scent is intense” becomes “The scent is mild”). 

Why words with clear opposites are processed differently

The study shows that this difference matters because people handle disagreement in different ways. When a message uses a word with a clear opposite, rejecting the claim requires an extra step: retrieving and substituting the opposite word. That extra step makes people feel less certain about their opposing belief. But when a word doesn’t have a clear opposite, people tend to negate it by simply adding “not” to the original word (for example, “not prominent” or “not romantic”). In those cases, the study finds that skeptics tend to feel more confident in their counter-belief, making those messages less effective overall.

A strategic advantage for marketers

“For marketers, this creates a powerful advantage: by using easily reversible words in a positive affirmation — such as ‘the scent is intense’ — companies can maximize certainty among those who accept the claim while minimizing certainty among people who reject the message, because they tend to feel less strongly about their opposing belief,” said Maimone, who is now a postdoctoral scholar in marketing at the University of Florida. “Our study highlights a subtle but influential linguistic mechanism that helps explain why some marketing and political messages are more effective than others.”

That’s why this matters for marketing. If a company uses a simple, positive claim with an easily reversible word — like “the scent is intense” — most consumers who believe it feel confident in that belief. But even the consumers who disagree tend to feel less sure about their own negative conclusion, because flipping the message to the opposite (“it’s mild”) takes extra mental work. In other words, the wording can strengthen the intended message because it can soften the pushback.

“People don’t just decide ‘true’ or ‘false’ — they also form a level of certainty that affects how persuasive a message becomes,” said Uma R. Karmarkar, study coauthor and associate professor at UC San Diego’s Rady School and the School of Global Policy and Strategy.

Testing the effect outside the lab

In a field test with Facebook ads created in collaboration with a nonprofit, the team found that ad language designed to trigger the higher-confidence processing pathway produced a higher click-through rate than language designed to trigger the lower-confidence pathway.

“Language isn’t just how we communicate — it can be a strategic lever,” said On Amir, study coauthor and professor of marketing at the Rady School. “The right wording can help an intended message land more firmly — and make the counter-belief feel less certain.”

How the researchers studied belief confidence

To reach these conclusions, the researchers conducted two controlled experiments involving more than 1,000 participants who were asked to judge whether a variety of statements were true or false and then report how confident they felt in those judgments. By systematically varying the wording type of the statements — and measuring both response time and confidence — the team was able to isolate how different types of language trigger distinct cognitive processes that shape belief certainty.

Malicious AI swarms pose emergent threats to democracy



Summary author: Walter Beckwith


American Association for the Advancement of Science (AAAS)




In a Policy Forum, Daniel Schroeder and colleagues discuss the risks of malicious “Artificial Intelligence (AI) swarms”, which enable a new class of large-scale, coordinated disinformation campaigns that pose significant risks to democracy. Manipulation of public opinion has long relied on rhetoric and propaganda. However, modern AI systems have created powerful new tools for shaping human beliefs and behavior on a societal scale. Large language models (LLMs) and autonomous agents can now generate vast amounts of persuasive, human-like content. When combined into collaborative AI swarms – collections of AI-driven personas that retain memory and identity – these systems can mimic social dynamics and easily infiltrate online communities, making false narratives appear credible and widely shared.

According to the authors, unlike earlier labor-intensive influence operations run by humans, AI systems can operate cheaply, consistently, and at tremendous scale, transforming once-isolated disinformation efforts into persistent, adaptive campaigns that pose serious risks to democratic processes worldwide. Here, Schroeder et al. discuss the technology underpinning these malicious systems and identify pathways through which they can harm democratic discourse through widely used digital platforms.

The authors argue that defense against these systems must be layered and pragmatic, aiming not for total prevention of their use, which is highly unlikely, but for raising the cost, risk, and visibility of manipulation. Because such efforts would require global coordination outside of corporate and governmental interests, Schroeder et al. propose a distributed “AI Influence Observatory,” consisting of a network of academic groups, nongovernmental organizations, and other civil institutions to guide independent oversight and action.
“Success depends on fostering collaborative action without hindering scientific research while ensuring that the public sphere remains both resilient and accountable,” write the authors. “By committing now to rigorous measurement, proportionate safeguards, and shared oversight, upcoming elections could even become a proving ground for, rather than a setback to, democratic AI governance.”


AI is already writing almost one-third of new software code


To make AI more fair, tame complexity



Biases in AI models can be reduced by better reflecting the complexities of the real world



University of Texas at Austin





In April 2025, OpenAI’s popular ChatGPT hit a milestone of a billion active weekly users, as artificial intelligence continued its explosion in popularity.

But with that popularity has come a dark side. Biases in AI’s models and algorithms can actively harm some of its users and promote social injustice. Documented biases have led to different medical treatments due to patients’ demographics and corporate hiring tools that discriminate against female and Black candidates.

New research from Texas McCombs suggests a previously unexplored source of AI bias, along with ways to correct for it: complexity.

“There’s a complex set of issues that the algorithm has to deal with, and it’s infeasible to deal with those issues well,” says Hüseyin Tanriverdi, associate professor of information, risk, and operations management. “Bias could be an artifact of that complexity rather than other explanations that people have offered.”

With John-Patrick Akinyemi, a McCombs Ph.D. candidate in IROM, Tanriverdi studied a set of 363 algorithms that researchers and journalists had identified as biased. The algorithms came from a repository called AI Algorithmic and Automation Incidents and Controversies.

The researchers compared each problematic algorithm with one that was similar in nature but had not been called out for bias. They examined not only the algorithms but also the organizations that created and used them.

Prior research has assumed that bias can be reduced by making algorithms more accurate. But that assumption, Tanriverdi found, did not tell the whole story. He found three additional factors, all stemming from the same underlying problem: failing to properly model complexity.

Ground truth. Some algorithms are asked to make decisions when there’s no established ground truth: the reference against which the algorithm’s outcomes are evaluated. An algorithm might be asked to guess the age of a bone from an X-ray image, even though in medical practice, there’s no established way for doctors to do so.

In other cases, AI may mistakenly treat opinions as objective truths — for example, when social media users are evenly split on whether a post constitutes hate speech or protected free speech.

AI should only automate decisions for which ground truth is clear, Tanriverdi says. “If there is not a well-established ground truth, then the likelihood that bias will emerge significantly increases.”

Real-world complexity. AI models inevitably simplify the situations they describe. Problems can arise when they miss important components of reality.

Tanriverdi points to a case in which Arkansas replaced home visits by nurses with automated rulings on Medicaid benefits. It had the effect of cutting off disabled people from assistance with eating and showering.

“If a nurse goes and walks around to the house, they will be able to understand more about what kind of support this person needs,” he says. “But algorithms were using only a subset of those variables, because data was not available on everything.

“Because of omission of the relevant variables in the model, that model was no longer a good enough representation of reality.”

Stakeholder involvement. When a model serving a diverse population is designed mostly by members of a single demographic, it becomes more susceptible to bias. One way to counter this risk is to ensure that all stakeholder groups have a voice in the development process.

By involving stakeholders who may have conflicting goals and expectations, an organization can determine whether it’s possible to meet them all. If it’s not, Tanriverdi says, “It may be feasible to reach compromise solutions that everyone is OK with.”

The research concludes that taming AI bias involves much more than making algorithms more accurate. Developers need to open up their black boxes to account for real-world complexities, input from diverse groups, and ground truths.

“The factors we focus on have a direct effect on the fairness outcome,” Tanriverdi says. “These are the missing pieces that data scientists seem to be ignoring.”

“Algorithmic Social Injustice: Antecedents and Mitigations”  is published in MIS Quarterly.

 

Generative AI use and depressive symptoms among US adults



JAMA Network




About The Study: This survey study found that artificial intelligence (AI) use was significantly associated with greater depressive symptoms, with the magnitude of differences varying by age group. Further work is needed to understand whether these associations are causal and to explain heterogeneous effects.


Corresponding Author: To contact the corresponding author, Roy H. Perlis, MD, MSc, email rperlis@mgb.org.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamanetworkopen.2025.54820)

Editor’s Note: Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

#  #  #

Embed this link to provide your readers free access to the full-text article 

 https://jamanetwork.com/journals/jamanetworkopen/fullarticle/10.1001/jamanetworkopen.2025.54820?guestAccessKey=1b34668e-afe8-4888-aa3d-dd05b3b83eff&utm_source=for_the_media&utm_medium=referral&utm_campaign=ftm_links&utm_content=tfl&utm_term=012126

About JAMA Network Open: JAMA Network Open is an online-only open access general medical journal from the JAMA Network. On weekdays, the journal publishes peer-reviewed clinical research and commentary in more than 40 medical and health subject areas. Every article is free online from the day of publication.