Friday, May 30, 2025

 

Q&A: What universities can learn about navigating ideological tension from the history of same-sex domestic partner benefits



University of Washington





As public universities across the U.S. face increasing scrutiny over issues such as diversity initiatives and tenure protections, new research from the University of Washington offers timely lessons on how universities can navigate politically charged issues without abandoning their core commitments.

The study, recently published in Organization Science, examines how public universities decided whether to offer same-sex domestic partner benefits in the early 1990s and 2000s. Researchers found that universities — especially those in conservative states — often strategically adjusted not just whether and when they adopted inclusive policies, but also how they justified those decisions.

“When universities face powerful stakeholders who oppose their values, how they frame their decisions can be as important as the decisions themselves,” said Abhinav Gupta, co-author and professor of management in the UW Foster School of Business.

UW News spoke with Gupta about what universities can learn from this earlier period of cultural and political tension.

Can you tell me about the inspiration for this research?

AG: This project began when I was a doctoral student at The Pennsylvania State University, where my co-authors and I were interested in understanding how institutional change unfolds under ideological pressure. We were especially drawn to the LGBTQ+ rights movement, which has been one of the most successful in recent U.S. history — not only in shifting cultural values, but also in driving tangible changes in workplace policy and practice.

Among those changes, the adoption of same-sex domestic partner benefits by universities stood out as a concrete, measurable outcome with real resource implications. It offered us a focused way to examine how inclusive policies are implemented within institutions that must navigate competing political and economic demands.

We weren’t just curious about whether universities adopted these benefits — we wanted to understand how they managed the politics of those decisions, especially in states where conservative legislatures controlled university budgets. This was an opportunity to study how organizations pursue values-based change pragmatically, often advancing their commitments in ways that are sensitive to the views of key stakeholders.

Over time, we built a comprehensive dataset of top public universities, tracking the progression of this policy between 1990 and 2013. Modeling that process was painstaking, but it allowed us to identify patterns in how universities adopted and framed these decisions — strategically aligning with trusted actors in their environments, such as major local employers, and adjusting their rhetoric to reduce backlash.

Although history doesn’t repeat itself exactly, the same underlying dynamics often resurface. This case offers a narrow but revealing window into how change happens — not through confrontation alone, but through patient, careful work that gradually builds consensus. For anyone interested in advancing equity in complex institutional settings, there are valuable lessons in how the LGBTQ+ movement translated advocacy into durable, systemic shifts.

What patterns did you notice in universities’ decision making?

AG: One of the most striking dynamics we observed was in states where public universities relied heavily on funding from conservative legislatures. In these contexts, university administrators were often deeply concerned about potential backlash. They feared that allocating funds to support same-sex domestic partner benefits could be seen as ideologically out of step with legislative priorities.

We analyzed adoption patterns across major public universities — research powerhouses and flagship institutions throughout the U.S. — and found a clear and systematic pattern. Universities in more progressive states were often early adopters of these benefits, with some acting as early as 1991. In contrast, their peers in more conservative states often waited nearly a decade longer to adopt the same policies.

But what was particularly telling was how these later adopters framed their decisions. Many universities in red states did not lead with social justice arguments. Instead, they took a “business case” approach, aligning their decisions with market-based rationales — emphasizing competitiveness, talent recruitment and employee retention. These institutions typically adopted the policy only after major local employers had done so, effectively using the private sector as cover. This allowed them to present the decision as a practical response to labor market trends rather than an ideologically driven move.

This pattern led us to develop a broader theoretical insight: when organizations anticipate ideological resistance from key stakeholders, they often look to “exemplar organizations” — entities already seen as legitimate by those stakeholders. By emulating the behavior of these exemplars and adopting rhetoric that reflects stakeholder values, they can diffuse opposition and build support without abandoning their goals.

In contrast, universities in more liberal states often cited peer institutions and framed their decisions more explicitly around fairness and inclusion. What this shows is that organizations don’t simply conform or resist in the face of ideological tension — they adapt. They make strategic choices about when and how to act, often tailoring their message and reference points to gain legitimacy in diverse political and cultural environments.

What lessons can universities take from this case study, particularly in the current environment?

AG: We’re living through a time of heightened scrutiny and political tension, and universities increasingly find themselves at the center of it. In many ways, higher education has long enjoyed a degree of autonomy — but that autonomy rests on relationships with a broad set of external stakeholders whose values may not always align with those of university leadership, faculty or students.

This moment raises a fundamental question: What should universities do when their internal priorities come into conflict with the beliefs or expectations of those who hold influence over their resources — such as policymakers, donors or community leaders? Some might argue that institutions should stay true to their values no matter the cost. But our research suggests that universities benefit more when they strategically engage their environment, not ignore it.

This doesn't mean compromising principles. It means understanding the value systems of key stakeholders and learning to speak in ways that resonate. For example, when universities face resistance to inclusive policies, it can be effective to frame those decisions around economic competitiveness, workforce needs or community relevance — themes that often carry bipartisan appeal. The goal is not to dilute the message, but to translate it into language that expands support rather than provokes opposition.

In our research, we also emphasize the value of “exemplar organizations” — trusted institutions that skeptical stakeholders already view as legitimate. When a university can point to respected peers or private-sector leaders who have adopted a similar course of action, it lowers the perceived risk of following suit and frames the decision as pragmatic rather than ideological.

At their best, universities are extraordinary institutions. They create scientific breakthroughs, train healthcare professionals and business leaders, support local economies and open doors for the next generation. Their work benefits people across political, cultural and socioeconomic divides. To continue delivering that value, especially in contentious times, universities need to build broad-based coalitions — not by avoiding disagreement, but by finding common ground wherever possible.

Other co-authors were Chad Murphy of Oregon State University and Forrest Briscoe of Cornell University.

For more information, contact Gupta at abhinavg@uw.edu.

 

Borders and beyond: Excavating life on the medieval Mongolian frontier




The Hebrew University of Jerusalem
Image: Grave inside the garrison. Credit: Gideon Shelach-Lavi



New archaeological findings along a little-known medieval wall in eastern Mongolia reveal that frontier life was more complex than previously believed. Excavations show evidence of permanent habitation, agriculture, and cultural exchange, suggesting that these walls were not solely defensive structures but part of a broader system of regional control and interaction during the Jin dynasty.


Link to pictures and video: https://drive.google.com/drive/folders/1krCqKwVHzMIA-EaU7AhES47HikEgElmp?usp=sharing


A team of international archaeologists led by Professor Gideon Shelach-Lavi of the Department of Asian Studies at the Hebrew University of Jerusalem has uncovered new insights into life along one of Asia’s most enigmatic medieval frontiers. Their findings, recently published in the journal Antiquity, focus on a little-known section of the Medieval Wall System and reveal that the main function of this section was not military defense. In fact, excavation revealed that in this part of the Medieval Wall System there was no standing linear wall, only a relatively shallow trench that stretched over 300 km. Researchers now believe that the main function of this line, which also included walled forts, was managing the movement of nomadic populations, controlling local unrest, regulating trade, marking territory, and shaping regional interactions.


The Medieval Wall System is a vast network of trenches, earthen walls, and fortified enclosures constructed between the tenth and thirteenth centuries across parts of Mongolia, China, and Russia. Despite its impressive scale, many segments remain poorly understood. Since 2018, the collaborative research project The Wall: People and Ecology in Medieval Mongolia and China, based at the Hebrew University and funded by the European Research Council, has worked to map, excavate, and interpret these monumental features. The 2023 field season focused on the Mongolian Arc, a remote frontier zone running through Mongolia’s Sukhbaatar and Dornod provinces parallel to the current border with China.


“Our goal was not only to understand how these walls were built, but to uncover what life was like for the people who lived near them,” explained Professor Shelach-Lavi. “This goes beyond military history—it’s about reconstructing everyday experiences on the edges of imperial power.”

The team’s excavation centered on a fortified enclosure known as MA03 in Sukhbaatar Province, dated by radiocarbon analysis to the period of the Jin dynasty (twelfth to thirteenth century). Although traditionally thought to serve defensive purposes, the shallow trench near MA03 lacked a substantial wall, suggesting that it functioned more as a territorial marker or checkpoint than a military barrier. Within the enclosure, the researchers uncovered stone architecture, an advanced heating system, and a range of artifacts—including animal bones, pottery, iron tools, and a broken iron plough. These remains point to a permanent settlement engaged in herding, hunting, and agriculture, challenging the common perception of the region as exclusively nomadic. The heating system, similar to those found in medieval China and Korea, further suggests cultural exchange and adaptation to Mongolia’s severe winters.

One of the most striking discoveries was a mid-fifteenth-century burial inserted long after the enclosure had been abandoned. The grave, which contained well-preserved textiles, wooden objects, and metal artifacts, was dug directly into the collapsed remains of the enclosure wall.

“This tells us that even centuries later, the site still held meaning,” said Professor Shelach-Lavi. “It remained visible in the landscape and may have been remembered—or even revered—by later communities.”

The findings contribute to a growing body of research suggesting that ancient frontier walls across Eurasia served not just military ends, but also administrative and symbolic functions. In the context of Mongolia—long associated with mobile pastoralism—the study reveals a more complex and adaptable way of life.

“Our research reminds us to look beyond capital cities and royal courts,” said Professor Shelach-Lavi. “People lived, worked, traded, and built communities along these borderlands. Understanding their lives helps us understand the broader dynamics that shaped Eurasian history.”

Learn more: https://www.the-wall-huji.com/the-mongolian-arc


Image: Structure before excavations

Image: Excavation of the stone platform with the chimney

Credit: Tal Rogovski



Generative AI’s most prominent skeptic doubles down



By AFP
May 29, 2025


Generative AI critic Gary Marcus speaks at the Web Summit Vancouver 2025 tech conference in Vancouver, Canada - Copyright AFP/Don MacKinnon

Two and a half years after ChatGPT rocked the world, scientist and writer Gary Marcus remains generative artificial intelligence’s great skeptic, offering a counter-narrative to Silicon Valley’s AI true believers.

Marcus became a prominent figure of the AI revolution in 2023, when he sat beside OpenAI chief Sam Altman at a Senate hearing in Washington as both men urged politicians to take the technology seriously and consider regulation.

Much has changed since then. Altman has abandoned his calls for caution, instead teaming up with Japan’s SoftBank and funds in the Middle East to propel his company to sky-high valuations as he tries to make ChatGPT the next era-defining tech behemoth.

“Sam’s not getting money anymore from the Silicon Valley establishment,” and his seeking funding from abroad is a sign of “desperation,” Marcus told AFP on the sidelines of the Web Summit in Vancouver, Canada.

Marcus’s criticism centers on a fundamental belief: generative AI, the predictive technology that churns out seemingly human-level content, is simply too flawed to be transformative.

The large language models (LLMs) that power these capabilities are inherently broken, he argues, and will never deliver on Silicon Valley’s grand promises.

“I’m skeptical of AI as it is currently practiced,” he said. “I think AI could have tremendous value, but LLMs are not the way there. And I think the companies running it are not mostly the best people in the world.”

His skepticism stands in stark contrast to the prevailing mood at the Web Summit, where most conversations among 15,000 attendees focused on generative AI’s seemingly infinite promise.

Many believe humanity stands on the cusp of achieving superintelligence or artificial general intelligence (AGI): technology that could match and even surpass human capability.

That optimism has driven OpenAI’s valuation to $300 billion, unprecedented levels for a startup, with billionaire Elon Musk’s xAI racing to keep pace.

Yet for all the hype, the practical gains remain limited.

The technology excels mainly at coding assistance for programmers and text generation for office work. AI-created images, while often entertaining, serve primarily as memes or deepfakes, offering little obvious benefit to society or business.

Marcus, a longtime New York University professor, champions a fundamentally different approach to building AI — one he believes might actually achieve human-level intelligence in ways that current generative AI never will.

“One consequence of going all-in on LLMs is that any alternative approach that might be better gets starved out,” he explained.

This tunnel vision will “cause a delay in getting to AI that can help us beyond just coding — a waste of resources.”

– ‘Right answers matter’ –


Instead, Marcus advocates for neurosymbolic AI, an approach that attempts to rebuild human logic artificially rather than simply training computer models on vast datasets, as is done with ChatGPT and similar products like Google’s Gemini or Anthropic’s Claude.

He dismisses fears that generative AI will eliminate white-collar jobs, citing a simple reality: “There are too many white-collar jobs where getting the right answer actually matters.”

This points to AI’s most persistent problem: hallucinations, the technology’s well-documented tendency to produce confident-sounding mistakes.

Even AI’s strongest advocates acknowledge this flaw may be impossible to eliminate.

Marcus recalls a telling exchange from 2023 with LinkedIn founder Reid Hoffman, a Silicon Valley heavyweight: “He bet me any amount of money that hallucinations would go away in three months. I offered him $100,000 and he wouldn’t take the bet.”

Looking ahead, Marcus warns of a darker consequence once investors realize generative AI’s limitations. Companies like OpenAI will inevitably monetize their most valuable asset: user data.

“The people who put in all this money will want their returns, and I think that’s leading them toward surveillance,” he said, pointing to Orwellian risks for society.

“They have all this private data, so they can sell that as a consolation prize.”

Marcus acknowledges that generative AI will find useful applications in areas where occasional errors don’t matter much.

“They’re very useful for auto-complete on steroids: coding, brainstorming, and stuff like that,” he said.

“But nobody’s going to make much money off it because they’re expensive to run, and everybody has the same product.”


Stevens team teaches AI models to spot misleading scientific reporting



Using AI to flag unscientific claims could empower people to engage more confidently with media reports




Stevens Institute of Technology





Hoboken, N.J., May 28, 2025 — Artificial intelligence isn’t always a reliable source of information: large language models (LLMs) like Llama and ChatGPT can be prone to “hallucinating” and inventing bogus facts. But what if AI could be used to detect mistaken or distorted claims, and help people find their way more confidently through a sea of potential distortions online and elsewhere? 

In work presented at a workshop at the annual conference of the Association for the Advancement of Artificial Intelligence, researchers at Stevens Institute of Technology describe an AI architecture designed to do just that, using open-source LLMs and free versions of commercial LLMs to identify potentially misleading narratives in news reports on scientific discoveries.

“Inaccurate information is a big deal, especially when it comes to scientific content — we hear all the time from doctors who worry about their patients reading things online that aren’t accurate, for instance,” said K.P. Subbalakshmi, the paper’s co-author and a professor in the Department of Electrical and Computer Engineering at Stevens. “We wanted to automate the process of flagging misleading claims and use AI to give people a better understanding of the underlying facts.”

To achieve that, the team, two PhD students and two master’s students led by Subbalakshmi, first created a dataset of 2,400 news reports on scientific breakthroughs. The dataset included both human-generated reports, drawn either from reputable science journals or from low-quality sources known to publish fake news, and AI-generated reports, half of which were reliable and half of which contained inaccuracies. Each report was then paired with original research abstracts related to the technical topic, enabling the team to check each report for scientific accuracy. According to Subbalakshmi, their work is the first attempt at systematically directing LLMs to detect inaccuracies in science reporting in public media.
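To make the pairing concrete, a single record in a dataset of this kind might look roughly like the sketch below; the field names and example values are illustrative assumptions, not the team’s actual schema.

from dataclasses import dataclass

@dataclass
class ScienceReportRecord:
    # One entry pairing a news report with its source research.
    # Field names are illustrative; the release does not describe the actual schema.
    report_text: str             # full text of the news report
    source_abstracts: list[str]  # abstracts of the related peer-reviewed research
    is_human_written: bool       # human-generated vs. AI-generated report
    is_accurate: bool            # ground-truth label: faithful to the research or not

# Hypothetical example record
record = ScienceReportRecord(
    report_text="New study proves coffee cures insomnia...",
    source_abstracts=["We observed a weak correlation between caffeine intake and..."],
    is_human_written=True,
    is_accurate=False,
)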

“Creating this dataset is an important contribution in its own right, since most existing datasets typically do not include information that can be used to test systems developed to detect inaccuracies ‘in the wild,’” Dr. Subbalakshmi said. “These are difficult topics to investigate, so we hope this will be a useful resource for other researchers.”

Next, the team created three LLM-based architectures to guide an LLM through the process of determining a news report’s accuracy. One of these architectures is a three-step process. First, the AI model summarized each news report and identified the salient features. Next, it conducted sentence-level comparisons between claims made in the summary and evidence contained in the original peer-reviewed research. Finally, the LLM made a determination as to whether the report accurately reflected the original research.
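As a rough illustration of that three-step flow, the sketch below strings the stages together around a generic chat-model call; the prompts and the ask_llm helper are assumptions made for the sake of the example, not the Stevens team’s actual code.

def ask_llm(prompt: str) -> str:
    # Placeholder for a call to whatever open-source or commercial LLM is used;
    # replace with a real API or local-inference call.
    raise NotImplementedError

def assess_report(report_text: str, abstracts: list[str]) -> str:
    # Step 1: summarize the news report and pull out its salient claims.
    summary = ask_llm(
        "Summarize this news report and list its key scientific claims:\n" + report_text
    )

    # Step 2: compare the claims, sentence by sentence, against the peer-reviewed evidence.
    comparison = ask_llm(
        "For each claim below, say whether it is supported, contradicted, or not "
        "addressed by the research abstracts.\n\nClaims:\n" + summary
        + "\n\nAbstracts:\n" + "\n---\n".join(abstracts)
    )

    # Step 3: make an overall determination of the report's accuracy.
    return ask_llm(
        "Given this claim-by-claim comparison, does the news report accurately "
        "reflect the original research? Answer 'reliable' or 'unreliable' and "
        "explain briefly.\n\n" + comparison
    )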

The team also defined five “dimensions of validity” and asked the LLM to consider them: specific mistakes, such as oversimplification or confusing causation with correlation, that are commonly present in inaccurate news reports. “We found that asking the LLM to use these dimensions of validity made quite a big difference to the overall accuracy,” Dr. Subbalakshmi said, adding that the dimensions can be expanded upon to better capture domain-specific inaccuracies if needed.
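Folding those dimensions into the prompt could look something like the sketch below; only oversimplification and the causation/correlation confusion are named in this release, so the remaining entries in the list are hypothetical stand-ins.

# Only the first two dimensions are named in the release; the rest are assumed examples.
DIMENSIONS_OF_VALIDITY = [
    "oversimplification of the findings",
    "confusing correlation with causation",
    "overgeneralization beyond the studied population",  # assumed
    "exaggeration of effect size or certainty",          # assumed
    "omission of key limitations or caveats",            # assumed
]

def build_validity_prompt(report_text: str, abstracts: str) -> str:
    dims = "\n".join("- " + d for d in DIMENSIONS_OF_VALIDITY)
    return (
        "Check the news report against the research abstracts, considering each "
        "of these dimensions of validity:\n" + dims
        + "\n\nNews report:\n" + report_text
        + "\n\nAbstracts:\n" + abstracts
        + "\n\nFor each dimension, note whether the report exhibits that problem."
    )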

Using the new dataset, the team’s LLM pipelines were able to correctly distinguish between reliable and unreliable news reports with about 75% accuracy, but proved markedly better at identifying inaccuracies in human-generated content than in AI-generated reports. The reasons for that aren’t yet clear, although Dr. Subbalakshmi notes that non-expert humans similarly struggle to identify technical errors in AI-generated text. “There’s certainly room for improvement in our architecture,” Dr. Subbalakshmi said. “The next step might be to create custom AI models for specific research topics, so they can ‘think’ more like human scientists.”

In the long run, the team’s research could open the door to browser plugins that automatically flag inaccurate content as people use the Internet, or to rankings of publishers based on how accurately they cover scientific discoveries. Perhaps most importantly, Dr. Subbalakshmi says, the research could also enable the creation of LLM models that describe scientific information more accurately, and that are less prone to confabulating when describing scientific research.  

“Artificial intelligence is here — we can’t put the genie back in the bottle,” Dr. Subbalakshmi said. “But by studying how AI ‘thinks’ about science, we can start to build more reliable tools — and perhaps help humans to spot unscientific claims more easily, too.”

 

About Stevens Institute of Technology
Stevens Institute of Technology is a premier, private research university situated in Hoboken, New Jersey. Since our founding in 1870, technological innovation has been the hallmark of Stevens’ education and research. Within the university’s three schools and one college, more than 8,000 undergraduate and graduate students collaborate closely with faculty in an interdisciplinary, student-centric, entrepreneurial environment. Academic and research programs spanning business, computing, engineering, the arts and other disciplines actively advance the frontiers of science and leverage technology to confront our most pressing global challenges. The university continues to be consistently ranked among the nation’s leaders in career services, post-graduation salaries of alumni and return on tuition investment.

Horses ‘mane’ inspiration for new generation of social robots



University of Bristol
Image: Ellen receiving equine-assisted intervention (EAI) therapy. Credit: Ellen Weir




Interactive robots should not just be passive companions but active partners, like therapy horses that respond to human emotion, say University of Bristol researchers.

Equine-assisted interventions (EAIs) offer a powerful alternative to traditional talking therapies for patients with PTSD, trauma and autism, who struggle to express and regulate emotions through words alone.

The study, presented at CHI '25, the 2025 CHI Conference on Human Factors in Computing Systems held in Yokohama, recommends that therapeutic robots should also exhibit a level of autonomy, rather than offering one-dimensional displays of friendship and compliance.

Lead author Ellen Weir from Bristol’s Faculty of Science and Engineering explains: “Most social robots today are designed to be obedient and predictable - following commands and prioritising user comfort.

“Our research challenges this assumption.”

In EAIs, individuals communicate with horses through body language and emotional energy. If someone is tense or unregulated, the horse resists their cues. When the individual becomes calm, clear, and confident, the horse responds positively. This ‘living mirror’ effect helps participants recognise and adjust their emotional states, improving both internal well-being and social interactions.

However, EAIs require highly trained horses and facilitators, making them expensive and inaccessible.

Ellen continued: “We found that therapeutic robots should not be passive companions but active co-workers, like EAI horses.

“Just as horses respond only when a person is calm and emotionally regulated, therapeutic robots should resist engagement when users are stressed or unsettled. By requiring emotional regulation before responding, these robots could mirror the therapeutic effect of EAIs, rather than simply providing comfort.”

This approach has the potential to transform robotic therapy, helping users develop self-awareness and regulation skills, just as horses do in EAIs.

Beyond therapy, this concept could influence human-robot interaction in other fields, such as training robots for social skills development, emotional coaching, or even stress management in workplaces.

A key question is whether robots can truly replicate - or at least complement - the emotional depth of human-animal interactions. Future research must explore how robotic behaviour can foster trust, empathy, and fine tuning, ensuring these machines support emotional well-being in a meaningful way.

Ellen added: “The next challenge is designing robots that can interpret human emotions and respond dynamically—just as horses do. This requires advances in emotional sensing, movement dynamics, and machine learning.

“We must also consider the ethical implications of replacing sentient animals with machines. Could a robot ever offer the same therapeutic value as a living horse? And if so, how do we ensure these interactions remain ethical, effective, and emotionally authentic?”

  

Image: Diagram showing how Equine-Assisted Interventions (EAIs) work. Credit: Ellen Weir

Paper:

"You Can Fool Me, You Can’t Fool Her!": Autoethnographic Insights from Equine-Assisted Interventions to Inform Therapeutic Robot Design by Ellen Weir, Ute Leonards and Anne Roudaut Metatla presented at CHI '25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems.