Monday, April 28, 2025

 

AI suggestions make writing more generic, Western



Cornell University




ITHACA, N.Y. – A new study from Cornell University finds that AI-based writing assistants risk serving billions of users in the Global South poorly by generating generic language that makes them sound more like Americans.

The study showed that when Indians and Americans used an AI writing assistant, their writing became more similar, mainly at the expense of Indian writing styles. While the assistant helped both groups write faster, Indians got a smaller productivity boost, because they frequently had to correct the AI’s suggestions.

“This is one of the first studies, if not the first, to show that the use of AI in writing could lead to cultural stereotyping and language homogenization,” said senior author Aditya Vashistha, assistant professor of information science. “People start writing similarly to others, and that’s not what we want. One of the beautiful things about the world is the diversity that we have.”

The study, “AI Suggestions Homogenize Writing Toward Western Styles and Diminish Cultural Nuances,” will be presented by first author Dhruv Agarwal, a doctoral student in the field of information science, at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI).

ChatGPT and other popular AI tools powered by large language models are primarily developed by U.S. tech companies but are increasingly used worldwide, including by the 85% of the world’s population that lives in the Global South.

To investigate how these tools may be affecting people in non-Western cultures, the research team recruited 118 people, about half from the U.S. and half from India, and asked them to write about cultural topics. Half of the participants from each country completed the writing assignments independently, while the other half had an AI writing assistant that provided short autocomplete suggestions. The researchers logged the participants’ keystrokes and whether they accepted or rejected each suggestion.

A comparison of the writing samples showed that Indians were more likely to accept the AI’s help, keeping 25% of the suggestions compared to 19% kept by Americans. However, Indians were also significantly more likely to modify the suggestions to fit their topic and writing style, making each suggestion less helpful, on average.
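The acceptance and modification rates reported above are simple ratios over logged suggestion events. As a toy illustration only, a sketch of that arithmetic is below; the event schema and example values are hypothetical, not the study’s actual logs.

```python
# Hypothetical suggestion-event log. The field names and values here are
# invented for illustration; the study's real logging format is not public.
events = [
    {"suggestion": "pizza",    "accepted": True,  "final_text": "biryani"},
    {"suggestion": "Diwali",   "accepted": True,  "final_text": "Diwali"},
    {"suggestion": "Christmas","accepted": False, "final_text": None},
    {"suggestion": "cricket",  "accepted": True,  "final_text": "cricket"},
]

# Acceptance rate: fraction of AI suggestions the writer kept at all.
accepted = [e for e in events if e["accepted"]]
acceptance_rate = len(accepted) / len(events)

# Modification rate: of the kept suggestions, how many were edited afterward.
modified = [e for e in accepted if e["final_text"] != e["suggestion"]]
modification_rate = len(modified) / len(accepted)

print(f"accepted {acceptance_rate:.0%} of suggestions; "
      f"modified {modification_rate:.0%} of those accepted")
```

In the study’s framing, a high modification rate erodes the productivity benefit of a high acceptance rate, since each edited suggestion costs the writer extra keystrokes.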

For example, when participants were asked to write about their favorite food or holiday, AI consistently suggested American favorites, pizza and Christmas, respectively. When writing about a public figure, if an Indian entered “S” in an attempt to type Shah Rukh Khan, a famous Bollywood actor, AI would suggest Shaquille O’Neal or Scarlett Johansson.

“When Indian users use writing suggestions from an AI model, they start mimicking American writing styles to the point that they start describing their own festivals, their own food, their own cultural artifacts from a Western lens,” Agarwal said.

This need for Indian users to continually push back against the AI’s Western suggestions is evidence of AI colonialism, researchers said. By suppressing Indian culture and values, the AI presents Western culture as superior, and may not only shift what people write, but also what they think.

“These technologies obviously bring a lot of value into people’s lives,” Agarwal said, “but for that value to be equitable and for these products to do well in these markets, tech companies need to focus on cultural aspects, rather than just language aspects.”

For additional information, see this Cornell Chronicle story.

-30-



Carnegie Mellon launches Human-Centered AI Research Center with Seoul National University


Work at SNU-CMU HCAI Center will enhance lives and society


Carnegie Mellon University

Image: Seoul National University and Carnegie Mellon University faculty stand in front of a banner during the Opening Ceremony for the new Human-Centered AI Research Center. (Credit: Carnegie Mellon University)





Carnegie Mellon University and Seoul National University (SNU) have announced a new collaboration to advance human-centered artificial intelligence research that prioritizes human well-being, accessibility and social responsibility.

The SNU-CMU Human-Centered AI Research Center (HCAI) aims to pioneer innovative AI solutions by combining interdisciplinary expertise in human-centered design.

“We’re excited to officially launch this partnership with our colleagues at Seoul National University,” said Laura Dabbish, professor in the Human-Computer Interaction Institute at Carnegie Mellon University’s School of Computer Science. “The groundwork for the center began two years ago with workshops that brought together students and faculty from both universities. We’re now building on that foundation to reimagine how AI can support human connection, empower individuals and enhance everyday life. Together, we’re creating a global model for AI innovation that’s rooted in human needs and values.”

So far, the researchers have hosted on-site workshops at both campuses and reconvened while attending the same conference. HCII faculty David Lindlbauer and John Stamper participated in the HCAI Center’s official opening ceremony on February 13, 2025, at the SNU campus in Seoul, South Korea. Attendees discussed the vision for the center, upcoming research initiatives and opportunities for joint projects.

"The Human-Centered AI Research Center brings together the best of Seoul National University and Carnegie Mellon University to advance AI that serves humanity,” said Gahgene Gweon, associate professor in the SNU Department of Intelligence and Information and HCII Ph.D. alumna. “By combining SNU’s leading role in AI innovation across Asia with CMU’s excellence in interdisciplinary and human-centered AI, we are pioneering research that makes AI more ethical, intuitive and impactful for society." 

One of the center’s first research collaborations has already achieved major recognition at the Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems (CHI). The joint SNU-CMU paper, “Letters from Future Self: Augmenting the Letter-Exchange Exercise with LLM-based Future Self Agents to Enhance Young Adults’ Career Exploration,” was accepted to CHI 2025 and honored with a Best Paper award, an accolade reserved for the top 1% of submissions. The project explored how large language models (LLMs) can support young adults in imagining their futures through guided career exploration activities, such as writing reflective letters to themselves or exchanging chats or letters with an LLM-based agent for advice. The work also examines how LLM-based conversational agents can simulate specific characters and deliver tailored, personalized interventions in self-guided contexts.

“Researchers at CMU and SNU have a shared interest in how Agentic AI offers a new type of social intelligence that might allow agents to operate in complex interpersonal relationships,” said John Zimmerman, Tang Family Professor of Artificial Intelligence and Human-Computer Interaction at CMU and co-author on the paper. “We want to explore how agents could and should operate between older parents and their adult children, between teens and parents, and between bosses and teams. When does the agent add value, and when has it crossed a social boundary?”

CMU and SNU have four joint research projects in the works for 2025. Each project brings together interdisciplinary teams of faculty and students from both institutions to advance ethical, people-centered AI. These projects explore key challenges in AI, including: 

  • Supporting teamwork in programming, with faculty leads Stamper and Carolyn Rosé of CMU and Gweon of SNU.
  • Enhancing interactive problem-solving with Nikolas Martelaro and Scott Hudson of CMU and Joonhwan Lee of SNU.
  • Detecting societal bias in vision-language models with Motahhare Eslami, Ken Holstein, Hong Shen, Adam Perer and Jason Hong of CMU and Gunhee Kim and Eunkyu Park of SNU.
  • Assisting older adults through socially intelligent agents with Zimmerman and Jodi Forlizzi of CMU, and Hajin Lim and Eunmee Kim of SNU.

Additional HCAI activities slated for this year will include research workshops, student internships and faculty and student visits between the campuses.

More details about the center are available on its website.

 

With AI, researchers can now identify the smallest crystals



AI solves the century-old puzzle of uncovering the shape of atomic clusters by examining the patterns produced by an X-ray beam diffracted through fine powder.



Columbia University School of Engineering and Applied Science

Image: Crystallography is the science of analyzing the pattern produced by shining an X-ray beam through a material sample. A powder sample produces a different pattern than a solid crystal. (Credit: Columbia Engineering)





One longstanding problem has sidelined life-saving drugs, stalled next-generation batteries, and kept archaeologists from identifying the origins of ancient artifacts. 

For more than 100 years, scientists have used a method called crystallography to determine the atomic structure of materials. The method works by shining an X-ray beam through a material sample and observing the pattern it produces. From this pattern – called a diffraction pattern – it is theoretically possible to calculate the exact arrangement of atoms in the sample. The challenge, however, is that this technique only works well when researchers have large, pure crystals. When they have to settle for a powder of minuscule pieces — called nanocrystals — the method only hints at the unseen structure.
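The release describes this pattern-from-structure relationship only in words. For a powder of tiny, randomly oriented crystallites, the textbook forward calculation is the Debye scattering equation, which averages over all orientations. The sketch below is an illustration of that equation on a toy 8-atom cluster with a unit scattering factor, not the authors' code:

```python
import numpy as np

def debye_powder_pattern(positions, q_values, f=1.0):
    """Orientationally averaged intensity I(Q) for a cluster of identical
    atoms, via the Debye scattering equation:
        I(Q) = sum_{j,k} f^2 * sin(Q * r_jk) / (Q * r_jk)
    where r_jk is the distance between atoms j and k."""
    # Pairwise distances between all atoms, shape (N, N).
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))
    intensity = np.empty_like(q_values)
    for i, q in enumerate(q_values):
        with np.errstate(invalid="ignore"):
            s = np.sin(q * r) / (q * r)   # 0/0 on the diagonal -> nan
        np.fill_diagonal(s, 1.0)          # limit of sin(x)/x as x -> 0
        intensity[i] = (f ** 2) * s.sum()
    return intensity

# Toy "nanocrystal": 8 atoms on the corners of a cube, 2.5 Å apart.
grid = np.array([[x, y, z] for x in (0, 2.5)
                 for y in (0, 2.5) for z in (0, 2.5)])
q = np.linspace(0.5, 8.0, 200)            # scattering vector magnitude, 1/Å
pattern = debye_powder_pattern(grid, q)
```

Running the same calculation for a large crystal yields sharp peaks that pin down the structure; for a cluster this small the peaks blur together, which is exactly the information loss the article describes.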

Scientists at Columbia Engineering have created a machine learning algorithm that can observe the pattern produced by nanocrystals to infer the material’s atomic structure, as described in a new study published in Nature Materials. In many cases, their algorithm achieves near-perfect reconstruction of the atomic-scale structure from the highly degraded diffraction information — a feat unimaginable just a couple of years ago. 

“The AI solved this problem by learning everything it could from a database of many thousands of known, but unrelated, structures,” says Simon Billinge, professor of materials science and of applied physics and applied mathematics at Columbia Engineering. “Just as ChatGPT learns the patterns of language, the AI model learned the patterns of atomic arrangements that nature allows.”

Crystallography Transformed Science
Crystallography is vital to science because it’s the most effective method for understanding the properties of virtually any material. The method typically relies on a technique called X-ray diffraction, in which scientists shoot energetic beams at a crystal and record the pattern of light and dark spots it produces, sort of like a shadow. When crystallographers use this technique to analyze a large and pure sample, the resulting X-ray patterns contain all the information needed to determine its atomic-level structure. Best known for enabling the discovery of DNA’s double-helix structure, the method has opened important avenues of research in medicine, semiconductors, energy storage, forensic science, archaeology, and dozens of other fields. 

Unfortunately, researchers often only have access to samples of very small crystallites, or atomic clusters, in the form of powder or suspended in solution. In these cases, the X-ray patterns contain much less information, far too little for researchers to determine the sample’s atomic structure using existing methods. 

AI Extends the Method to Nanoparticles
The team trained a generative AI model on 40,000 known atomic structures to develop a system able to make sense of these inferior X-ray patterns. The machine learning technique, called diffusion generative modeling, emerged from statistical physics and recently gained prominence for powering AI image- and video-generation tools like Midjourney and Sora.

“From previous work, we knew that diffraction data from nanocrystals doesn’t contain enough information to yield the result,” Billinge said. “The algorithm used its knowledge of thousands of unrelated structures to augment the diffraction data.”

To apply the technique to crystallography, the scientists began with a dataset of 40,000 crystal structures and jumbled the atomic positions until they were indistinguishable from random placement. Then, they trained a deep neural network to connect these almost randomly placed atoms with their associated X-ray diffraction patterns. The net used these observations to reconstruct the crystal. Finally, they put the AI-generated crystals through a procedure called Rietveld refinement, which essentially “jiggles” crystals into the closest optimal state, based on the diffraction pattern.
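The "jumbling" step described above is the forward process of a standard diffusion model: coordinates are progressively corrupted with Gaussian noise until they look random. A minimal numpy sketch of that forward-noising step, using a conventional variance-preserving schedule (an illustration of the general technique, not the study's training code):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_noise(positions, t, T=1000, beta_min=1e-4, beta_max=0.02):
    """Forward diffusion: 'jumble' atomic coordinates toward pure noise.
    Standard variance-preserving form:
        x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
    At t near T the positions are indistinguishable from Gaussian noise."""
    betas = np.linspace(beta_min, beta_max, T)       # noise schedule
    alpha_bar = np.prod(1.0 - betas[:t])             # cumulative signal kept
    eps = rng.standard_normal(positions.shape)       # Gaussian jumbling
    return np.sqrt(alpha_bar) * positions + np.sqrt(1.0 - alpha_bar) * eps

x0 = rng.standard_normal((16, 3))    # toy crystal: 16 atoms in 3-D
x_early = forward_noise(x0, t=10)    # still close to the true structure
x_late = forward_noise(x0, t=999)    # nearly pure noise
```

Training then teaches a network to reverse this corruption step by step, conditioned on the diffraction pattern, which is how the model can start from near-random atoms and arrive at a plausible crystal.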

Although early versions of this algorithm struggled, it eventually learned to reconstruct crystals far more effectively than the researchers had expected. The algorithm was able to determine the atomic structure from nanometer-sized crystals of various shapes, including samples that had proven too difficult for previous experiments to characterize. 

“The powder crystallography challenge is a sister problem to the famous protein folding problem where the shape of a molecule is derived indirectly from a linear data signature,” said Hod Lipson, James and Sally Scapa Professor of Innovation and chair of the Department of Mechanical Engineering at Columbia Engineering, who, with Billinge, co-proposed the study. “What particularly excites me is that with relatively little background knowledge in physics or geometry, AI was able to learn to solve a puzzle that has baffled human researchers for a century. This is a sign of things to come for many other fields facing long-standing challenges.”

The century-old powder crystallography puzzle is particularly meaningful to Lipson, who is the grandson of Henry Lipson CBE FRS (1910–1991), who pioneered computational crystallography methods. In the 1930s, Henry Lipson worked with Bragg and other contemporaries to develop early mathematical techniques that were broadly used to solve the first complex molecules, such as penicillin, leading to the 1964 Nobel Prize in Chemistry.

Gabe Guo BS’24, currently a PhD student at Stanford University, who led the project while he was a senior at Columbia, said, “When I was in middle school, the field was struggling to build algorithms that could tell cats from dogs. Now, studies like ours underscore the massive power of AI to augment the power of human scientists and accelerate innovation to new levels.”



