Friday, February 14, 2025

 

The complicated question of how we determine who has an accent



A disconnect between how people judge speakers versus regions



Ohio State University





COLUMBUS, Ohio – How do you tell if someone has a particular accent?  It might seem obvious: You hear someone pronounce words in a way that is different from “normal” and connect it to other people from a specific place.

 

But a new study suggests that might not be the case.

 

“People probably don’t learn who has an accent from hearing someone talk and thinking, ‘huh, they sound funny’ – even though sometimes it feels like that’s how we do it,” said Kathryn Campbell-Kibler, author of the study and associate professor of linguistics at The Ohio State University.

 

Instead, accents may be something we learn about culturally, Campbell-Kibler said.

 

The study was published online recently in the Journal of Sociolinguistics.

 

The study is part of a long-term project that Ohio State researchers have been conducting at the Language Sciences Research Lab at the Center of Science and Industry (COSI), a science museum in Columbus.

 

In this study, researchers asked visitors to the museum to participate in a study about accents. In the end, 1,106 people ages 9 and up – mostly Ohioans – took part.

 

All participants heard a series of recordings, each featuring a speaker saying several words, all with the same vowel in them.

 

“Americans often listen to vowels to judge how much of an accent someone has,” Campbell-Kibler said.

 

For example, some people may pronounce “pen” so that it sounds to others like they are saying “pin.”

 

In all, participants heard 15 speakers pronounce words like “pass,” “food” and “pen.” 

 

Participants rated each speaker on a scale from “not at all accented” to “very accented.”

 

Although the participants weren’t told this at the time, all the speakers had grown up in one of three regions of Ohio that linguists have coded as having particular accents: northern Ohio (the Inland North accent), central Ohio (the Midland accent) and southern Ohio (the South accent).

 

After rating the accents of the people they heard, participants were then asked to rate how accented they thought the speech was in various parts of Ohio on a scale of 0 (no accent) to 100 (very accented). Based on the similarity of answers, Campbell-Kibler was able to collapse them into the three categories of northern, central and southern Ohio.

 

In general, museum visitors thought people from southern Ohio had the strongest accents, rating them around 60 to 70 on the scale.  Central Ohio residents didn’t have much of an accent, according to participants, with scores averaging 20 to 25.  Participants were less certain about northern Ohio residents, whose scores landed near the middle of the scale, around 50.

 

Results showed that Ohioans take until adulthood to fully absorb these beliefs about the differing accents in the state.  The 9-year-old participants didn’t show much differentiation in perceptions between north and south – it took until about age 25 before the beliefs leveled off.

 

But here is what most interested Campbell-Kibler about the results.

 

If a person reported that northern Ohioans in general had a strong, noticeable accent – say they rated them 90 on the scale of 0-100 – you would expect that when they heard a recording of a northern Ohioan speaking, they would rate it as very accented.

 

But that didn’t occur. People who thought northern Ohioans in general had a strong accent didn’t think the accent of the actual northern Ohio speaker they heard was more accented than those from other areas of the state.

 

The same was true for the other regions in Ohio that were rated.

 

“Just because people gave a high rating to the idea that people in southern Ohio have an accent, that doesn’t mean they are good at hearing how actual southern Ohioans pronounce vowels differently,” Campbell-Kibler said.
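The mismatch can be made concrete with a small sketch. All of the data below are invented for illustration and are not from the study: if participants’ beliefs about a region drove their ratings of speakers from that region, the two sets of scores would correlate, but the finding is that they essentially do not.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
n = 1106  # matches the study's sample size; the values do not

# Hypothetical data: each participant's 0-100 belief rating for
# "southern Ohio has an accent," and their 0-100 rating of an actual
# southern Ohio speaker they heard.  Drawing the two independently
# mimics the study's result: beliefs don't predict speaker ratings.
belief = [random.gauss(65, 15) for _ in range(n)]
speaker = [random.gauss(50, 15) for _ in range(n)]

r = pearson(belief, speaker)
print(f"r = {r:.3f}")  # near zero for independent draws
```

A correlation near zero here is the toy analogue of the paper’s finding; a strong belief-driven effect would instead push r well away from zero.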

 

So if people can’t pick up on accents from hearing them directly, then how do they learn about them?

 

“It is perplexing.  We don’t know the full answer for why this is,” Campbell-Kibler said.

 

But part of the answer may be that we learn about accents culturally, through other people, and through hearing people on TV shows and movies.

 

“We may hear friends say they have an aunt in Akron who talks funny or hear people on the TV or the movies from Alabama or Britain talk differently than we do,” Campbell-Kibler said.

 

“We may not be able to identify an accent – we just know something is there because friends are telling stories about it or we hear the characters on TV.

 

“There’s a lot more we need to learn about how accents are represented cognitively in our brains,” she said.

 

Innovative framework for quantifying direct typhoon impacts on vegetation



Journal of Remote Sensing

Image: Overall workflow of this study. Credit: Journal of Remote Sensing





A new study introduces an innovative framework that harnesses satellite observations and machine learning models to quantify the direct impacts of typhoons on vegetation canopy structure and photosynthesis. The research assesses both immediate damage and long-term recovery, providing crucial insights for coastal ecosystem management and disaster risk assessment amid climate change.

Coastal vegetation ecosystems, crucial for global carbon sequestration and biodiversity, face increasing typhoon threats as climate change alters their frequency, intensity, and landward movement. However, traditional methods for assessing typhoon damage—typically based on pre- and post-event satellite comparisons—often overlook natural plant life cycles and interannual environmental changes, leading to inaccurate assessments of both damage and recovery. This highlights the urgent need for more precise and comprehensive approaches to understanding the full scope of typhoon impacts on vegetation.

Published (DOI: 10.34133/remotesensing.0430) in Journal of Remote Sensing on February 6, 2025, the study by researchers from Peking University Shenzhen Graduate School and Boston University introduces a pioneering framework that combines satellite data with random forest models to accurately assess the immediate and long-term effects of typhoons on vegetation canopy structure and photosynthetic activity. By addressing the limitations of traditional methods, this approach provides a more reliable measure of typhoon-induced vegetation damage and recovery from both structural and functional perspectives.

Using satellite-observed leaf area index (LAI) and environmental data, researchers developed random forest models to simulate vegetation conditions in the absence of typhoons, providing a benchmark to assess the true extent of damage. By comparing simulated LAI data with observed values, the researchers assessed typhoon-induced canopy loss and tracked recovery over time. The study also integrates light use efficiency (LUE) models to assess the impact of typhoons on photosynthesis, offering deeper insights into the physiological consequences of typhoons on vegetation.

Applying this framework to three super typhoons—Nida, Hato, and Mangkhut—that traversed the Greater Bay Area, researchers quantified the extent of vegetation damage. Typhoon Nida affected 76.58% of vegetated areas, Hato impacted 61.25%, and Mangkhut caused the largest loss at 89.67%. Structural damage led to a sustained decline in carbon uptake, with direct cumulative photosynthetic losses of 0.36 Tg C for Nida, 0.22 Tg C for Hato, and 0.50 Tg C for Mangkhut.

The study also demonstrated the advantages of the new framework over traditional methods. For instance, conventional approaches mischaracterized Typhoon Hato’s effects, suggesting positive impacts on vegetation, while the new framework more effectively identified substantial damage.
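The counterfactual logic of the framework can be sketched in a few lines. This is an illustrative stand-in only: the study trains random forest models on satellite LAI and environmental drivers, whereas this toy uses the mean LAI of typhoon-free years as the "no-typhoon" baseline, and every number is invented.

```python
def counterfactual_lai(history):
    """Expected LAI per time step, averaged over typhoon-free years.
    (Stand-in for the study's random forest prediction.)"""
    return [sum(step) / len(step) for step in zip(*history)]

def damage_assessment(expected, observed, threshold=0.05):
    """Relative canopy loss per step, and the fraction of steps
    whose loss exceeds a damage threshold."""
    loss = [(e - o) / e for e, o in zip(expected, observed)]
    damaged = sum(1 for l in loss if l > threshold) / len(loss)
    return loss, damaged

# Three typhoon-free years of made-up LAI for one pixel, 4 time steps each.
history = [
    [3.0, 3.4, 3.6, 3.2],
    [2.9, 3.3, 3.7, 3.1],
    [3.1, 3.5, 3.5, 3.3],
]
# Observed LAI in the typhoon year: a canopy drop after landfall at step 3.
observed = [3.0, 3.4, 2.5, 2.8]

expected = counterfactual_lai(history)
loss, frac = damage_assessment(expected, observed)
print([round(l, 3) for l in loss], f"damaged fraction = {frac:.2f}")
```

Comparing observations against a modeled no-typhoon baseline, rather than against a pre-storm snapshot, is what lets the framework separate storm damage from ordinary seasonal decline.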

“This study offers a new approach to assessing typhoon impacts on vegetation,” said the lead researcher. “By disentangling the direct effects of typhoons from plants’ internal rhythms and environmental variations, we can better understand the true extent of damage and recovery, which is critical for effective ecosystem management and disaster risk reduction.”

This framework evaluates both the structural and functional dimensions of typhoon-induced vegetation damage and long-term recovery, providing more comprehensive and multidimensional decision support for vegetation management and post-disaster restoration. Its broad applicability enables further exploration of the quantitative relationships between typhoons, vegetation damage, and recovery, while also shedding light on the underlying mechanisms. This provides valuable scientific evidence for future post-disaster management strategies.

###

References

DOI

10.34133/remotesensing.0430

Original Source URL

https://doi.org/10.34133/remotesensing.0430

Funding information

This work was supported by the National Natural Science Foundation of China (42271104) and the Shenzhen Science and Technology Program (JCYJ20220531093201004 and KQTD20221101093604016).

About Journal of Remote Sensing

The Journal of Remote Sensing, an online-only Open Access journal published in association with AIR-CAS, promotes the theory, science, and technology of remote sensing, as well as interdisciplinary research within earth and information science.


UConn researchers unlock new potential porcine virus treatment



UConn researchers have advanced technology that could tackle Porcine reproductive and respiratory syndrome virus (PRRSV), a condition that costs the pork industry billions each year



University of Connecticut




UConn researchers have identified a novel small molecule for the development of preventative treatment for a serious and costly disease in pigs.

Porcine reproductive and respiratory syndrome virus (PRRSV) costs an estimated $1.2 billion annually in the U.S. In Europe, the estimated yearly loss is €1.5 billion. The virus causes respiratory disease in piglets, and miscarriages or stillbirths in sows.

There is currently no effective vaccine or treatment for PRRSV. Some scientists are working on genetically modified pigs to block viral infection, but this strategy will take decades to have a measurable impact.

Researchers from the College of Agriculture, Health and Natural Resources have identified a small molecule that can successfully disable the virus’ mechanisms for reproducing and evading the host organism’s immune system.

They published these findings in the Journal of Virology. Jiaqi Zhu ‘23 (CAHNR) is the first author on the paper. UConn collaborators include Xiuchun “Cindy” Tian, professor of animal science; Antonio Garmendia, professor of pathobiology and veterinary science; Neha Mishra, associate professor of pathobiology and veterinary science; and Kyle Hadden, professor of pharmaceutical science.

This work is a collaboration between UConn and Northwest A&F University in China, where Young Tang, a former UConn associate professor, is now a faculty member.

The researchers began this work by using artificial intelligence to screen a bank of small molecules to identify which ones might be good candidates. The algorithm compared the structure of the viral protein the researchers wanted to target against those of the small molecules.

They then narrowed their results down to a single chemical that could inhibit the virus without producing toxic effects.

The researchers targeted a protein called NendoU. This protein is highly conserved, meaning that when the virus mutates, this protein will likely stay the same because it plays such an essential role in the virus’ ability to reproduce.

The researchers found that cells treated with the small molecule contained more than 1,000 times fewer viral particles than cells in the untreated control group.

“Basically, the virus comes into the untreated cell and uses the cell’s machinery to amplify and create more viruses,” Tian says. “So, if you treat the cells with this particular chemical, compared to untreated cells, it’s going to reduce it by 1,000 times in terms of viral number.”

NendoU is also common across other closely related viruses.

“We were thinking this [chemical] could also work on other viruses in this order,” Zhu says. “So, we tested it on another virus called chicken infectious bronchitis virus and it also worked very well.”

SARS-CoV-2, the virus that causes COVID-19, belongs to the same viral order as PRRSV. This means that even though PRRSV is not a risk to human health, this research could have applications for human antiviral drug development.

These findings build on previous work from this group in which, in collaboration with the technology-enabled pharmaceutical company Atomwise Inc., they identified a different chemical that disrupts the virus’ ability to enter the host cell.

“By shutting the door for viral entry and inhibiting those that are already in the cells, we could combine these two small molecules in the future, and potentially have a stronger, and synergistic effect on disease control,” says Tian.

The researchers are working with UConn’s Technology Commercialization Services (TCS) to advance the development and commercialization of this technology. Engaging with TCS early on, they protected their intellectual property and developed a strategic commercialization plan. As part of these efforts, TCS facilitated one-on-one meetings with five of the world’s ten largest animal healthcare companies, along with multiple other organizations interested in the technology.

“We have received amazing interest from industry, and the feedback has been extremely helpful, setting up the development path of the technology,” says Ana Fidantsef, industry liaison with TCS. “We hope these interactions will lead to collaborations that will immensely help the swine market and industry.”

 

Alkalinity on demand: innovative tech for instant water quality analysis





Nanjing Institute of Environmental Sciences, MEE

Image: AI-Powered Alkalinity Analysis with Smartphones: Precision Meets Accessibility. A novel machine learning approach accurately predicts water alkalinity using smartphone-captured color changes induced by low-cost reagents. The technique demonstrates strong performance across freshwater and saltwater samples, with R² values as high as 0.945, revolutionizing affordable water quality monitoring for global applications. Credit: Eco-Environment & Health




Scientists have developed a technique for water alkalinity analysis that requires no specialized equipment, using only artificial intelligence and smartphone technology. This method allows for the rapid and accurate measurement of alkalinity levels across diverse water matrices, from freshwater to saltwater, making water quality monitoring more accessible and affordable. This innovation addresses the need for simple and cost-effective water testing, empowering citizen scientists and overcoming financial limitations in traditional monitoring programs.

Alkalinity is a crucial indicator of water quality, influencing everything from aquatic ecosystems to industrial processes like water treatment and carbon cycling. However, existing methods to measure alkalinity are often complex, costly, and require specialized equipment, limiting their widespread use. These challenges have highlighted the need for a simpler, more affordable solution. Such a solution could enable broader access to critical water data, improving water quality assessments across diverse environments, from remote communities to urban centers.

In a major leap forward for environmental science, researchers from Case Western Reserve University and Cornell University have introduced an innovative method for analyzing water alkalinity. Published (DOI: 10.1016/j.eehl.2024.10.002) in the journal Eco-Environment & Health on 14 November 2024, their study reveals a new approach that combines low-cost commercial reagents with machine learning to accurately determine alkalinity levels in water samples—without the need for complex lab equipment.

The researchers' method uses affordable reagents that change color in response to shifts in alkalinity. These color changes are then captured via smartphone cameras, with images processed by sophisticated machine learning models. The AI algorithms correlate the intensity of the color shift with alkalinity levels, achieving an impressive degree of accuracy—R² values of 0.868 for freshwater and 0.978 for saltwater samples. The technique’s precision is further underscored by its low root-mean-square-error values. With no specialized equipment required, this breakthrough method could revolutionize water quality testing, particularly in regions with limited resources or in situations where traditional equipment is impractical.

Dr. Huichun Zhang, the study's lead author and a prominent figure in environmental engineering, shared his excitement about the technology's potential. "This AI-powered approach marks a significant milestone in water quality monitoring. It challenges the trend of ever-more complex and costly analysis techniques, offering a foundation for similar advancements in other water quality parameters," Zhang said.

The implications of this research are far-reaching. The technique offers an affordable, scalable solution for gathering water quality data, enabling citizen scientists, researchers, and even regulatory agencies to monitor water quality more efficiently. It promises to break down financial barriers, democratizing access to critical environmental data, especially in underserved communities. Moreover, widespread adoption of this technology could contribute to more robust predictive models, enhancing water management practices, agricultural decision-making, and efforts to combat pollution.

###

References

DOI

10.1016/j.eehl.2024.10.002

Original Source URL

https://doi.org/10.1016/j.eehl.2024.10.002

Funding information

This work was funded by the Ohio Department of Higher Education – Harmful Algal Bloom Research Initiative.

About Eco-Environment & Health (EEH)

Eco-Environment & Health (EEH) is an international and multidisciplinary peer-reviewed journal designed for publications on the frontiers of ecology, environment, and health, as well as their related disciplines. EEH focuses on the concept of “One Health” to promote green and sustainable development, dealing with the interactions among ecology, environment, and health, and the underlying mechanisms and interventions. Our mission is to be one of the most important flagship journals in the field of environmental health.

 

Smart cities get smarter: AI-powered material detection for sustainable urban planning





Chinese Society for Environmental Sciences

Image: Illustration of geospatial and visual data fusion for building material classification. The figure integrates geospatial with visual data to classify building materials effectively. a, An aerial view of the targeted building outlined by a white dashed rectangle, providing context within its surrounding environment. b, The roof of the building, utilizing satellite imagery to analyze roofing materials. c–d, Front (c) and side (d) perspectives of the building’s façade, as captured by Google Street View for classifying wall materials. These images, retrieved using precise geographic coordinates from OpenStreetMap, ensure accurate alignment and are instrumental in comprehensively analyzing the building materials. Credit: Environmental Science and Ecotechnology





A new study has unveiled a cutting-edge framework that harnesses deep learning and remote sensing to identify building materials with remarkable accuracy. This innovative approach enables the development of high-resolution material intensity databases—an essential tool for sustainable urban planning and strategic building retrofits. By systematically classifying materials used in existing buildings, the framework facilitates efforts to reduce embodied carbon, enhance energy efficiency, and promote circularity in urban environments. Scalable and adaptable, this technology represents a significant leap forward in decarbonizing the built environment and driving the transition toward sustainable cities.

The construction sector is a major driver of global carbon emissions, with buildings alone responsible for nearly one-third of worldwide energy-related CO₂ emissions. However, existing methods for assessing building materials are often constrained by limited geographic scope, poor scalability, and insufficient accuracy. Conventional databases struggle to provide comprehensive material intensity assessments, especially across diverse urban landscapes. These challenges underscore the urgent need for innovative, data-driven solutions that can deliver precise and actionable insights at scale.

A collaborative research initiative led by Peking University and the University of Southern Denmark has risen to this challenge. The team has developed an advanced framework that integrates deep learning with remote sensing to identify building materials with unprecedented precision. Their findings (DOI: 10.1016/j.ese.2025.100538), published on February 3, 2025, in Environmental Science and Ecotechnology, showcase the potential of this technology in creating customized material intensity databases tailored to different urban regions, paving the way for more sustainable and efficient city planning.

The study employs a fusion of Google Street View imagery, satellite data, and OpenStreetMap geospatial information to classify building materials with high accuracy. By leveraging Convolutional Neural Networks (CNNs), the researchers trained models capable of identifying roof and façade materials with exceptional detail. The models were first trained using extensive datasets from Odense, Denmark, before being successfully validated in major Danish cities such as Copenhagen, Aarhus, and Aalborg. The validation process confirmed the framework’s robustness, demonstrating its ability to generalize across diverse urban settings and reinforcing its scalability.

A key innovation of the study is its use of advanced visualization techniques, including Gradient-weighted Class Activation Mapping (Grad-CAM), which offers a window into how the AI models interpret imagery. By revealing which parts of an image most influence classification decisions, this technique enhances model transparency and reliability. Additionally, the researchers developed material intensity coefficients to quantify the environmental impact of different building materials. By combining high-resolution imagery with deep learning, this framework overcomes longstanding limitations in material data availability and accuracy, providing a powerful tool for sustainable urban development.

Highlights

• A scalable framework supports the creation of customized material intensity databases for diverse regions, facilitating sustainable urban planning and retrofits.

• Deep learning enables precise identification of building materials using remote sensing and street view data.

• Visualizations of model predictions enhance interpretability and reveal decision-making processes.

• Accurate material assessments inform targeted building upgrades for improved energy efficiency.

Prof. Gang Liu, the principal investigator of this project, highlighted the transformative potential of the technology: "Our study demonstrates how deep learning and remote sensing can fundamentally change the way we analyze and manage urban building materials. With precise material intensity data, we can drive more sustainable urban planning and targeted retrofitting, contributing directly to global carbon reduction efforts."

The implications of this breakthrough extend far beyond academic research. By enabling cities to accurately identify and map building materials, this framework equips urban planners with critical data for energy efficiency strategies, carbon reduction policies, and circular economy initiatives. Its scalability ensures that the approach can be adapted to different urban environments, making it a game-changer for sustainable city planning worldwide.

###

References

DOI

10.1016/j.ese.2025.100538

Original Source URL

https://doi.org/10.1016/j.ese.2025.100538

Funding information

This work is financially supported by the National Natural Science Foundation of China (71991484, 71991480), the Fundamental Research Funds for the Central Universities of Peking University, the Independent Research Fund Denmark (iBuildGreen), the European Union under grant agreement No. 101056810 (CircEUlar), and the China Scholarship Council (202006730004 and 202107940001).

About Environmental Science and Ecotechnology

Environmental Science and Ecotechnology (ISSN 2666-4984) is an international, peer-reviewed, and open-access journal published by Elsevier. The journal publishes significant views and research across the full spectrum of ecology and environmental sciences, such as climate change, sustainability, biodiversity conservation, environment & health, green catalysis/processing for pollution control, and AI-driven environmental engineering. The latest impact factor of ESE is 14, according to the Journal Citation Reports™ 2024.