Friday, May 16, 2025

 

The key to spotting dyslexia early could be AI-powered handwriting analysis



AI shows promise in detecting dyslexia and dysgraphia from what children write on paper and tablets, a new University at Buffalo-led study suggests



University at Buffalo




BUFFALO, N.Y. – A new University at Buffalo-led study outlines how artificial intelligence-powered handwriting analysis may serve as an early detection tool for dyslexia and dysgraphia among young children.

The work, published in the journal SN Computer Science, aims to augment current screening tools, which are effective but can be costly and time-consuming, and typically focus on only one condition at a time.

It could eventually be a salve for the nationwide shortage of speech-language pathologists and occupational therapists, who each play a key role in diagnosing dyslexia and dysgraphia.

“Catching these neurodevelopmental disorders early is critically important to ensuring that children receive the help they need before it negatively impacts their learning and socio-emotional development. Our ultimate goal is to streamline and improve early screening for dyslexia and dysgraphia, and make these tools more widely available, especially in underserved areas,”  says the study’s corresponding author Venu Govindaraju, PhD, SUNY Distinguished Professor in the Department of Computer Science and Engineering at UB.

The work is part of the National AI Institute for Exceptional Education, which is a UB-led research organization that develops AI systems that identify and assist young children with speech and language processing disorders.

Builds upon previous handwriting recognition work

Decades ago, Govindaraju and colleagues did groundbreaking work employing machine learning, natural language processing and other forms of AI to analyze handwriting, an advancement the U.S. Postal Service and other organizations still use to automate the sorting of mail.

The new study proposes a similar framework and methodologies to identify spelling issues, poor letter formation, writing organization problems and other indicators of dyslexia and dysgraphia.

It aims to build upon prior research, which has focused more on using AI to detect dysgraphia (the less common of the two conditions) because it causes physical differences that are easily observable in a child’s handwriting. Dyslexia is more difficult to spot this way because it primarily affects reading and speech, though certain behaviors, such as spelling, offer clues.

The study also notes a shortage of children’s handwriting samples with which to train AI models.

Collecting samples from K-5 students

To address these challenges, a team of UB computer scientists led by Govindaraju gathered insight from teachers, speech-language pathologists and occupational therapists to help ensure the AI models they’re developing are viable in the classroom and other settings.

“It is critically important to examine these issues, and build AI-enhanced tools, from the end users’ standpoint,” says study co-author Sahana Rangasrinivasan, a PhD student in UB’s Department of Computer Science and Engineering.

The team also partnered with study co-author Abbie Olszewski, PhD, associate professor in literacy studies at the University of Nevada, Reno, who co-developed the Dysgraphia and Dyslexia Behavioral Indicator Checklist (DDBIC) to identify symptoms overlapping between dyslexia and dysgraphia.

The team collected paper and tablet writing samples from kindergarten through 5th grade students at an elementary school in Reno. This part of the study was approved by an ethics board, and the data was anonymized to protect student privacy.

They will use this data to further validate the DDBIC tool, which focuses on 17 behavioral cues that occur before, during and after writing; train AI models to complete the DDBIC screening process; and compare the models’ effectiveness with that of people administering the test.

Work emphasizes AI for public good

The study describes how the team’s models can be used to:

  • Detect motor difficulties by analyzing writing speed, pressure and pen movements.
  • Examine visual aspects of handwriting, including letter size and spacing.
  • Convert handwriting to text, spotting misspellings, letter reversals and other errors.
  • Identify deeper cognitive issues based on grammar, vocabulary and other factors.

Finally, it discusses a tool that combines all these models, summarizes their findings, and provides a comprehensive assessment.
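
The paper itself is not accompanied by code here, but a minimal Python sketch suggests how the outputs of such a multi-model pipeline might be fused into a single screening summary. All names, scores, weights and thresholds below are hypothetical illustrations, not the UB team’s implementation:

```python
from dataclasses import dataclass

@dataclass
class ModelScores:
    """Hypothetical outputs of the four analysis models for one writing
    sample, each scaled to [0, 1], where higher means more atypical."""
    motor: float      # writing speed, pressure, pen movements
    visual: float     # letter size and spacing
    spelling: float   # misspellings, letter reversals from handwriting-to-text
    cognitive: float  # grammar, vocabulary and other language factors

def combined_assessment(s: ModelScores, threshold: float = 0.5) -> dict:
    """Fuse per-model scores into one screening summary.

    A deployed system would learn fusion weights from labeled data;
    the equal weighting here is purely illustrative.
    """
    composite = (s.motor + s.visual + s.spelling + s.cognitive) / 4
    return {
        "composite_score": round(composite, 3),
        "flag_for_screening": composite >= threshold,
        "per_model": vars(s),
    }

sample = ModelScores(motor=0.7, visual=0.4, spelling=0.8, cognitive=0.3)
print(combined_assessment(sample))
```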

“This work, which is ongoing, shows how AI can be used for the public good, providing tools and services to people who need it most,” says study co-author Sumi Suresh, PhD, a visiting scholar at UB.

Additional co-authors include Bharat Jayaraman, PhD, director of the Amrita Institute of Advanced Research and professor emeritus in the UB Department of Computer Science and Engineering; and Srirangaraj Setlur, principal research scientist at the UB Center for Unified Biometrics and Sensors.

 

Artificial intelligence and genetics can help farmers grow corn with less fertilizer



Novel process harnesses machine learning to reveal groups of genes that determine how efficiently plants use nitrogen



New York University

Image: Corn growing in the Irene Rose Sohn Zegar Memorial Greenhouse on the top floor of NYU’s Center for Genomics and Systems Biology. (Credit: Tracey Friedman/NYU)





New York University scientists are using artificial intelligence to determine which genes collectively govern nitrogen use efficiency in plants such as corn, with the goal of helping farmers improve their crop yields and minimize the cost of nitrogen fertilizers.

“By identifying genes-of-importance to nitrogen utilization, we can select for or even modify certain genes to enhance nitrogen use efficiency in major US crops like corn,” said Gloria Coruzzi, the Carroll & Milton Petrie Professor in NYU’s Department of Biology and Center for Genomics and Systems Biology and the senior author of the study, which appears in the journal The Plant Cell.

In the last 50 years, farmers have been able to grow larger crop yields thanks to major improvements in plant breeding and fertilizers, including improvements in how efficiently crops take up and use nitrogen, the key component of fertilizers.

Still, most crops only use roughly 55 percent of the nitrogen in fertilizer that farmers apply to their fields, while the remainder ends up in the surrounding soil. When nitrogen seeps into groundwater, it can contaminate drinking water and cause harmful algae blooms in lakes, rivers, reservoirs, and warm ocean waters. Furthermore, the unused nitrogen that remains in the soil is converted by bacteria into nitrous oxide, a potent greenhouse gas that is 265 times more effective at trapping heat over a 100-year period than is carbon dioxide.

The United States is the world’s leading producer of corn. This major cash crop requires large amounts of nitrogen to grow, but much of the fertilizer fed to corn is not taken up or used. Corn’s low nitrogen use efficiency presents a financial challenge for farmers, given the increasing costs of fertilizer—the majority of which is imported—and also risks harming the soil, water, air, and climate.

To address this challenge in corn and other crops, NYU researchers have developed a novel process to improve nitrogen use efficiency that integrates plant genetics with machine learning, a type of artificial intelligence that detects patterns in data—in this case, to associate genes with a trait (nitrogen use efficiency).

Using a model-to-crop approach, NYU researchers tracked the evolutionary history of corn genes that are shared with Arabidopsis, a small flowering weed often used as a model organism in plant biology because it is easy to study in the lab with molecular genetic approaches. In a previous study published in Nature Communications, Coruzzi’s team identified genes whose responsiveness to nitrogen was conserved between corn and Arabidopsis and validated their role in plants.

In The Plant Cell study, their most recent on this topic, the NYU researchers built upon their work in corn and Arabidopsis to identify how nitrogen use efficiency is governed by groups of genes—also known as “regulons”—that are activated or repressed by the same transcription factor (a regulatory protein).

“Traits like nitrogen use efficiency or photosynthesis are never controlled by one single gene. The beauty of the machine learning process is it learns sets of genes that are collectively responsible for a trait, and can also identify the transcription factor or factors that control these sets of genes,” said Coruzzi.

The researchers first used RNA sequencing to measure how genes in corn and Arabidopsis respond to nitrogen treatment. Using these data, they trained machine learning models to identify nitrogen-responsive genes conserved across corn and Arabidopsis varieties, as well as the transcription factors that regulate the genes-of-importance to nitrogen use efficiency (NUE). For each “NUE Regulon”—the transcription factor and corresponding set of regulated NUE genes—the researchers calculated a collective machine learning score and then ranked the top performers based on how well the combined expression levels could accurately predict how efficiently nitrogen is used in field-grown varieties of corn.
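
In code, the regulon-ranking step might look something like the sketch below, which uses synthetic data and placeholder gene sets rather than the study’s actual data, models, or pipeline. For each candidate transcription factor, the combined expression of its putative target genes is scored by how well it predicts the measured trait under cross-validation:

```python
# Illustrative sketch of ranking candidate "NUE Regulons": for each
# transcription factor (TF) and its putative target genes, score how well
# the targets' combined expression predicts a measured nitrogen use
# efficiency (NUE) trait. All data and gene sets here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_varieties, n_genes = 60, 200
expression = rng.normal(size=(n_varieties, n_genes))  # toy RNA-seq expression matrix
# Toy trait: driven by genes 0-4, so TF_A's regulon should rank first
nue = expression[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_varieties)

# Candidate regulons: transcription factor -> indices of putative target genes
regulons = {"TF_A": [0, 1, 2, 3, 4], "TF_B": [10, 11, 12], "TF_C": list(range(50, 60))}

scores = {}
for tf, targets in regulons.items():
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    # Collective score: cross-validated R^2 of predicting NUE from target expression
    scores[tf] = cross_val_score(model, expression[:, targets], nue,
                                 cv=5, scoring="r2").mean()

for tf, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{tf}: mean CV R^2 = {score:.2f}")
```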

For the top-ranked NUE Regulons, the researchers used cell-based studies in both corn and Arabidopsis to validate the machine learning predictions for the set of genes in the genome that are regulated by each transcription factor. These experiments confirmed NUE Regulons for two corn transcription factors (ZmMYB34/R3) that regulate 24 genes controlling nitrogen use as well as for a closely related transcription factor in Arabidopsis (AtDIV1), which regulates 23 target genes sharing a genetic history with corn that also control nitrogen use. When fed back into the machine learning models, these model-to-crop conserved NUE Regulons significantly enhanced the ability of AI to predict nitrogen use efficiency across field-grown corn varieties.

Identifying NUE Regulons of collective genes and related transcription factors that govern nitrogen use will enable crop scientists to breed or engineer corn that needs less fertilizer. 

“By looking at corn hybrids at the seedling stage to see if expression of the identified genes-of-importance to nitrogen use efficiency is high, rather than planting them in the field and measuring their nitrogen use, we can use molecular markers to select the hybrids at the seedling stage that are most efficient in nitrogen use, and then plant those varieties,” said Coruzzi. “This will not only result in a cost savings for farmers, but also reduce the harmful effects of nitrogen pollution of groundwaters and nitrous oxide greenhouse gas emissions.”

New York University has filed a patent application covering the research and findings described in this paper. Additional study authors include Ji Huang, Tim Jeffers, Nathan Doner, Hung-Jui Shih, Samantha Frangos, and Manpreet Singh Katari of NYU; Chia-Yi Cheng of NYU and National Taiwan University; and Matthew Brooks of the US Department of Agriculture's Agricultural Research Service. The research was supported by the National Science Foundation Plant Genome Research Program (IOS-1339362) and the National Institutes of Health (R01-GM121753, F32GM116347).

The study appears in a special focus issue of The Plant Cell, “Translational Research from Arabidopsis to Crop Plants and Beyond,” which recognizes the 25th anniversary of the publication of the Arabidopsis genome sequence. DOI: 10.1093/plcell/koaf093


Image: NYU researchers—including postdoctoral associate Tim Jeffers (left), PhD student Amari Hill (center), and principal investigator Gloria Coruzzi (right)—are using plant genetics and artificial intelligence to study nitrogen use efficiency in corn growing in the NYU Irene Rose Sohn Zegar Memorial Greenhouse. (Credit: Tracey Friedman/NYU)

 

Stranger Things: How Netflix teaches economics



From cartels to creative destruction, UBC Okanagan professor helping students learn through pop culture



University of British Columbia Okanagan campus





A new study co-authored by UBC Okanagan’s Dr. Julien Picault shows how scenes from hit shows like Narcos and Stranger Things can help students grasp complex economic concepts—from cartels and market control to creative destruction and inflation.

Published in The Journal of Economic Education, the paper “Teaching economics with Netflix” explores how carefully selected Netflix content can help undergraduate students engage with economics in a more meaningful, accessible way.

“Students are already watching this content,” says Dr. Picault, Professor of Teaching in the Department of Economics, Philosophy and Political Science. “Our goal is to meet them where they are and use culturally relevant media to explain fundamental concepts like opportunity cost, supply and demand, or moral hazard.”

The paper introduces EcoNetflix, a free online resource Dr. Picault and collaborators at Marymount University created.

The site features teaching guides built around diverse clips from Netflix original shows, films, and documentaries from around the world, with clear connections to both introductory and advanced economics concepts.

Stranger Things and smartphones: A lesson in creative destruction

For example, in the popular sci-fi series Stranger Things, set in the 1980s, the characters use walkie-talkies, phone booths and cassette players. Today, a single smartphone replaces all these tools.

This shift illustrates creative destruction, where new technology makes old products obsolete. It also raises questions about cost: would buying each of those devices separately be more expensive than owning a smartphone?

And how do new products like smartphones affect how we measure inflation through the Consumer Price Index?
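
A toy calculation (all prices invented) makes the measurement problem concrete: a fixed basket keeps pricing the devices a smartphone has replaced, while a substitution-aware index does not:

```python
# Toy CPI arithmetic (all prices invented) illustrating the substitution
# problem creative destruction poses: a fixed 1980s basket keeps pricing
# devices a smartphone has replaced.
basket_1985 = {"walkie_talkie": 40, "cassette_player": 60, "camera": 120, "phone_service": 30}
base_cost = sum(basket_1985.values())  # 250

# Years later, legacy devices are scarce and pricey, but one smartphone
# replaces the entire basket.
basket_now_legacy = {"walkie_talkie": 55, "cassette_player": 150, "camera": 160, "phone_service": 45}
fixed_basket_index = 100 * sum(basket_now_legacy.values()) / base_cost

smartphone_cost = 300
substituted_index = 100 * smartphone_cost / base_cost

print(f"Fixed-basket index: {fixed_basket_index:.0f}")  # 164: overstates living costs
print(f"Substituted index:  {substituted_index:.0f}")   # 120: reflects the new product
```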

Narcos and cartels: Teaching market control and oligopoly

In the crime drama Narcos, based on the true story of Colombian drug lord Pablo Escobar, one scene shows Escobar meeting with rival kingpins to propose a formal alliance. He offers to manage operations while the others contribute funding in exchange for shared profits and protection.

This collusive behaviour is known as forming a cartel—an agreement among producers to avoid competition and control prices or territory. It reflects how firms and organizations operating in an oligopoly or a fragmented market may begin cooperating when powerful players see cooperation as more profitable than conflict.

By dividing the market, they reduce risk, stabilize earnings and limit outside threats—even if the arrangement is illegal or unsustainable long-term.
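
The incentive structure behind cartel formation, and its instability, can be sketched as a standard two-firm game; the payoffs below are invented for illustration:

```python
# Toy two-firm collusion game (payoffs invented): each firm either honors
# the cartel quota ("collude") or secretly expands output ("cheat").
payoffs = {  # (firm1_action, firm2_action) -> (firm1_profit, firm2_profit)
    ("collude", "collude"): (10, 10),  # cartel holds: restricted output, high prices
    ("collude", "cheat"):   (2, 14),   # cheater grabs market share
    ("cheat",   "collude"): (14, 2),
    ("cheat",   "cheat"):   (5, 5),    # price war: competitive outcome
}

# Whatever firm 2 does, firm 1 earns more by cheating (14 > 10, 5 > 2),
# which is why cartels are profitable collectively but unstable individually.
for f2 in ("collude", "cheat"):
    best = max(("collude", "cheat"), key=lambda f1: payoffs[(f1, f2)][0])
    print(f"If firm 2 plays {f2!r}, firm 1's best response is {best!r}")
```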

Why it works: Connecting real life to economic theory

The material reflects various cultural, geographic and social perspectives, aligning with efforts to make economics education more inclusive.

“It’s not just about being entertaining,” says Dr. Picault. “We want to improve learning outcomes and show how economics applies to the world students already navigate.”

The paper also argues that platforms like Netflix, with global reach and diverse catalogues, offer a rich foundation for building more inclusive economics lessons.

Dr. Picault’s recent work builds on earlier studies and teaching guides he’s authored on using pop culture to teach economics.

 

Q&A: UW researcher discusses the “cruel optimism” of tech industry layoffs




University of Washington





In 2022, after decades of booming growth, technology companies in the United States began to lay off droves of employees. The announcements — which continued in 2023 and 2024, spanning from major corporations to startups — made constant headlines: Meta dropped 11,000 employees, 13% of its staff; Microsoft cut 10,000; Amazon, 27,000. In all, between 2022 and 2024, more than 500,000 tech workers were laid off. Smaller cuts have continued; this week, Microsoft cut more than 6,800 employees globally, including nearly 2,000 in Washington.

In 2023, University of Washington researchers recruited a group of 29 laid-off U.S. tech workers to explore the cuts’ effects on employees. Over five weeks, participants reflected on topics like job searching and the potential for workplace organizing. They shared their answers and responded to each other in a private Slack group. Overall, the group was ambivalent about tech work. They said the work was often unfulfilling, despite their plans to continue in the industry.

The researchers presented their paper April 30 at the ACM CHI Conference on Human Factors in Computing Systems in Yokohama, Japan.

UW News spoke with lead author Samuel So, a UW doctoral student in human centered design and engineering, about shifting views of the tech industry, the potential for workplace organizing and why workers find themselves in a state of “cruel optimism” with the industry.

Can you give some context around the layoffs? Why was it such a big shock that the tech industry was laying people off like this?

Samuel So: Overall, the layoffs came as a shock because the tech industry has been thought of as layoff-proof for the past 20 years. At least since the dot-com crash of 2001, there hasn’t been precedent for mass tech industry layoffs. As a result, many tech workers reckoned with the possibility for the first time, and many of the layoffs were unceremonious — people learned about the cuts when their access to work accounts was revoked, or through impersonal email announcements.

Companies generally cited macroeconomic factors when they announced layoffs. This included high interest rates, industry-wide revenue losses and over-hiring during the pandemic.

But what’s interesting is that these companies were announcing layoffs in rapid succession. Some appeared to be performing well, even achieving record profits, yet still staged subsequent rounds of layoffs. So it’s also worth understanding that layoffs can boost stock performance, and companies were simply copying each other because they could. Mass layoffs may have previously been considered taboo in the tech industry, but once big tech companies were doing it, it became more acceptable for other companies to follow suit. Some also speculated the layoffs were intended to reset labor relations in favor of employers, because tech workers were previously able to command high salaries.

I do want to note that, because the study didn't engage with executives or company leadership, my understanding is primarily drawn from news articles, public speculation and the working theories of the participants in our study.

What made you want to study this?

SS: I’m broadly interested in the values and beliefs surrounding technologies and in studying the rhetoric of tech companies. I also had a personal interest. In Seattle, we’re surrounded by the tech industry, and I was curious about how these mass layoffs would potentially impact not just the tech industry, but the cultures, neighborhoods, and cities that were largely developed by these tech companies.

I also went to a public STEM high school and majored in computer science in my undergraduate degree. I received a very clear message that, to many people, a tech job signified upward mobility, work-life balance and job stability. My high school was largely made up of low-income immigrant families, and tech jobs practically signified the American dream. So I was curious about how layoffs might have affected or shaped people's beliefs around the tech industry and what that signals for the future of the industry.

What do you think distinguished these from layoffs in other industries?

SS: The idea that the tech industry is layoff-proof was a factor. Another aspect is the rollout of high-profile generative artificial intelligence technologies. Some tech conglomerates were announcing billion-dollar investments in AI around the time of mass layoffs. This contributed to internal conflicts and alienation that many laid-off workers experienced, especially those who felt companies would chase technology trends at the expense of their workers' well-being.

Several workers in the study felt this was a culmination of their disillusionment with major tech companies. In many ways this seems to track with how the broader culture has come to view these companies. What do you make of this shift in perception?

SS: Some participants in our study likened the tech industry to a cult, saying it has these cults of personality and passion around leadership principles and company values that are almost treated as scripture. So some of the romantic or utopic sentiments around tech companies are now being actively challenged by tech workers. This is not particularly new — tech workers have been voicing their concerns and discontent over the past decade. We’ve seen the rise of collective organizing and worker-led campaigns. But I think the mass layoffs took this discontent to unprecedented levels.

Even so, most tech workers in our study planned on staying in the industry. This raises an interesting tension: What might it mean for the tech workforce to be disillusioned with the beliefs that drove the industry for so long? For example, some participants talked about entering the tech industry with goals of changing humanity or working on projects with broad societal impacts, only to be disappointed when their work amounted to just moving pixels around.

Your paper centers on the theory of “cruel optimism.” Can you explain what that is and why it applies to these workers’ experiences?

SS: Cruel optimism describes a relation in which something you desire is actually detrimental to your well-being. The cultural theorist Lauren Berlant coined this concept to describe how people might remain attached to ideas of the "good life" because it promises a desirable outcome. But the pursuit of this good life can lead people to work through precarious or uncertain conditions that put them at risk.

In our case, cruel optimism helps us understand why tech workers remain in an industry that is actively contributing to their unfulfillment and discontent. Berlant raises interesting points about how, when ideas of the good life are threatened, people will hold onto those ideas as much as possible, as it feels like a necessary way of being in the world. We can see this in how tech workers cling to certain ideas of what a good life looks like in the tech industry, even while they are explicitly criticizing the tech industry and its leadership.

What are some potential ways for the tech industry to move beyond its current state?

SS: Mass layoffs are not inevitable. They weren't common in the U.S. until the 1980s, and there are historical examples from the early 2000s of tech workers successfully pushing back and contesting layoff decisions.

We found workers managed their feelings of discontent through individual adjustments. For example, some accepted that work is just work, and moving forward, they planned to act in their best interests and not in those of the company. While there is value in that shift, it also risks having workers isolate themselves in dealing with these problems or resigning themselves to the way things currently are.

Our paper argues that these feelings of discontent can be redirected toward collective action or organizing. The tech workers in our study had an appetite for resistance or organizing, but they felt powerless in pushing back. This makes sense, since the tech industry is largely anti-union by design. Founders of early tech companies said that unions were antithetical to innovation.

But fostering open spaces for collective reimagining of the industry can take many forms. Existing organizing groups like Tech Workers Coalition operate across different companies and physical locations. Some workers in our study were talking about these issues with other tech workers for the first time. Simply sharing grievances and expressing discontent with trusted coworkers is a form of organizing.

Vannary Sou, a UW undergraduate in the Information School, was a co-author. Sucheta Ghoshal, a UW assistant professor of human centered design and engineering, and Sean A. Munson, a UW professor of human centered design and engineering, are senior authors. This research was funded by the National Science Foundation.

For more information, contact So at samuelso@uw.edu.

 

How we think about protecting data



A new study shows public views on data privacy vary according to how the data are used, who benefits, and other conditions.




Massachusetts Institute of Technology




How should personal data be protected? What are the best uses of it? In our networked world, questions about data privacy are ubiquitous and matter for companies, policymakers, and the public. 

A new study by MIT researchers adds depth to the subject by suggesting that people’s views about privacy are not firmly fixed and can shift significantly, based on different circumstances and different uses of data.

“There is no absolute value in privacy,” says Fabio Duarte, principal research scientist in MIT’s Senseable City Lab and co-author of a new paper outlining the results. “Depending on the application, people might feel use of their data is more or less invasive.”

The study is based on an experiment the researchers conducted in multiple countries using a newly developed game that elicits public valuations of data privacy relating to different topics and domains of life. 

“We show that values attributed to data are combinatorial, situational, transactional, and contextual,” the researchers write. 

The paper, “Data Slots: tradeoffs between privacy concerns and benefits of data-driven solutions,” is published in Humanities and Social Sciences Communications, a Nature Portfolio journal. The authors are Martina Mazzarello, a postdoc in the Senseable City Lab; Duarte; Simone Mora, a research scientist at Senseable City Lab; Cate Heine PhD ’24 of University College London; and Carlo Ratti, director of the Senseable City Lab.

The study is based around a card game with poker-type chips the researchers created to study the issue, called Data Slots. In it, players hold hands of cards with 12 types of data — such as a personal profile, health data, vehicle location information, and more — that relate to three types of domains where data are collected: home life, work, and public spaces. After exchanging cards, the players generate ideas for data uses, then assess and invest in some of those concepts. The game has been played in-person in 18 different countries, with people from another 74 countries playing it online; over 2,000 individual player-rounds were included in the study. 

The point behind the game is to examine the valuations that members of the public themselves generate about data privacy. Some research on the subject involves surveys with pre-set options that respondents choose from. But in Data Slots, the players themselves generate valuations for a wide range of data-use scenarios, allowing the researchers to estimate the relative weight people place on privacy in different situations. 

The idea is “to let people themselves come up with their own ideas and assess the benefits and privacy concerns of their peers’ ideas, in a participatory way,” Ratti explains.

The game strongly suggests that people’s ideas about data privacy are malleable, although the results do indicate some tendencies. The data privacy card whose use players most highly valued was for personal mobility; given the opportunity in the game to keep it or exchange it, players retained it in their hands 43 percent of the time, an indicator of its value. That was followed, in order, by personal health data and utility use. (With apologies to pet owners, the type of data privacy card players held on to the least, about 10 percent of the time, involved animal health.)
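
The retention statistic itself is simple to compute; the sketch below uses made-up records rather than the study’s 2,000-plus player-rounds:

```python
# Sketch of the retention statistic reported above, on made-up records:
# each record logs a card type a player held and whether it was kept
# rather than exchanged by the end of the round.
from collections import Counter

rounds = [
    ("personal_mobility", True), ("personal_mobility", True), ("personal_mobility", False),
    ("personal_health", True), ("personal_health", False),
    ("animal_health", False), ("animal_health", False), ("animal_health", True),
]

held = Counter(card for card, _ in rounds)
kept = Counter(card for card, retained in rounds if retained)

for card in held:
    rate = kept[card] / held[card]
    print(f"{card}: retained {rate:.0%} of the time")  # higher retention ~ higher privacy value
```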

However, the game distinctly suggests that the value of privacy is highly contingent on specific use-cases. The game shows that people care about health data to a substantial extent but also value the use of environmental data in the workplace, for instance. And the players of Data Slots also seem less concerned about data privacy when use of data is combined with clear benefits. In combination, that suggests a deal to be cut: Using health data can help people understand the effects of the workplace on wellness. 

“Even in terms of health data in work spaces, if they are used in an aggregated way to improve the workspace, for some people it’s worth combining personal health data with environmental data,” Mora says. 

Mazzarello adds: “Now perhaps the company can make some interventions to improve overall health. It might be invasive, but you might get some benefits back.”

In the bigger picture, the researchers suggest, taking a more flexible, user-driven approach to understanding what people think about data privacy can help inform better data policy. Cities — the core focus of the Senseable City Lab — often face such scenarios. City governments can collect a lot of aggregate traffic data, for instance, but public input can help determine how anonymized such data should be. Understanding public opinion along with the benefits of data use can produce viable policies for local officials to pursue.

“The bottom line is that if cities disclose what they plan to do with data, and if they involve resident stakeholders to come up with their own ideas about what they could do, that would be beneficial to us,” Duarte says. “And in those scenarios, people’s privacy concerns start to decrease a lot.” 

###

Written by Peter Dizikes, MIT News

 

Incomplete team staffing, burnout, and work intentions among US physicians




JAMA Internal Medicine




About The Study:

In this study, physicians frequently experienced incomplete team staffing. Working with an incompletely staffed team was associated with significantly greater odds of burnout, intent to reduce clinical work hours, and intent to leave one’s current organization (ITL). Given associations between ITL and attrition, these findings emphasize the importance of adequate staffing.


Corresponding Author: To contact the corresponding author, Lisa S. Rotenstein, MD, MBA, MSc, email lisa.rotenstein@ucsf.edu.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamainternmed.2025.1679)

Editor’s Note: Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

#  #  #

Media advisory: This study is being presented at the 2025 Society of General Internal Medicine Annual Meeting. 

 https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/10.1001/jamainternmed.2025.1679?guestAccessKey=cb52c805-a6cd-49e1-8f27-d519d054015e&utm_source=for_the_media&utm_medium=referral&utm_campaign=ftm_links&utm_content=tfl&utm_term=051425