Using ChatGPT to support Chinese and English writing for students with dyslexia: Opportunities, challenges, and insights
Study proposes a personalized learning and AI partnership to give students greater control over their own learning in the age of AI
ECNU Review of Education
Students with dyslexia experience persistent challenges in writing, from limited vocabulary to difficulty organizing ideas. These hurdles are compounded in Hong Kong’s large‑classroom settings, where providing individualized writing support is often impractical. Although AI‑driven tools such as ChatGPT have begun to reshape language learning, few have been developed specifically for learners with dyslexia or rigorously tested for their impact on both motivation and writing performance.
A research team comprising Fung K. Y., Fung K. C., Lee L. H., Lui R. T. L., Qu H., Song S., and Sin K. F. sought to address this gap. They designed CHATTING, a ChatGPT‑assisted writing system equipped with inclusive features such as adjustable speech rate, speech‑to‑text capabilities, and multi‑language support in Traditional Chinese, Cantonese, and English. The system was intended to be both accessible and adaptable, allowing students to interact with AI in ways that fit their learning needs.
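The paper does not publish CHATTING’s implementation. As a rough illustration only, the core of such a ChatGPT‑assisted helper, with the language options described above, might be wired up as in the sketch below; the function name, model choice, and prompts are all assumptions, and the speech‑rate and speech‑to‑text features would sit in front of this text interface.

```python
# Hypothetical sketch of a CHATTING-style writing helper (not the authors' code).
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Language options mirroring the article: Traditional Chinese, Cantonese, English.
SYSTEM_PROMPTS = {
    "zh-Hant": "You are a patient writing coach. Reply in Traditional Chinese, "
               "in short sentences with simple vocabulary, for a student with dyslexia.",
    "yue":     "You are a patient writing coach. Reply in Cantonese, "
               "in short sentences with simple vocabulary, for a student with dyslexia.",
    "en":      "You are a patient writing coach. Reply in English, "
               "in short sentences with simple vocabulary, for a student with dyslexia.",
}

def writing_hint(question: str, language: str = "en") -> str:
    """Return a brief, accessible writing hint in the student's chosen language."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; the study used ChatGPT
        messages=[
            {"role": "system", "content": SYSTEM_PROMPTS[language]},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Example: a student brainstorming before drafting.
print(writing_hint("How can I start an essay about my favourite festival?"))
```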
The study aimed to measure CHATTING’s influence on learning engagement—including behavioral, emotional, cognitive, and intrinsic motivation—and to assess its effects on Chinese and English writing quality. It also explored how students formulated questions for the AI, the types of plagiarism that emerged, and the language barriers encountered during use.
The researchers recruited 101 secondary students, including learners both with and without dyslexia. Participants were randomly assigned to an experimental group, which used CHATTING, or a control group, which received traditional writing instruction. The study spanned four days: two days of pre‑intervention writing tests followed by two days of post‑intervention writing tests, during which the experimental group wrote with CHATTING’s support.
During the study, students completed writing tasks in both Chinese and English. These were scored on content, language, organization, and other features. Engagement and motivation were measured using questionnaires based on Self‑Determination Theory. Additional feedback on usability and functionality was gathered through surveys, while open‑ended interviews captured students’ personal experiences. Plagiarism detection software (Copyleaks) identified copied material, and statistical analysis using analysis of covariance (ANCOVA) assessed changes in engagement.
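The release does not include the analysis code. For readers unfamiliar with ANCOVA, a minimal sketch of the comparison it implies, post‑test engagement by group with pre‑test scores as the covariate, might look like the following; the data here are synthetic placeholders, not the study’s dataset.

```python
# Illustrative ANCOVA sketch on synthetic data (not the study's data or code).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 101  # matches the reported sample size; all scores below are synthetic

df = pd.DataFrame({
    "group": rng.choice(["CHATTING", "control"], size=n),  # random assignment
    "pre_engagement": rng.normal(3.5, 0.6, size=n),        # pre-test score
})
# Post-test scores with an arbitrary, made-up group effect, purely for illustration.
df["post_engagement"] = (
    df["pre_engagement"]
    + np.where(df["group"] == "CHATTING", 0.4, 0.1)
    + rng.normal(0.0, 0.3, size=n)
)

# ANCOVA: post-test engagement by group, controlling for the pre-test covariate.
model = smf.ols("post_engagement ~ pre_engagement + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```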
The data revealed striking differences between students with and without dyslexia. Learners with dyslexia saw significant gains in emotional engagement, which rose by 16.57%, and in intrinsic motivation, which increased by 8.71%. Their peers without dyslexia experienced moderate growth in emotional engagement of 8.65% and in cognitive engagement of 10.95%.
Feedback indicated that students with dyslexia valued CHATTING more highly across multiple dimensions, including helpfulness, relevance, readability, consistency, and conciseness. Many reported that the tool helped them generate ideas quickly, boosted their confidence, and provided a more interactive learning experience through question‑and‑answer exchanges.
Despite these positives, writing performance declined in both groups. Although word counts increased, overall scores in Chinese and English writing dropped after using CHATTING. Plagiarism was a notable concern: it was most severe in English for students with dyslexia and in Chinese for those without dyslexia.
The study also highlighted the importance of question‑asking skills. Students who could formulate specific, open‑ended questions tended to receive more relevant and useful AI responses. Others, struggling to understand AI‑generated content, resorted to copying text directly, raising both comprehension and integrity issues.
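As a hypothetical illustration of that skill (these prompts are invented, not drawn from the study), compare the kind of question that invites copy‑paste with one that invites usable scaffolding:

```python
# Invented examples contrasting a vague question with a specific, open-ended one.
vague_prompt = "Write about festivals."  # tends to return a finished essay to copy

specific_prompt = (
    "What are three sensory details I could use to describe the "
    "Mid-Autumn Festival in my opening paragraph?"
)  # returns ideas the student must still develop in their own words
```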
Fung et al. highlight, “The results present a nuanced picture of AI in education. On the one hand, tools like CHATTING can significantly enhance engagement and motivation, particularly for students who face traditional learning barriers. On the other hand, without guided instruction and clear expectations, such tools may unintentionally undermine writing quality, encourage plagiarism, or limit the development of independent writing skills.”
The authors argue that the key lies in teacher‑guided integration of AI into existing curricula. Educators can help students use AI as a scaffold—offering support without replacing critical processes like idea development, drafting, and revision. This approach can ensure that AI complements rather than compromises learning.
The research team acknowledges several limitations. The sample size was relatively small and lacked diversity, making it difficult to generalize findings to other contexts. The intervention was brief—just two days of AI‑assisted writing—so long‑term effects remain unknown. Additionally, some AI outputs were overly long, off‑topic, or culturally mismatched, posing challenges for second‑language learners.
The study suggests that educators should integrate AI writing platforms as supplementary tools, pairing them with explicit training in question‑asking and critical evaluation of AI output. Developers could improve such systems by incorporating plagiarism‑prevention features, adaptive difficulty settings, and clearer explanations tailored to learners’ needs. Policymakers might consider establishing ethical guidelines for AI use in education, addressing issues such as plagiarism, over‑reliance, and inclusivity.
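As a toy example of what a plagiarism‑prevention feature could mean in practice (the method, texts, and threshold below are illustrative assumptions, not anything the authors built):

```python
# Toy sketch: flag student drafts that overlap heavily with the AI's own reply.
from difflib import SequenceMatcher

def overlap_ratio(ai_reply: str, student_draft: str) -> float:
    """Rough character-level similarity between the AI's reply and the draft."""
    return SequenceMatcher(None, ai_reply.lower(), student_draft.lower()).ratio()

ai_reply = "The Mid-Autumn Festival is a time for family reunions and mooncakes."
draft = "The Mid-Autumn Festival is a time for family reunions and mooncakes."

if overlap_ratio(ai_reply, draft) > 0.8:  # arbitrary threshold
    print("This draft closely mirrors the AI's reply. Try rephrasing it in your own words.")
```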
***
Reference
DOI: https://doi.org/10.1177/20965311251358269
Journal
ECNU Review of Education
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
A study on using ChatGPT to help students with dyslexia learn Chinese and English writing
Interactive apps, AI chatbots promote playfulness, reduce privacy concerns
Penn State
UNIVERSITY PARK, Pa. — The more interactive a mobile app or artificial intelligence (AI) chatbot is, the more playful it is perceived to be, leading users to let their guard down and risk their privacy, according to a team led by researchers at Penn State.
The researchers studied the effect of mobile app interactivity on users’ vigilance toward privacy risks during the sign-up process, and how this shapes their attitudes toward the app and their willingness to keep using it. The team found that interactivity motivates users to engage with the app by fostering a heightened sense of playfulness and lowering their privacy concerns. The findings, published in the journal Behaviour & Information Technology, have implications for user privacy in an era increasingly dominated by mobile apps and AI chatbots that are designed to be fun and engaging, according to senior author S. Shyam Sundar, Evan Pugh University Professor and the James P. Jimirro Professor of Media Effects at Penn State.
“I think, in general, there’s been an increase in the extent to which apps and AI tools pry into user data — ostensibly to better serve users and to personalize information for them,” Sundar said. “In this study, we found that interactivity does not make users pause and think, as we would expect, but rather makes them feel more immersed in the playful aspect of the app and be less concerned about privacy. Companies could exploit this vulnerability to extract private information without users being totally aware of it.”
In an online experiment, the researchers asked 216 participants to go through the sign-up process for a simulated fitness app. Participants were randomly assigned to different versions of the app with varying levels of two types of interactivity: “message interactivity,” ranging from simple questions and answers to highly interconnected chats in which the app’s messaging builds on the user’s previous responses; and “modality interactivity,” referring to options such as clicking and zooming in on images.
Then the participants answered questions about their experience with the app’s sign-up process by rating perceived playfulness and privacy concerns on seven-point scales to indicate how strongly they agree or disagree with specific statements, such as “I felt using the app is fun” and “I would be concerned that the information I submitted to the app could be misused.” The researchers examined the responses to identify the effect of both type and extent of interactivity on user perceptions of the app.
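The release does not name the statistical test used. A conventional analysis for a two-factor between-subjects design like this would be a two-way ANOVA with an interaction term; the sketch below runs one on synthetic placeholder ratings, not the study’s data.

```python
# Assumed analysis of the 2x2 interactivity design, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 216  # matches the reported sample size; ratings below are synthetic

df = pd.DataFrame({
    "message": rng.choice(["low", "high"], size=n),   # message-interactivity condition
    "modality": rng.choice(["low", "high"], size=n),  # modality-interactivity condition
    "playfulness": rng.normal(4.5, 1.0, size=n),      # 7-point scale rating (made up)
})

# Two-way ANOVA with interaction: does the effect of message interactivity
# on perceived playfulness depend on the level of modality interactivity?
model = smf.ols("playfulness ~ C(message) * C(modality)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```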
They found that interactivity enhanced perceived playfulness and users’ intention to engage with an app, which was accompanied by a decrease in privacy concerns. Surprisingly, Sundar said, message interactivity, which the researchers thought would increase user vigilance, instead distracted users from thinking about the personal information they may be sharing with the system. That is, the way AI chatbots operate today — building responses based on a user’s prior inputs — makes individuals less likely to think about the sensitive information they may be sharing, according to the researchers.
“Nowadays, when users engage with AI agents, there’s a lot of back-and-forth conversation, and because the experience is so engaging, they forget that they need to be vigilant about the information they share with these systems,” said lead author Jiaqi Agnes Bao, assistant professor of strategic communication at the University of South Dakota who completed the research during her doctoral work at Penn State. “We wanted to understand how to better design an interface to make sure users are aware of their information disclosure.”
While user vigilance plays a large part in preventing the unintended disclosure of personal information, app and AI developers can balance playfulness and privacy concerns through design choices that result in win-win situations for individuals and companies alike, Bao said.
“We found that if both message interactivity and modality interactivity are designed to operate in tandem, it could cause users to pause and reflect,” she said. “So, when a user converses with an AI chatbot, a pop-up button asking the user to rate their experience or leave comments on how to improve their tailored responses can give users a pause to think about the kind of information they share with the system and help the company provide a better customized experience.”
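A toy sketch of that idea (entirely hypothetical, not the study’s interface): a text chat loop that interrupts the conversation every few turns with a checkpoint listing what the user has disclosed so far.

```python
# Hypothetical "pause and reflect" chat loop; the interval and wording are arbitrary.
REFLECT_EVERY = 3  # turns between reflection checkpoints

def chat_loop(respond):
    """respond: any callable mapping a user message to a chatbot reply."""
    disclosed = []  # running log of everything the user has typed
    turn = 0
    while True:
        message = input("You: ")
        if message.lower() in {"quit", "exit"}:
            break
        disclosed.append(message)
        print("Bot:", respond(message))
        turn += 1
        if turn % REFLECT_EVERY == 0:
            # The interruption is the point: it breaks the playful flow and
            # nudges the user to notice their own disclosures.
            print("\n[Pause] So far you have shared:")
            for item in disclosed:
                print("  -", item)
            if input("Continue chatting? (y/n): ").lower() != "y":
                break

if __name__ == "__main__":
    chat_loop(lambda message: "Tell me more about that!")  # placeholder bot
```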
AI platforms’ responsibility goes beyond simply giving users the option to share or not share personal information via conversation, said study co-author Yongnam Jung, a doctoral candidate at Penn State.
“It’s not just about notifying users, but about helping them make informed choices, which is the responsible way for building trust between platforms and users,” she added.
The study builds on the team's earlier research, which revealed similar patterns. Together, the researchers said, the two studies underscore a critical trade-off: while interactivity enhances the user experience, it highlights the benefits of the app and draws attention away from potential privacy risks.
Generative AI, for the most part and in most application domains, is based on message interactivity, which is conversational in nature, said Sundar, who is also the director of Penn State’s Center for Socially Responsible Artificial Intelligence (CSRAI). He added that this study’s finding challenges current thinking among designers that, unlike clicking and swiping tools, conversation-based tools make people more cognitively alert to negative aspects, like privacy concerns.
“In reality, conversation-based tools are turning out to be a playful exercise, and we’re seeing this reflected in the larger discourse on generative AI where there are all kinds of stories about people getting so drawn into conversations that they do things that seem illogical,” he said. “They are following the advice of generative AI tools for very high-stakes decision making. In some ways, our study is a cautionary tale for this newer suite of generative AI tools. Perhaps inserting a pop-up or other modality interactivity tools in the middle of a conversation may stem the flow of this mesmerizing, playful interaction and jerk users into awareness now and then.”
Journal
Behaviour & Information Technology
Article Title
Are you fooled by interactivity? The effects of interactivity on privacy disclosure