University of Kansas study explores the transformation of educational system with the advent of artificial intelligence
Research suggests educational systems must undergo transformation to fully leverage the benefits of artificial intelligence tools
Cactus Communications
The advent of artificial intelligence (AI) presents new and exciting opportunities for improving the quality of education. While many ways of integrating AI into schooling have been explored, few of them consider changing traditional school operations and educational practices.
In a recent article published on 23 July 2024 in the ECNU Review of Education, Professor Yong Zhao from the University of Kansas explored more radical changes that could be applied to traditional schooling to fully utilize the potential of AI technology for the benefit of students. “Numerous publications have appeared, all trying to suggest, recommend, and predict the future of AI uses in education. However, most of the discussions, regardless of their scholarly quality, are primarily focused on using AI in the traditional arrangement of schools,” explains Professor Zhao. “The assumption is that everything the traditional school has operated with shall remain the same. AI tools, according to most of the advice, are to be incorporated into teaching by teachers just like previous technologies,” he notes.
By taking a broader view of the changes that could be applied to the current schooling system, the article not only suggests ways of better utilizing the potential of AI technology but also examines how AI can be leveraged to support personalized learning. The article notes that although personalized learning has clearly demonstrated benefits, it has not been widely implemented in schools in its true sense. AI tools present a unique opportunity to implement personalized learning customized to individual students’ needs, harnessing their unique talents and potential.
Traditional schooling systems aim to produce members of the workforce. However, AI has disrupted the job market, eliminating traditional career roles and creating new ones. In the article, Professor Zhao noted that focusing on and building each child’s innate talents and unique strengths is essential for them to succeed in any career of their choice. Since any potential talent, when sufficiently honed, is valuable in the age of AI, he argued that educational systems must focus on students’ strengths rather than their weaknesses. Furthermore, the article suggested that traditional curricula might need to change to make way for personalized education. Students could use AI and other resources to follow their interests and passions. This might also eliminate the need for age-based classes and promote learning with tools, resources, experts, and peers who share the same interests, rather than grouping students by age.
Besides personalized education, AI can also be effective in facilitating project-based learning. The article noted that AI tools can help schools instill skills such as problem solving and independent thinking in students, developing their critical thinking and analytical abilities. Integrating AI into the education system would also transform the role of teachers. They would become coaches and mentors who work with students to help them identify their strengths and potential and guide them to become the best versions of themselves. They would also need to stay up to date with AI tools and help students use AI as a learning partner.
Traditional educational systems resist change, but with the advent of AI, there are now several more incentives for changing how schools operate. In the article, Professor Zhao examines whether and how educational systems can change in the age of AI. By considering broad-based changes to the schooling system, he suggests that the true potential of AI in learning can be unlocked. “AI is no doubt a powerful technology, but it is easy to underestimate its power. Uses in the traditional classroom to assist students and teachers in learning and teaching help, but they also minimize the transformative power of AI,” Professor Zhao observes. “Schools could be transformed with the advancement of technology, especially generative AI. The changes should start with student-driven personalized learning and problem-oriented pedagogy,” he concludes.
***
Reference
Title of original paper: Artificial Intelligence and Education: End the Grammar of Schooling
Journal: ECNU Review of Education
DOI: https://doi.org/10.1177/20965311241265124
Method of Research
Literature review
Subject of Research
Not applicable
AI poses no existential threat to humanity – new study finds
Large language models like ChatGPT cannot learn independently or acquire new skills, meaning they pose no existential threat to humanity.
ChatGPT and other large language models (LLMs) cannot learn independently or acquire new skills, meaning they pose no existential threat to humanity, according to new research from the University of Bath and the Technical University of Darmstadt in Germany.
The study, published today as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) – the premier international conference in natural language processing – reveals that LLMs have a superficial ability to follow instructions and excel at language proficiency; however, they have no capacity to master new skills without explicit instruction. This means they remain inherently controllable, predictable, and safe.
The research team concluded that LLMs – which are being trained on ever larger datasets – can continue to be deployed without safety concerns, though the technology can still be misused.
With growth, these models are likely to generate more sophisticated language and become better at following explicit and detailed prompts, but they are highly unlikely to gain complex reasoning skills.
“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus,” said Dr Harish Tayyar Madabushi, computer scientist at the University of Bath and co-author of the new study on the ‘emergent abilities’ of LLMs.
The collaborative research team, led by Professor Iryna Gurevych at the Technical University of Darmstadt in Germany, ran experiments to test the ability of LLMs to complete tasks that models have never come across before – the so-called emergent abilities.
As an illustration, LLMs can answer questions about social situations without ever having been explicitly trained or programmed to do so. While previous research suggested this was a product of models ‘knowing’ about social situations, the researchers showed that it was in fact the result of models using a well-known ability of LLMs to complete tasks based on a few examples presented to them, known as ‘in-context learning’ (ICL).
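To make the ICL idea concrete, here is a minimal sketch (not from the paper) of how a few-shot prompt is typically assembled: worked input–label examples are placed in the prompt, and the model is asked to continue the pattern for a new input, rather than acquiring a new skill through training. The task, function name, and examples below are purely illustrative assumptions.

```python
# Schematic few-shot (in-context learning) prompt construction.
# The model sees solved examples in its input and continues the pattern;
# no weights are updated and no new skill is learned.

def build_icl_prompt(examples, query):
    """Assemble a few-shot prompt from (input, label) example pairs."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    # The final block leaves the label blank for the model to fill in.
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_icl_prompt(examples, "A warm, funny, beautifully acted film.")
print(prompt)
```

This pattern-completion framing is what the researchers argue underlies apparent “emergent” abilities, and it also matches Dr Tayyar Madabushi’s practical advice later in the article: spell out the task and provide examples where possible.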
Through thousands of experiments, the team demonstrated that a combination of LLMs’ ability to follow instructions (ICL), memory, and linguistic proficiency can account for both the capabilities and the limitations these models exhibit.
Dr Tayyar Madabushi said: “The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities including reasoning and planning.
“This has triggered a lot of discussion – for instance, at the AI Safety Summit last year at Bletchley Park, for which we were asked for comment – but our study shows that the fear that a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid.
“Concerns over the existential threat posed by LLMs are not restricted to non-experts and have been expressed by some of the top AI researchers across the world."
However, Dr Tayyar Madabushi maintains this fear is unfounded as the researchers' tests clearly demonstrated the absence of emergent complex reasoning abilities in LLMs.
“While it's important to address the existing potential for the misuse of AI, such as the creation of fake news and the heightened risk of fraud, it would be premature to enact regulations based on perceived existential threats,” he said.
“Importantly, what this means for end users is that relying on LLMs to interpret and perform complex tasks which require complex reasoning without explicit instruction is likely to be a mistake. Instead, users are likely to benefit from explicitly specifying what they require models to do and providing examples where possible for all but the simplest of tasks.”
Professor Gurevych added: "… our results do not mean that AI is not a threat at all. Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news."
ENDS.
Video explainer: Dr Tayyar Madabushi describes his findings: https://tinyurl.com/vvhx38kp
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
Are Emergent Abilities in Large Language Models just In-Context Learning?
Article Publication Date
12-Aug-2024