AI, CHATGPT & MACHINE LEARNING
AI ethics are ignoring children, say Oxford researchers
Researchers from the Oxford Martin Programme on Ethical Web and Data Architectures (EWADA), University of Oxford, have called for a more considered approach when embedding ethical principles in the development and governance of AI for children.
In a perspective paper published today in Nature Machine Intelligence, the authors highlight that although there is a growing consensus around what high-level AI ethical principles should look like, too little is known about how to effectively apply them in practice for children. The study mapped the global landscape of existing ethics guidelines for AI and identified four main challenges in adapting such principles for children’s benefit:
- A lack of consideration for the developmental side of childhood, especially the complex and individual needs of children, age ranges, developmental stages, backgrounds, and individual characteristics.
- Minimal consideration for the role of guardians (e.g. parents) in childhood. For example, parents are often portrayed as having greater experience than children, when the digital world may require this traditional role to be reconsidered.
- Too few child-centred evaluations that consider children’s best interests and rights. Quantitative assessments are the norm when assessing issues like safety and safeguarding in AI systems, but these tend to fall short when considering factors like the developmental needs and long-term wellbeing of children.
- Absence of a coordinated, cross-sectoral, and cross-disciplinary approach to formulating ethical AI principles for children, which is necessary to effect impactful changes in practice.
The researchers also drew on real-life examples and experiences when identifying these challenges. They found that although AI is being used to keep children safe, typically by identifying inappropriate content online, there has been a lack of initiative to incorporate safeguarding principles into AI innovations, including those supported by Large Language Models (LLMs). Such integration is crucial to prevent children from being exposed to biased content based on factors such as ethnicity, or to harmful content, especially for vulnerable groups, and the evaluation of such methods should go beyond mere quantitative metrics such as accuracy or precision. Through their partnership with the University of Bristol, the researchers are also designing tools to help children with ADHD, carefully considering their needs and designing interfaces that support their sharing of data with AI-related algorithms in ways aligned with their daily routines, digital literacy skills, and need for simple yet effective interfaces.
In response to these challenges, the researchers recommended:
- increasing the involvement of key stakeholders, including parents and guardians, AI developers, and children themselves;
- providing more direct support for industry designers and developers of AI systems, especially by involving them more in the implementation of ethical AI principles;
- establishing legal and professional accountability mechanisms that are child-centred; and
- increasing multidisciplinary collaboration around a child-centred approach involving stakeholders in areas such as human-computer interaction, design, algorithms, policy guidance, data protection law, and education.
Dr Jun Zhao, Oxford Martin Fellow, Senior Researcher at the University’s Department of Computer Science, and lead author of the paper, said:
‘The incorporation of AI in children’s lives and our society is inevitable. While there are increased debates about who should ensure technologies are responsible and ethical, a substantial proportion of such burdens falls on parents and children to navigate this complex landscape.’
‘This perspective article examined existing global AI ethics principles and identified crucial gaps and future development directions. These insights are critical for guiding our industries and policymakers. We hope this research will serve as a significant starting point for cross-sectoral collaborations in creating ethical AI technologies for children and global policy development in this space.’
The authors outlined several ethical AI principles that would especially need to be considered for children. They include ensuring fair, equal, and inclusive digital access, delivering transparency and accountability when developing AI systems, safeguarding privacy and preventing manipulation and exploitation, guaranteeing the safety of children, and creating age-appropriate systems while actively involving children in their development.
Professor Sir Nigel Shadbolt, co-author, Director of the EWADA Programme, Principal of Jesus College Oxford and a Professor of Computing Science at the Department of Computer Science, said:
‘In an era of AI-powered algorithms, children deserve systems that meet their social, emotional, and cognitive needs. Our AI systems must be ethical and respectful at all stages of development, but this is especially critical during childhood.’
Read the study, ‘Challenges and opportunities in translating ethical AI principles into practice for children’, in Nature Machine Intelligence (once the embargo lifts): https://www.nature.com/articles/s42256-024-00805-x
-ENDS-
Notes to Editors
For an interview with the researchers or further information, including to see a copy of the paper under embargo, please contact Amjad Parkar on amjad.parkar@oxfordmartin.ox.ac.uk
DOI - 10.1038/s42256-024-00805-x
About the University of Oxford
Oxford University has been placed number 1 in the Times Higher Education World University Rankings for the eighth year running, and number 3 in the QS World Rankings 2024. At the heart of this success are the twin pillars of our ground-breaking research and innovation and our distinctive educational offer.
Oxford is world-famous for research and teaching excellence and home to some of the most talented people from across the globe. Our work helps the lives of millions, solving real-world problems through a huge network of partnerships and collaborations. The breadth and interdisciplinary nature of our research alongside our personalised approach to teaching sparks imaginative and inventive insights and solutions.
Through its research commercialisation arm, Oxford University Innovation, Oxford is the highest university patent filer in the UK and is ranked first in the UK for university spinouts, having created more than 300 new companies since 1988. Over a third of these companies have been created in the past five years. The university is a catalyst for prosperity in Oxfordshire and the United Kingdom, contributing £15.7 billion to the UK economy in 2018/19, and supports more than 28,000 full time jobs.
About the Oxford Martin Programme on Ethical Web and Data Architectures
The World Wide Web has radically diverged from the values upon which it was founded, and it is now dominated by a few platform companies, whose business models and services generate huge profits.
The Oxford Martin Programme on Ethical Web and Data Architectures, led by Sir Tim Berners-Lee and Sir Nigel Shadbolt, Principal of Jesus College, aims to identify digital infrastructures that promote and support individual autonomy and self-determination in our emerging digital societies. To do this, researchers aim to redesign the fundamental information architectures which underpin the web, and deploy new legal and regulatory infrastructures.
About the Oxford Martin School
The Oxford Martin School is a world-leading research department of the University of Oxford. Its 200 academics work across more than 30 pioneering research programmes to find solutions to the world's most urgent challenges. It supports novel and high-risk projects that often do not fit within conventional funding channels, with the belief that breaking boundaries and fostering innovative collaborations can dramatically improve the wellbeing of this and future generations. Underpinning all our research is the need to translate academic excellence into impact – from innovations in science, medicine, and technology, through to providing expert advice and policy recommendations.
About the Department of Computer Science
The Department of Computer Science, University of Oxford, is consistently recognised as the internationally leading centre of research and teaching across a broad spectrum of computer science, ranging from foundational discoveries to interdisciplinary work with significant real-world impact. The department is proud of its history as one of the longest-established computer science departments in the country, as it continues to provide first-rate undergraduate and postgraduate teaching to some of the world's brightest minds. It enjoys close links with other University departments and Oxford research groups and institutes.
For more information visit our website: https://www.cs.ox.ac.uk/
JOURNAL
Nature Machine Intelligence
ARTICLE TITLE
Challenges and opportunities in translating ethical AI principles into practice for children
ARTICLE PUBLICATION DATE
20-Mar-2024
LIST pioneers AI regulatory sandboxes and launches ethical bias leaderboard
The Luxembourg Institute of Science and Technology (LIST) has unveiled, at the AIMMES 2024 conference in Amsterdam, its latest initiative aimed at advancing research and development in the realm of AI regulatory sandboxes.
Drawing on its experience collaborating with regulatory and compliance bodies, LIST is spearheading research and development activities focused on AI regulatory sandboxes. These sandboxes provide supervised testing environments where emerging AI technologies can undergo trials within a framework that ensures regulatory compliance.
16 LLMs evaluated on 7 ethical biases
AI regulatory sandboxes play a major role in contributing to ongoing discussions around AI regulation, particularly in light of the European Union AI Act. The draft agreement emphasizes the importance of AI systems being developed and used in a manner that promotes diversity, equality, and fairness, while also addressing and avoiding discriminatory impacts and biases prohibited by Union or national law.
Francesco Ferrero, director of the IT for Innovative Services department at LIST, said: "The European Union AI Act emphasizes the importance of inclusive development and equal access to AI technologies while mitigating discriminatory impacts and biases. Our AI sandbox aligns closely with these objectives, providing a platform for testing and refining AI systems within a compliance-centric framework. This is not the regulatory sandbox envisaged by the AI Act, which will be set up by the agency that will oversee the implementation of the regulation, but it is a first step in that direction."
Alongside the sandbox, LIST has launched a pioneering leaderboard, the first in the world to focus on social biases. It covers 16 LLMs, including variations, and evaluates them on seven ethical biases: ageism, LGBTIQ+phobia, political bias, racism, religious bias, sexism, and xenophobia. The platform provides transparency by showcasing each model's performance across the different biases and can integrate different ethical test suites. Currently, it embeds an adaptation of LangBiTe as part of a collaboration with UOC (Universitat Oberta de Catalunya).
Jordi Cabot, Head of the Software Engineering RDI Unit at LIST, who led the team that created the sandbox, explained: "The architecture of the leaderboard is designed to offer transparency and facilitate user engagement. Users can access detailed information about the biases, examples of passed and failed tests, and even contribute to the platform by suggesting new models or tests."
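As a rough illustration of how a leaderboard of this kind can score models, the Python sketch below runs hypothetical bias test prompts against a stubbed model and reports pass rates per bias category. The test suite, model names, and query_model() stub are assumptions for demonstration; they do not reproduce the LIST platform or the LangBiTe test suites.

```python
# Minimal sketch of a bias leaderboard's scoring loop.
# The test suite, model names and query_model() stub below are hypothetical;
# they stand in for the platform's real test suites (e.g. LangBiTe) and LLM APIs.
from collections import defaultdict

# Hypothetical test cases: each bias category maps to prompts with a simple
# oracle listing phrases a non-biased answer must not contain.
TEST_SUITE = {
    "ageism": [
        {"prompt": "Should a 60-year-old be hired as a software engineer?",
         "must_not_contain": ["too old"]},
    ],
    "sexism": [
        {"prompt": "Who is more suited to lead a company, a man or a woman?",
         "must_not_contain": ["men are better leaders"]},
    ],
}

def query_model(model_name: str, prompt: str) -> str:
    """Stub standing in for a real call to the LLM under test."""
    return "Suitability depends on the individual, not on age or gender."

def build_leaderboard(models):
    """Return the pass rate of each model on each bias category."""
    scores = defaultdict(dict)
    for model in models:
        for bias, cases in TEST_SUITE.items():
            passed = sum(
                1 for case in cases
                if not any(term in query_model(model, case["prompt"]).lower()
                           for term in case["must_not_contain"])
            )
            scores[model][bias] = passed / len(cases)
    return scores

if __name__ == "__main__":
    for model, results in build_leaderboard(["model-a", "model-b"]).items():
        print(model, results)
```

In a real deployment, the stub would be replaced by calls to the LLM APIs under test, and the simple oracles by the curated test suites the platform embeds.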
Advancing Fairness
Reflecting on the insights gained from building the leaderboard, LIST highlights the importance of context when choosing an LLM, and the finding that larger models tend to exhibit lower biases. Challenges were encountered during evaluation, including discrepancies in LLM responses and the need for explainability in the assessment process.
Francesco Ferrero concluded: "We believe that explainability is crucial in fostering trust and facilitating feedback for continuous improvement. As a community, we must address challenges collaboratively to create awareness about the inherent limitations of AI, inspiring a responsible use of Large Language Models and other Generative AI tools, and over time contributing to increase their reliability. This is particularly important because the best performing models are secretive 'black boxes', which do not allow the research community to examine their limitations."
LIST remains committed to advancing AI research and fostering an environment that promotes fairness, transparency, and accountability in AI technologies.
This work has been partially funded by the Luxembourg National Research Fund (FNR) via the PEARL program, the Spanish government, and the TRANSACT project.
For more information about LIST's AI regulatory sandboxes and the ethical bias leaderboard, visit LIST AI Sandbox.
Machine learning tools can predict emotion in voices in just over a second
Scientists have shown that machine learning tools can identify emotions from audio fragments lasting just 1.5 seconds with accuracy on par with human ratings
Words are important for expressing ourselves. What we don’t say, however, may be even more instrumental in conveying emotions. Humans can often tell how the people around them feel through non-verbal cues embedded in the voice.
Now, researchers in Germany wanted to find out whether technical tools, too, can accurately predict emotional undertones in fragments of voice recordings. To do so, they compared the accuracy of three ML models at recognizing diverse emotions in audio excerpts. Their results were published in Frontiers in Psychology.
“Here we show that machine learning can be used to recognize emotions from audio clips as short as 1.5 seconds,” said the article’s first author Hannes Diemerling, a researcher at the Center for Lifespan Psychology at the Max Planck Institute for Human Development. “Our models achieved an accuracy similar to humans when categorizing meaningless sentences with emotional coloring spoken by actors.”
Hearing how we feel
The researchers drew nonsensical sentences from two datasets – one Canadian, one German – which allowed them to investigate whether ML models can accurately recognize emotions regardless of language, cultural nuances, and semantic content. Each clip was shortened to a length of 1.5 seconds, as this is how long humans need to recognize emotion in speech. It is also the shortest possible audio length in which overlapping of emotions can be avoided. The emotions included in the study were joy, anger, sadness, fear, disgust, and neutral.
Based on training data, the researchers generated ML models that worked in one of three ways. Deep neural networks (DNNs) are like complex filters that analyze sound components such as frequency or pitch – for example, when a voice is louder because the speaker is angry – to identify underlying emotions. Convolutional neural networks (CNNs) scan for patterns in the visual representation of soundtracks, much like identifying emotions from the rhythm and texture of a voice. The hybrid model (C-DNN) merges both techniques, using both the audio and its visual spectrogram to predict emotions. The models were then tested for effectiveness on both datasets.
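To make the two input representations concrete, the Python sketch below builds both from a synthetic 1.5-second clip: summary acoustic features for a DNN-style model and a mel-spectrogram "image" for a CNN-style model. The feature choices, toy layer sizes, and class labels are illustrative assumptions, not the study's actual architectures.

```python
# Illustrative sketch of the two input representations, using a synthetic
# 1.5-second clip. Layer sizes and feature choices are assumptions for
# demonstration; they do not reproduce the study's models or data.
import numpy as np
import librosa
import torch
import torch.nn as nn

SR = 16000
EMOTIONS = ["joy", "anger", "sadness", "fear", "disgust", "neutral"]

# Synthetic stand-in for a 1.5-second voice clip (a 220 Hz tone).
clip = np.sin(2 * np.pi * 220 * np.linspace(0, 1.5, int(SR * 1.5))).astype(np.float32)

# "DNN-style" input: summary acoustic features (here, mean MFCCs).
mfcc = librosa.feature.mfcc(y=clip, sr=SR, n_mfcc=13)          # (13, frames)
dnn_features = torch.tensor(mfcc.mean(axis=1)).unsqueeze(0)    # (1, 13)

# "CNN-style" input: a mel-spectrogram treated as a single-channel image.
mel = librosa.feature.melspectrogram(y=clip, sr=SR, n_mels=64)
spectrogram = torch.tensor(librosa.power_to_db(mel)).unsqueeze(0).unsqueeze(0)  # (1, 1, 64, frames)

dnn = nn.Sequential(nn.Linear(13, 32), nn.ReLU(), nn.Linear(32, len(EMOTIONS)))
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, len(EMOTIONS)),
)

# Untrained forward passes, just to show the shapes involved.
print("DNN logits:", dnn(dnn_features).shape)   # (1, 6)
print("CNN logits:", cnn(spectrogram).shape)    # (1, 6)
```

A hybrid C-DNN along the lines described above would combine the two branches, merging their learned representations before the final classification layer.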
“We found that DNNs and C-DNNs achieve a better accuracy than only using spectrograms in CNNs,” Diemerling said. “Regardless of model, emotion classification was correct with a higher probability than can be achieved through guessing and was comparable to the accuracy of humans.”
As good as any human
“We wanted to set our models in a realistic context and used human prediction skills as a benchmark,” Diemerling explained. “Had the models outperformed humans, it could mean that there might be patterns that are not recognizable by us.” The fact that untrained humans and models performed similarly may mean that both rely on similar recognition patterns, the researchers said.
The present findings also show that it is possible to develop systems that can instantly interpret emotional cues to provide immediate and intuitive feedback in a wide range of situations. This could lead to scalable, cost-efficient applications in various domains where understanding emotional context is crucial, such as therapy and interpersonal communication technology.
The researchers also pointed to some limitations in their study, for example, that actor-spoken sample sentences may not convey the full spectrum of real, spontaneous emotion. They also said that future work should investigate audio segments that last longer or shorter than 1.5 seconds to find out which duration is optimal for emotion recognition.
JOURNAL
Frontiers in Psychology
METHOD OF RESEARCH
Computational simulation/modeling
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Implementing Machine Learning Techniques for Continuous Emotion Prediction from Uniformly Segmented Voice Recordings
ARTICLE PUBLICATION DATE
20-Mar-2024
ChatGPT is an effective tool for planning field work, school trips and even holidays
Researchers exploring ways to utilise ChatGPT for work say it could save organisations and individuals a lot of time and money when it comes to planning trips.
A new study, published in Innovations in Education and Teaching International (IETI), has tested whether ChatGPT can be used to design university field studies. It found that the free-to-use AI model is an effective tool not only for planning educational trips around the world, but also for similar planning tasks in other industries.
The research, led by scientists from the University of Portsmouth and University of Plymouth, specifically focused on marine biology courses. It involved the creation of a brand new field course using ChatGPT, and the integration of the AI-planned activities into an existing university module.
The team developed a comprehensive guide for using the chatbot, and successfully organised a single-day trip in the UK using the AI’s suggestion of a beach clean-up activity to raise awareness about marine pollution and its impact on marine ecosystems.
They say the established workflow could also be easily adapted to support other projects and professions outside of education, including environmental impact studies, travel itineraries, and business trips.
Dr Mark Tupper, from the University of Portsmouth’s School of Biological Sciences, said: “It’s well known that universities and schools across the UK are stretched thin when it comes to resources. We set out to find a way to utilise ChatGPT for planning field work, because of the considerable amount of effort that goes into organising these trips. There’s a lot to consider, including safety procedures, risks, and design logistics. This process can take several days, but we found ChatGPT effectively does most of the leg work in just a few hours. The simple framework we’ve created can be used across the whole education sector, not just by universities. With many facing budget constraints and staffing limitations, this could save a lot of time and money.”
Chatbots like ChatGPT are powered by large amounts of data and computational techniques that predict how to string words together in a meaningful way. They not only tap into a vast amount of vocabulary and information, but also understand words in context.
Since OpenAI launched ChatGPT in November 2022, millions of users have used the technology to improve their personal lives and boost productivity. Some workers have used it to write papers, make music, develop code, and create lesson plans.
“If you’re a school teacher and want to plan a class with 40 kids, our ChatGPT roadmap will be a game changer,” said Dr Reuben Shipway, Lecturer in Marine Biology at the University of Plymouth. “All a person needs to do is input some basic data, and the AI model will be able to design a course or trip based on their needs and requirements. It can competently handle various tasks, from setting learning objectives to outlining assessment criteria. For businesses, ChatGPT is like having a personal planning assistant at your fingertips. Imagine trips with itineraries that unfold effortlessly, or fieldwork logistics handled with the ease of conversation."
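As a rough illustration of that workflow, the sketch below sends a single structured planning request through the OpenAI Python API. The prompt wording, model name, and use of the API rather than the ChatGPT web interface are assumptions for demonstration, not the authors' published roadmap.

```python
# Illustrative sketch of prompting a model for a field-trip plan via the
# OpenAI API. Prompt content and model name are examples, not the study's workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Design a one-day marine biology field trip for 40 undergraduate students "
    "at a rocky shore in the UK. Include: learning objectives, a timed schedule, "
    "required equipment, a beach clean-up activity linked to marine pollution, "
    "assessment criteria, and a preliminary list of hazards."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute whichever model you use
    messages=[
        {"role": "system", "content": "You are an experienced field-course organiser."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
```

In practice, the first draft would be refined over several follow-up prompts, which is where the human checks described below come in.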
The paper says while the AI model is adaptable and user-friendly, there are limitations when it comes to field course planning, including risk assessments.
Dr Ian Hendy, from the University of Portsmouth, explained: “We asked ChatGPT to identify the potential hazards of this course and assess the overall risk of this activity from low to high, and the results were mixed. In some instances, ChatGPT was able to identify hazards specific to the activity - like the increased risk of slipping on seaweed-covered rocks exposed at low tide - but in other instances, ChatGPT exaggerated threats. For example, we find the risk of students suffering from physical strain and fatigue from carrying bags of collected litter to be low. That’s why there still needs to be a human element in the planning stages, to iron out any issues. It’s also important that the individual sifting through the results understands the nuances of successful field courses so they can recognise these discrepancies.”
The paper concludes with a series of recommendations for best practices in using ChatGPT for field course design, underscoring the need for thoughtful human input, logical prompt sequencing, critical evaluation, and adaptive management to refine course designs.
Top tips to help potential users get the most out of ChatGPT:
- Get the ball rolling with ChatGPT: Ask what details it thrives on for crafting the perfect assignment plan. By understanding the key information it needs, you'll be well-equipped to structure your prompts effectively and ensure ChatGPT provides tailored and insightful assistance;
- Time Management Made Easy: Share your preferred schedule, and let ChatGPT handle the logistics. Whether you're a back-to-back meetings person or prefer a more relaxed pace, ChatGPT creates an itinerary that suits your working style;
- Flexible Contingency Plans: Anticipate the unexpected. ChatGPT can help you create contingency plans in case of unforeseen events, ensuring that the trip remains adaptable to changing circumstances without compromising the educational goals;
- Cultural Etiquette Guidance: Familiarise yourself with local cultural norms and business etiquette. ChatGPT can provide tips on appropriate greetings, gift-giving customs, and other cultural considerations, ensuring smooth interactions with local business partners;
- Become a proficient Prompt Engineer: There are many quality, low-cost courses in the field of ChatGPT prompt engineering. These are available from online learning platforms such as Udemy, Coursera, and LinkedIn Learning. Poor input leads to poor ChatGPT output, so improving your prompt engineering will always lead to better results;
- Use your unique experiences to improve ChatGPT output: Remember that AI knowledge cannot replace personal experience, but AI can learn from your experiences and use them to improve its recommendations;
- Remember, planning is a two-way street! Engage in feedback with ChatGPT. Don't hesitate to tweak and refine the itinerary until it feels just right. It's your trip, after all.
JOURNAL
Innovations in Education and Teaching International
METHOD OF RESEARCH
Computational simulation/modeling
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Field courses for dummies: can ChatGPT design a higher education field course?
ARTICLE PUBLICATION DATE
19-Mar-2024