Are we trusting AI too much? New study demands accountability in Artificial Intelligence
University of Surrey
Are we putting our faith in technology that we don't fully understand? A new study from the University of Surrey comes at a time when AI systems are making decisions impacting our daily lives—from banking and healthcare to crime detection. The study calls for an immediate shift in how AI models are designed and evaluated, emphasising the need for transparency and trustworthiness in these powerful algorithms.
As AI becomes integrated into high-stakes sectors where decisions can have life-altering consequences, the risks associated with 'black box' models are greater than ever. The research calls for AI systems to provide adequate explanations for their decisions, allowing users to trust and understand AI rather than being left confused and vulnerable.
Surrey's researchers detail alarming instances where AI systems have failed to explain their decisions adequately. With cases of misdiagnosis in healthcare and erroneous fraud alerts in banking, the potential for harm, which could be life-threatening, is significant. Fraud datasets are inherently imbalanced: only around 0.01% of transactions are fraudulent, yet the resulting damage runs to billions of dollars. It is reassuring that most transactions are genuine, but the imbalance makes it hard for AI to learn what fraud looks like. Even so, AI algorithms can identify a fraudulent transaction with great precision; what they currently lack is the ability to explain adequately why it is fraudulent.
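As a rough illustration of that imbalance problem (this sketch is not from the Surrey study; the data are synthetic), a classifier trained on a dataset where only a handful of transactions are fraudulent can report near-perfect accuracy while catching almost none of the fraud:

```python
# Illustrative only: why ~0.01% fraud makes headline accuracy misleading.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
n = 100_000
y = np.zeros(n, dtype=int)
y[:10] = 1                                   # 10 fraudulent transactions out of 100,000 (~0.01%)
X = rng.normal(size=(n, 5)) + y[:, None]     # weak synthetic signal on the fraud rows

model = LogisticRegression(max_iter=1000).fit(X, y)
pred = model.predict(X)
print("accuracy:", accuracy_score(y, pred))                      # ~0.9999 even if fraud is missed
print("fraud recall:", recall_score(y, pred, zero_division=0))   # typically far lower
```

This is why fraud systems are judged on how well they recover the rare fraudulent class rather than on overall accuracy.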
Dr Wolfgang Garn, co-author of the study and Senior Lecturer in Analytics at the University of Surrey, said:
"We must not forget that behind every algorithm’s solution, there are real people whose lives are affected by the determined decisions. Our aim is to create AI systems that are not only intelligent but also provide explanations to people - the users of technology - that they can trust and understand."
The study proposes a comprehensive framework known as SAGE (Settings, Audience, Goals, and Ethics) to address these critical issues. SAGE is designed to ensure that AI explanations are not only understandable but also contextually relevant to the end-users. By focusing on the specific needs and backgrounds of the intended audience, the SAGE framework aims to bridge the gap between complex AI decision-making processes and the human operators who depend on them.
In conjunction with this framework, the research uses Scenario-Based Design (SBD) techniques, which delve deep into real-world scenarios to find out what users truly require from AI explanations. This method encourages researchers and developers to step into the shoes of the end-users, ensuring that AI systems are crafted with empathy and understanding at their core.
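As a purely hypothetical sketch (SAGE is a conceptual framework in the paper, not a software interface, and every name below is invented for illustration), the four dimensions can be thought of as a structured context that any generated explanation has to respect:

```python
# Hypothetical illustration only: the SAGE dimensions as a structured context.
from dataclasses import dataclass

@dataclass
class SAGEContext:
    settings: str   # where and under what constraints the decision is made
    audience: str   # who will read the explanation and what they already know
    goals: str      # what the reader needs to be able to do with it
    ethics: str     # fairness, privacy and accountability requirements

fraud_alert_context = SAGEContext(
    settings="real-time card-payment screening at a retail bank",
    audience="call-centre agent with no machine-learning background",
    goals="decide whether to block the card or release the payment",
    ethics="no exposure of other customers' data; the decision must be contestable",
)

def explain(flag_reason: str, ctx: SAGEContext) -> str:
    """Toy placeholder: phrase a model's flag for the intended audience."""
    return (f"For a {ctx.audience} working in {ctx.settings}: the transaction was "
            f"flagged because {flag_reason}. Use this to {ctx.goals}, noting that {ctx.ethics}.")

print(explain("the amount and merchant type differ sharply from this customer's history",
              fraud_alert_context))
```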
Dr Wolfgang Garn continued:
"We also need to highlight the shortcomings of existing AI models, which often lack the contextual awareness necessary to provide meaningful explanations. By identifying and addressing these gaps, our paper advocates for an evolution in AI development that prioritises user-centric design principles. It calls for AI developers to engage with industry specialists and end-users actively, fostering a collaborative environment where insights from various stakeholders can shape the future of AI. The path to a safer and more reliable AI landscape begins with a commitment to understanding the technology we create and the impact it has on our lives. The stakes are too high for us to ignore the call for change."
The research highlights the importance of AI models explaining their outputs as text or graphical representations, catering to users' diverse comprehension needs. This shift aims to ensure that explanations are not only accessible but also actionable, enabling users to make informed decisions based on AI insights.
The study has been published in Applied Artificial Intelligence.
[ENDS]
Journal
Applied Artificial Intelligence
Method of Research
Observational study
Article Title
Real-World Efficacy of Explainable Artificial Intelligence using the SAGE Framework and Scenario-Based Design
New AI framework aims to remove bias in key areas such as health, education, and recruitment
Researchers at the University of Navarra present a new prediction methodology that could help governments and companies eliminate algorithmic discrimination and ensure fairness in critical decision-making
Image caption: from left to right, Alberto García Galindo, Marcos López De Castro and Rubén Armañanzas Arnedillo. Credit: Manuel Castells.
Researchers from the Data Science and Artificial Intelligence Institute (DATAI) of the University of Navarra (Spain) have published an innovative methodology that improves the fairness and reliability of artificial intelligence models used in critical decision-making. Such decisions significantly impact people's lives or the operations of organizations in areas such as health, education, justice, and human resources.
The team, formed by researchers Alberto García Galindo, Marcos López De Castro and Rubén Armañanzas Arnedillo, has developed a new theoretical framework that optimizes the parameters of reliable machine learning models. These models are AI algorithms that transparently make predictions, ensuring certain confidence levels. In this contribution, the researchers propose a methodology able to reduce inequalities related to sensitive attributes such as race, gender, or socioeconomic status.
The study appears in Machine Learning, one of the leading scientific journals in artificial intelligence and machine learning. It combines advanced prediction techniques (conformal prediction) with algorithms inspired by natural evolution (evolutionary learning). The derived algorithms offer rigorous confidence levels and ensure equitable coverage among different social and demographic groups. This new AI framework thus provides the same reliability level regardless of individuals' characteristics, ensuring fair and unbiased results.
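To give a sense of the ingredients (this sketch is not the DATAI team's code and uses synthetic data), split conformal prediction wraps an ordinary classifier so that it outputs prediction sets calibrated to a chosen confidence level; checking fairness then amounts to comparing the empirical coverage of those sets across sensitive groups, which is the quantity the paper's evolutionary hyperparameter search balances:

```python
# Illustrative sketch of split conformal prediction with per-group coverage checks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 6000
group = rng.integers(0, 2, n)                        # synthetic sensitive attribute
X = rng.normal(size=(n, 6)) + group[:, None] * 0.3
y = (X[:, 0] + 0.5 * group + rng.normal(size=n) > 0).astype(int)

X_tr, X_rest, y_tr, y_rest, g_tr, g_rest = train_test_split(X, y, group, test_size=0.5, random_state=0)
X_cal, X_te, y_cal, y_te, g_cal, g_te = train_test_split(X_rest, y_rest, g_rest, test_size=0.5, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Nonconformity score: 1 minus the probability assigned to the true class.
cal_scores = 1 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]
alpha = 0.1
qhat = np.quantile(cal_scores, np.ceil((len(cal_scores) + 1) * (1 - alpha)) / len(cal_scores))

# Prediction set: every class whose score stays below the calibrated threshold.
proba_te = clf.predict_proba(X_te)
pred_sets = (1 - proba_te) <= qhat                   # boolean matrix: classes kept per example

covered = pred_sets[np.arange(len(y_te)), y_te]
for g in (0, 1):
    # Ideally ~0.90 for both groups; the paper tunes hyperparameters so the gap shrinks.
    print(f"group {g} coverage: {covered[g_te == g].mean():.3f}")
```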
"The widespread use of artificial intelligence in sensitive fields has raised ethical concerns due to possible algorithmic discriminations," explains Armañanzas Arnedillo, principal investigator of DATAI at the University of Navarra. "Our approach enables businesses and public policymakers to choose models that balance efficiency and fairness according to their needs, or responding to emerging regulations. This breakthrough is part of the University of Navarra's commitment to fostering a responsible AI culture and promoting ethical and transparent use of this technology.”
Application in real scenarios
Researchers tested the method on four benchmark datasets with different characteristics, drawn from real-world domains: economic income, criminal recidivism, hospital readmission, and school applications. The results showed that the new prediction algorithms significantly reduced inequalities without compromising the accuracy of the predictions. "In our analysis we found, for example, striking biases in the prediction of school admissions, evidencing a significant lack of fairness based on family financial status," notes Alberto García Galindo, DATAI predoctoral researcher at the University of Navarra and first author of the paper. "These experiments also demonstrated that, on many occasions, our methodology manages to reduce such biases without compromising the model's predictive ability. Specifically, with our model we found solutions in which discrimination was practically eliminated while prediction accuracy was maintained." The methodology offers a 'Pareto front' of optimal algorithms, "which allows us to visualize the best available options according to priorities and to understand, for each case, how algorithmic fairness and accuracy are related".
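The Pareto front the researchers describe can also be sketched with made-up numbers (illustrative only): each candidate configuration is scored on two objectives to be minimized, such as prediction error and the coverage gap between groups, and only configurations that no other configuration beats on both counts are kept as the menu of trade-offs.

```python
# Illustrative only: selecting Pareto-optimal configurations when minimizing
# two objectives, e.g. prediction error and a between-group coverage gap.
import numpy as np

rng = np.random.default_rng(2)
# Pretend each row is one evaluated hyperparameter configuration:
# column 0 = prediction error, column 1 = unfairness (group coverage gap).
objectives = rng.uniform(0.0, 1.0, size=(50, 2))

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Boolean mask of points not dominated by any other point (minimization)."""
    n = len(points)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # i is dominated if some other point is no worse on every objective
        # and strictly better on at least one.
        dominated = np.all(points <= points[i], axis=1) & np.any(points < points[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep

front = objectives[pareto_front(objectives)]
for err, gap in front[np.argsort(front[:, 0])]:
    print(f"error={err:.2f}  fairness gap={gap:.2f}")   # the trade-offs left to choose from
```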
According to the researchers, this innovation has vast potential in sectors where AI must support reliable and ethical critical decision-making. García Galindo points out that their method "not only contributes to fairness but also enables a deeper understanding of how the configuration of models influences the results, which could guide future research on the regulation of AI algorithms." The researchers have made the code and data from the study publicly available to encourage further research applications and transparency in this emerging field.
Journal
Machine Learning
Method of Research
Meta-analysis
Subject of Research
Not applicable
Article Title
Fair prediction sets through multi-objective hyperparameter optimization