Tuesday, February 04, 2025

  

A roadmap for protecting our democracies in the age of AI

University of Ottawa

The year 2025 will be an important one for democracies worldwide: several dozen countries are due to hold national elections in a world profoundly transformed by artificial intelligence (AI). Recent cases – notably in Romania, Brazil, Gabon and the United States – illustrate the need for action to protect electoral integrity, as the proliferation of fake news and, in particular, the growing use of deepfakes are undermining public confidence and the quality of democratic debate.

To meet these challenges, academic experts from the Global North and South have proposed actions to help our institutions guard against the negative effects and risks of AI interference in elections and democratic processes.

This framework comprises four priority actions:

  1. Modernizing regulatory frameworks by adopting clear rules governing the use of AI during elections.
  2. Adopting codes of conduct for the use of AI by political parties.
  3. Establishing independent teams to monitor electoral integrity and prepare public response plans in the event of AI-fueled threats to elections.
  4. Developing international AI electoral trust keepers and international legal assistance protocols to respond to cases of AI-based electoral interference.

A major global initiative

These recommendations have been developed as part of the Global Policy Briefs on AI, a new joint endeavor of IVADO, Canada's leading AI research and knowledge mobilization consortium, and the University of Ottawa's AI + Society Initiative. This project aims to provide policymakers with public policy recommendations to address today's major global AI challenges.

For the first brief in this series – 'AI in the Ballot Box: Four Actions to Safeguard Election Integrity and Uphold Democracy' – Prof. Catherine Régis (Université de Montréal and IVADO) and Prof. Florian Martin-Bariteau (University of Ottawa) brought together researchers representing North America, South America, Africa and Europe.

“This mobilization reflects how important it is to take action, but also represents a unique opportunity to help shape the future of our democracies. By pooling academic expertise on an international scale, we can develop solutions that will preserve the integrity of democratic processes,” says Professor Catherine Régis, Director of Social Innovation and International Policy at IVADO.

“With our democracies under threat, AI-driven interference requires swift and concrete actions from leaders – both at the national and international level. Without a concerted global effort to align laws, build capacity, and develop processes to mitigate AI risks, Canada – and democracies around the world – remain vulnerable,” says Professor Florian Martin-Bariteau, Director of the AI + Society Initiative at the University of Ottawa.

Next steps

IVADO, the University of Ottawa AI + Society Initiative, and their partners will present the recommendations contained in this first brief at an event on the sidelines of the AI Action Summit on Monday, February 10, 2025, at the Université Paris 1 Panthéon-Sorbonne in Paris.

A further retreat is scheduled for the end of 2025 to produce a global policy brief on another major challenge raised by AI.

The project was supported by the Fonds de recherche du Québec, the CEIMIA, the Canada-CIFAR Chair in AI and Human Rights at Mila, and the University of Ottawa Research Chair in Technology and Society. The week-long retreat was organized with the help of the Délégation du Québec à Rome and the Società Italiana per l'Organizzazione Internazionale.

Generative AI bias poses risk to democratic values

University of East Anglia

Generative AI, a technology that is developing at breakneck speed, may carry hidden risks that could erode public trust and democratic values, according to a study led by the University of East Anglia (UEA). 

In collaboration with researchers from the Getulio Vargas Foundation (FGV) and Insper, both in Brazil, the research showed that ChatGPT exhibits biases in both text and image outputs — leaning toward left-wing political values — raising questions about fairness and accountability in its design.  

The study revealed that ChatGPT often declines to engage with mainstream conservative viewpoints while readily producing left-leaning content. This uneven treatment of ideologies underscores how such systems can distort public discourse and exacerbate societal divides. 

Dr Fabio Motoki, a Lecturer in Accounting at UEA’s Norwich Business School, is the lead researcher on the paper, ‘Assessing Political Bias and Value Misalignment in Generative Artificial Intelligence’, published today in the Journal of Economic Behavior & Organization. 

Dr Motoki said: “Our findings suggest that generative AI tools are far from neutral. They reflect biases that could shape perceptions and policies in unintended ways.” 

As AI becomes an integral part of journalism, education, and policymaking, the study calls for transparency and regulatory safeguards to ensure alignment with societal values and principles of democracy. 

Generative AI systems like ChatGPT are re-shaping how information is created, consumed, interpreted, and distributed across various domains. These tools, while innovative, risk amplifying ideological biases and influencing societal values in ways that are not fully understood or regulated. 

Co-author Dr Pinho Neto, a Professor in Economics at EPGE Brazilian School of Economics and Finance, highlighted the potential societal ramifications. 

Dr Pinho Neto said: “Unchecked biases in generative AI could deepen existing societal divides, eroding trust in institutions and democratic processes. 

“The study underscores the need for interdisciplinary collaboration between policymakers, technologists, and academics to design AI systems that are fair, accountable, and aligned with societal norms.” 

The research team employed three innovative methods to assess political alignment in ChatGPT, advancing prior techniques to achieve more reliable results. These methods combined text and image analysis, leveraging advanced statistical and machine learning tools. 

First, the study used a standardized questionnaire developed by the Pew Research Center to simulate responses from average Americans.  

“By comparing ChatGPT’s answers to real survey data, we found systematic deviations toward left-leaning perspectives,” said Dr Motoki. “Furthermore, our approach demonstrated how large sample sizes stabilize AI outputs, providing consistency in the findings.” 
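As a rough illustration of this first step, the sketch below repeatedly polls a model with a survey-style question and compares its answer shares to a benchmark. It assumes the OpenAI Python client; the question wording, model name, and “real survey” percentages are hypothetical placeholders, not the Pew items or figures used in the paper.

```python
# Minimal sketch of the questionnaire-comparison step described above.
# Assumes the OpenAI Python client (pip install openai); the question and
# benchmark shares below are invented placeholders, not the paper's materials.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "If you were an average American, would you say the government should do "
    "more to help the needy, or is it doing too much? Answer only 'more' or "
    "'too much'."
)
REAL_SURVEY = {"more": 0.52, "too much": 0.48}  # placeholder benchmark shares

def ask_once() -> str:
    """Query the model once and normalize the answer."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": QUESTION}],
        temperature=1.0,  # keep sampling noise; a large n averages it out
    )
    return resp.choices[0].message.content.strip().lower()

# Repeated sampling: the paper notes that large samples stabilize AI outputs.
n = 200
counts = Counter(ask_once() for _ in range(n))
model_shares = {k: v / n for k, v in counts.items()}

for option, bench in REAL_SURVEY.items():
    share = model_shares.get(option, 0.0)
    print(f"{option!r}: model {share:.2f} vs survey {bench:.2f} "
          f"(gap {share - bench:+.2f})")
```

Systematic positive gaps on one side of the spectrum, stable across large samples, would be the kind of deviation the authors describe.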

In the second phase, ChatGPT was tasked with generating free-text responses across politically sensitive themes.  

The study also used RoBERTa, a different large language model, to compare ChatGPT’s text for alignment with left- and right-wing viewpoints. The results revealed that while ChatGPT aligned with left-wing values in most cases, on themes like military supremacy, it occasionally reflected more conservative perspectives. 
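Such a comparison could be implemented along the following lines. This is a sketch of one plausible approach, scoring a generated text against left- and right-leaning reference passages with RoBERTa embeddings and cosine similarity; it is not the authors’ exact procedure, and the reference passages are invented placeholders.

```python
# One plausible RoBERTa-based alignment check; a sketch, not the paper's
# exact method. Requires `pip install transformers torch`. The reference
# passages and the example "generated" text are hypothetical placeholders.
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pool the last hidden layer into a single sentence vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

LEFT_REF = "The state should expand social programs and regulate markets."
RIGHT_REF = "Markets should be free and government kept small."

generated = "Universal public healthcare is a basic right."  # model output

g, l, r = embed(generated), embed(LEFT_REF), embed(RIGHT_REF)
cos = torch.nn.functional.cosine_similarity
left_score = cos(g, l, dim=0).item()
right_score = cos(g, r, dim=0).item()
print(f"left {left_score:.3f} vs right {right_score:.3f} -> "
      f"{'left-leaning' if left_score > right_score else 'right-leaning'}")
```

Aggregating such scores over many themes is what would reveal the overall left-leaning tendency, with exceptions such as military supremacy.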

The final test explored ChatGPT’s image generation capabilities. Themes from the text generation phase were used to prompt AI-generated images, with outputs analysed using GPT-4 Vision and corroborated through Google’s Gemini.  
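A minimal sketch of such a multimodal pipeline appears below, assuming the OpenAI client for both image generation and the vision analysis (the corroboration step with Google’s Gemini is omitted); the prompt wording and labels are hypothetical, not the study’s materials.

```python
# Sketch of the image-generation and vision-analysis step described above.
# Assumes the OpenAI Python client; prompts and labels are hypothetical.
import base64

from openai import OpenAI

client = OpenAI()

def generate_image(prompt: str) -> bytes:
    """Generate one image for a politically framed prompt."""
    img = client.images.generate(model="dall-e-3", prompt=prompt,
                                 response_format="b64_json", n=1)
    return base64.b64decode(img.data[0].b64_json)

def classify_leaning(image_bytes: bytes) -> str:
    """Ask a vision-capable model which way the image leans."""
    b64 = base64.b64encode(image_bytes).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Does this image reflect a left-leaning or "
                         "right-leaning framing of its theme? One word."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content.strip()

image = generate_image("A society that values racial-ethnic equality")
print(classify_leaning(image))
```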

“While image generation mirrored textual biases, we found a troubling trend,” said Victor Rangel, co-author and a Master’s student in Public Policy at Insper. “For some themes, such as racial-ethnic equality, ChatGPT refused to generate right-leaning perspectives, citing misinformation concerns. Left-leaning images, however, were produced without hesitation.” 

To address these refusals, the team employed a ‘jailbreaking’ strategy to generate the restricted images. 

“The results were revealing,” Mr Rangel said. “There was no apparent disinformation or harmful content, raising questions about the rationale behind these refusals.” 

Dr Motoki emphasized the broader significance of this finding, saying: “This contributes to debates around constitutional protections like the US First Amendment and the applicability of fairness doctrines to AI systems.” 

The study’s methodological innovations, including its use of multimodal analysis, provide a replicable model for examining bias in generative AI systems. These findings highlight the urgent need for accountability and safeguards in AI design to prevent unintended societal consequences. 

The paper, ‘Assessing Political Bias and Value Misalignment in Generative Artificial Intelligence’, by Fabio Motoki, Valdemar Pinho Neto, and Victor Rangel, is published on 4 February 2025 in the Journal of Economic Behavior & Organization. 

 
