Thursday, September 05, 2024

Council of Europe opens world’s first global AI treaty for signature

The Council of Europe opened the world’s first legally binding global treaty on artificial intelligence (AI) for signature on Thursday. Unveiled at a conference in Vilnius, Lithuania, this historic treaty sets a new international standard by ensuring that AI systems align with human rights, democratic values, and the rule of law.

Formally known as the Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law (CETS No. 225), the treaty marks a pivotal moment in global AI regulation. It represents the first international agreement on AI governance and has been signed by the European Union (EU), the UK, the US, Israel, Andorra, Georgia, Iceland, Norway, the Republic of Moldova, and San Marino. Adopted by the Council of Europe’s Committee of Ministers on May 17, 2024, the treaty establishes a comprehensive legal framework covering the entire lifecycle of AI systems, from design and development to deployment and decommissioning. It addresses potential risks while promoting responsible innovation, using a technology-neutral approach designed to adapt as AI evolves.

As the first legally binding global framework for AI, the convention harmonizes with EU law, including the EU AI Act, the world’s first comprehensive AI regulation. The EU, through the European Commission and its member states, played an active role in the negotiations and contributed significantly to the convention’s development. The treaty aligns with the principles outlined in the EU AI Act and other EU regulations, incorporating fundamental elements such as:

  • A focus on human-centric AI, consistent with human rights, democracy, and the rule of law
  • A risk-based approach
  • Key principles for trustworthy AI (e.g., transparency, robustness, safety, data governance and protection)
  • Transparency requirements for AI-generated content and for interactions with AI systems
  • Strengthened documentation, accountability, and remedies
  • Support for safe innovation through regulatory sandboxes
  • Risk management obligations
  • Documentation obligations
  • Oversight mechanisms for the supervision of AI activities

The treaty will come into force three months after five signatories, including at least three Council of Europe member states, ratify it. This provision ensures that the treaty can be implemented effectively while allowing time for widespread adoption. Negotiated by the Council of Europe, the treaty involved contributions from its 46 member states, the EU, and 11 non-member states: Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the US, and Uruguay. Representatives from the private sector, civil society, and academia also participated as observers.

Council of Europe Secretary General Marija Pejčinović Burić highlighted the treaty’s significance, stating:

“We must ensure that the rise of AI upholds our standards rather than undermining them. The Framework Convention is designed to ensure just that. It is a strong and balanced text – the result of the open and inclusive approach by which it was drafted and which ensured that it benefits from multiple and expert perspectives. The Framework Convention is an open treaty with a potentially global reach. I hope that these will be the first of many signatures and that they will be followed quickly by ratifications, so that the treaty can enter into force as soon as possible.”

Under the treaty, parties must ensure legal remedies for victims of AI-related human rights violations and provide procedural safeguards, such as notifying individuals when they are interacting with AI systems. The treaty also requires parties to prevent AI from undermining democratic institutions and processes, including the separation of powers, judicial independence, and access to justice. While the convention does not apply to national security activities, it mandates that such activities comply with international law and democratic principles. It also excludes national defense matters and research and development activities, except when the testing of AI systems could affect human rights, democracy, or the rule of law.
