Council of Europe’s AI Agreement: Opportunities and Challenges

The Council of Europe has taken a major step in addressing the challenges and opportunities presented by AI with its new AI treaty. As major tech companies race ahead, the pace of AI development is both exciting and worrying.

On one hand, AI is revolutionizing important sectors like healthcare and education. On the other, it is increasingly implicated in harms such as deepfakes, misinformation, and privacy breaches, which underscores the need for strong regulation.

After years of work, the Council of Europe adopted the Framework Convention on AI on May 17, 2024. While many in the tech industry support the idea of regulating AI, some are concerned that the treaty may slow down innovation. They believe that the balance between managing risks and promoting growth hasn’t been properly evaluated.

Key Takeaways

  • The Framework Convention is the first legally binding global treaty on AI regulation.
  • It recognizes the benefits AI brings but also focuses on protecting against risks like discrimination, privacy violations, and misuse.
  • There are concerns that current regulations may limit innovation and create unfair competition, especially for smaller companies that may struggle with the complex rules.
    Current AI Regulatory Efforts

    Several laws and declarations have already been introduced to address AI safety:

    • The Bletchley Declaration (November 2023)
    • California Senate Bill SB-1047 (February 2024)
    • The EU AI Act (in force since August 2024)
    • The Council of Europe’s Framework Convention on AI (May 2024)

    A Big Step in AI Governance

    The Framework Convention, adopted by the Council of Europe, opened for signature on September 5, 2024, at a conference in Vilnius. The United Kingdom and the European Union were among the first signatories. The UK’s representative, Lord Chancellor Shabana Mahmood, noted that AI could significantly improve public services and fuel economic growth.

    The treaty acknowledges AI's power to promote societal benefits, such as sustainable development and gender equality, but it also recognizes the dangers AI poses. Mahmood emphasized the importance of humans controlling AI, not the other way around, warning that AI could undermine core values like human rights if left unchecked.

    This convention stands out because it’s legally binding, meaning countries that sign it must comply with the agreed rules. Three key principles underlie the treaty:

  • Protecting human rights: Ensuring responsible data usage and preventing discriminatory AI systems.
  • Protecting democracy: Preventing AI from harming public institutions and democratic processes.
  • Protecting the rule of law: Regulating AI to protect citizens from potential risks.

    However, balancing the promotion of innovation with the need for safety and accountability is a challenge. Secretary of State Peter Kyle highlighted that trust in AI innovations is essential for realizing their full potential.


    Experts’ Opinions

    Gary Marcus called for strong ethical standards and transparency in AI development, arguing that tech giants should not be left to shape humanity's future alone and stressing the importance of accountability in AI systems.

    Kate Deniston and Louise Lanzkron, lawyers at Bird & Bird, warned that the treaty’s broad principles may be applied differently in each country, creating an uneven playing field and hindering innovation in some jurisdictions.

    Mark Zuckerberg and Daniel Ek voiced their concerns about overregulation, saying that while regulation is necessary to prevent known harms, overly strict rules could stifle new technologies like open-source AI, especially in Europe.

    Dr. Fei-Fei Li criticized California’s Senate Bill SB-1047 for potentially harming innovation. She believes AI policy should encourage growth while setting appropriate limits, warning that poorly designed regulations could have negative consequences.

    Andrew Ng has also expressed concerns about SB-1047, particularly regarding its vague requirements. He pointed out that expecting developers to predict all possible harms from their AI is unrealistic and could burden smaller companies with legal challenges.



    Impact on Smaller Companies

    While big tech may have the resources to navigate complex regulations, smaller AI firms could face difficulties. Strict legal frameworks may make it harder for these companies to comply, which could stifle their ability to innovate.


    Final Thoughts

    Despite good intentions, some critics feel that current AI regulations could hinder innovation. However, the Framework Convention marks a crucial step toward managing AI's global impact. By focusing on human rights, democracy, and the rule of law, this treaty seeks to balance progress with protection—a task that will always require careful handling.



    FAQs

    Q. What is the Council of Europe's AI Convention? 

    A. The AI Convention is the first international, legally binding treaty aimed at regulating AI.


    Q. What’s the status of the EU AI Act? 

    A. The EU AI Act, adopted in May 2024 and in force since August 2024, is the European Union’s first comprehensive legal framework for AI regulation.


    Q. What’s the EU’s AI policy? 

    A. The EU’s policy revolves around the AI Act, which aims to create consistent regulations for AI across its member states.