European AI treaty adds uncertainty for CIOs, but few specifics

An AI usage treaty, negotiated by representatives of 57 countries, was unveiled Thursday, but its language is so broad that it’s unclear whether enterprise CIOs will need to do anything differently to comply.

This mostly European effort adds to a lengthy list of global AI compliance efforts, on top of many new legal attempts to govern AI in the United States. The initial signatories were Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, and the United Kingdom, as well as Israel, the United States of America, and the European Union.

In its announcement, the Council of Europe said, “there are serious risks and perils arising from certain activities within the lifecycle of artificial intelligence such as discrimination in a variety of contexts, gender inequality, the undermining of democratic processes, impairing human dignity or individual autonomy, or the misuses of artificial intelligence systems by some States for repressive purposes, in violation of international human rights law.”

What the treaty says

The treaty, formally titled the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, does emphasize that companies must make it clear to users whether they are communicating with a human or an AI.

Companies under the treaty must give “notice that one is interacting with an artificial intelligence system and not with a human being” as well as “carry out risk and impact assessments in respect of actual and potential impacts on human rights, democracy and the rule of law.”

Entities must also document relevant information about their AI systems and how they are used, and be ready to make that information available to anyone affected by them. The agreement says that entities must “document the relevant information regarding AI systems and their usage and to make it available to affected persons. The information must be sufficient to enable people concerned to challenge the decision(s) made through the use of the system or based substantially on it, and to challenge the use of the system itself” and to be able to “lodge a complaint to competent authorities.”
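As a rough illustration of the kind of record-keeping the treaty contemplates, consider a minimal sketch in Python; the record fields, names, and log format here are hypothetical assumptions for illustration, not requirements drawn from the treaty text. The idea is simply to capture each AI-assisted decision with enough context that an affected person could later challenge it:

# Hypothetical audit record for an AI-assisted decision. Field names are
# illustrative assumptions, not requirements taken from the treaty text.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    system_name: str      # which AI system produced or informed the decision
    model_version: str    # version identifier, so the decision can be traced
    decision: str         # the outcome communicated to the affected person
    inputs_summary: str   # what data the system relied on
    human_reviewed: bool  # whether a person reviewed the output
    timestamp: str        # when the decision was made (ISO 8601, UTC)

def log_decision(record: AIDecisionRecord, path: str = "ai_audit_log.jsonl") -> None:
    # Append one JSON line per decision so records can later be produced
    # to an affected person or a competent authority on request.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(AIDecisionRecord(
    system_name="loan-screening-assistant",
    model_version="2024-09-01",
    decision="application declined",
    inputs_summary="credit history, stated income",
    human_reviewed=True,
    timestamp=datetime.now(timezone.utc).isoformat(),
))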

Double standard

One observer of the treaty negotiation process, Francesca Fanucci, a legal specialist at ECNL (European Center for Not-for-Profit Law Stichting), described the final text as having been “watered down”, mostly in its treatment of private companies and national security.

“The formulation of principles and obligations in this convention is so overbroad and fraught with caveats that it raises serious questions about their legal certainty and effective enforceability,” she told Reuters.

The final document does explicitly exclude national security matters: “Matters relating to national defence do not fall within the scope of this Convention.”

In an interview with Computerworld, Fanucci said that the final version of the treaty treats businesses very differently than governments.

The treaty “establishes obligations for State Parties, not for private actors directly. This treaty imposes on the State Parties to apply its rules to the public sector, but to choose if and how to apply them in their national legislation to the private sector. This is a compromise reached with the countries who specifically asked to have the private sector excluded, among these were the US, Canada, Israel and the UK,” Fanucci said. “They are practically allowed to place a reservation to the treaty.”

“This double standard is disappointing,” she added.

Lack of specifics

Tim Peters, an officer of Canadian compliance firm Enghouse Systems, was one of many who applauded the idea and intent of the treaty while questioning its specifics.

“The Council of Europe’s AI treaty is a well-intentioned but fundamentally flawed attempt to regulate a rapidly evolving space with yesterday’s tools. Although the treaty touts itself as technology-neutral, this neutrality may be its Achilles’ heel,” Peters said. “AI is not a one-size-fits-all solution, and attempting to apply blanket rules that govern everything from customer service bots to autonomous weapons could stifle innovation and push Europe into a regulatory straitjacket.”

Peters added that this could ultimately undermine enterprise AI efforts. 

“Enterprise IT executives should be concerned about the unintended consequences: stifling their ability to adapt, slowing down AI development, and driving talent and investment to more AI-friendly regions,” Peters said. “Ultimately, this treaty could create a competitive divide between companies playing it safe in Europe and those pushing boundaries elsewhere. Enterprises that want to thrive need to think critically about the long-term impact of this treaty, not just on AI ethics, but on their ability to innovate.”

Another industry executive, Trustible CTO Andrew Gamino-Cheong, also questioned the agreement’s lack of specifics.

“The actual contents of the treaty aren’t particularly strong and are mostly high level statements of principles. But I think it’s mostly an effort for countries to unify in asserting their rights as sovereign entities over the digital world. For some context on what I mean, I see what’s happening with Elon Musk and Brazil as a good example of the challenges governments face with tech,” Gamino-Cheong said. “It is technologically difficult to block Starlink in Brazil, which can in turn allow access to X, which is able to set its own content rules and dodge what Brazil wants them to do. Similarly, even though Clearview AI doesn’t legally operate in the EU, their having EU citizens’ data is enough for GDPR lawsuits against them there.”

Ernst & Young managing director Brian Levine addressed questions about the enforceability of the treaty, especially for companies in the United States, even though the US was one of the signatories. It is not uncommon for American companies to ignore European fines and penalties.

“One step at a time. You can’t enforce shared rules and norms until you first reach agreement on what the rules and norms are,” Levine said. “We are rapidly exiting the ‘Wild West’ phase of AI. Get ready for the shift from too little regulation and guidance to too much.”

The treaty will enter into force “on the first day of the month following the expiration of a period of three months after the date on which five signatories, including at least three Council of Europe member states, have ratified it,” the announcement said. 
