Europe’s AI Act Is Agreed; Final Vote Early in 2024

In Feature Articles by Porter Anderson

The world’s first broad-based legislation on artificial intelligence is approved in European Union negotiators’ marathon sessions.

Image ─ Getty iStockphoto: LucaDP

By Porter Anderson, Editor-in-Chief | @Porter_Anderson

Von der Leyen: ‘Global Guardrails for Trustworthy AI’
When Publishing Perspectives looked last week at the European Union’s bid for the world’s first comprehensive legislation on artificial intelligence, the fifth “trilogue” was set for Wednesday (December 6), with concerns being expressed about how firmly a final version of the “AI Act” might be written in regard to “foundation models”—those major systems devised to ingest vast amounts of data and produce output as “generative AI.”

As Kelvin Chan writes for the Associated Press, the news of an accord reached shortly before midnight Friday in Brussels by negotiators from the European Parliament and the member-states may well have been overlooked by many headed into their weekend during what has become a remarkably competitive international news cycle. An initial 22-hour session of discussion had been followed by a second session opening Friday morning and going deep into the night.

European Commissioner Thierry Breton tweeted the news that an agreement had been reached.

A final, formal vote in the European Parliament must be held early in 2024. And while there are said to be details of the act still to be worked out and nailed down, the foundation-model question apparently was resolved relatively early. This is the area in which France, Italy, and Spain had initially proposed that major tech companies in the field should police themselves, something that by last week was being strongly resisted by Italy’s publishers and associated cultural organizations.

As Chan describes the upshot of the agreements reached in the new legislation, “The companies building foundation models will have to draw up technical documentation, comply with EU copyright law and detail the content used for training. The most advanced foundation models that pose ‘systemic risks’ will face extra scrutiny, including assessing and mitigating those risks, reporting serious incidents, putting cybersecurity measures in place, and reporting their energy efficiency.”

Kim Mackrael, writing for the Wall Street Journal in a filing made after 6 p.m. Friday on the United States’ West Coast, where many major tech players in the field are based, reports, “The deal agreed to by lawmakers includes bans on several AI applications, such as untargeted scraping of images to create facial-recognition databases, and sets rules for systems that lawmakers consider to be high-risk, according to a statement from the European Parliament. It also includes transparency rules for general-purpose AI systems and the models that power them,” those foundation models.

In that statement from the parliament, we read, in part, “Recognizing the potential threat to citizens’ rights and democracy posed by certain applications of AI, the co-legislators agreed to prohibit:

  • “Biometric categorization systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  • “Untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases;
  • “Emotion recognition in the workplace and educational institutions;
  • “Social scoring based on social behavior or personal characteristics;
  • “AI systems that manipulate human behavior to circumvent their free will;
  • “AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).”

What’s more, “For AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law), clear obligations were agreed. MEPs successfully managed to include a mandatory fundamental rights impact assessment, among other requirements, applicable also to the insurance and banking sectors.

“AI systems used to influence the outcome of elections and voter behavior are also classified as high-risk. Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.”

In these and more regulatory features, the cost of non-compliance will be high: “Non-compliance with the rules,” according to the parliament’s statement, “can lead to fines ranging from €35 million (US$37.7 million) or 7 percent of international turnover to €7.5 million (US$8 million) or 1.5 percent of turnover, depending on the infringement and size of the company.”
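For a rough sense of how the two quoted tiers would compare in practice, here is a minimal sketch in Python. One assumption is labeled in the code: in other EU regulations, such as the GDPR, the applicable fine is the higher of the fixed sum and the turnover percentage, and the parliament’s statement quoted above does not spell out which rule the AI Act uses.

```python
def max_fine_eur(global_turnover_eur: float, tier: str) -> float:
    """Illustrative sketch of the fine tiers quoted in the parliament's
    statement. Assumption: as in other EU regulations (e.g. the GDPR),
    the applicable fine is the higher of the fixed amount and the
    turnover percentage; the statement itself does not say this."""
    tiers = {
        # (fixed amount in euros, share of worldwide turnover)
        "top_tier": (35_000_000, 0.07),      # the €35M / 7% figure quoted
        "bottom_tier": (7_500_000, 0.015),   # the €7.5M / 1.5% figure quoted
    }
    fixed, pct = tiers[tier]
    return max(fixed, pct * global_turnover_eur)

# A hypothetical company with €2 billion in worldwide turnover, top tier:
print(max_fine_eur(2_000_000_000, "top_tier"))  # 140000000.0
```

As the example shows, for a large company the percentage figure, not the fixed sum, would dominate.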

At The New York Times, Adam Satariano writes, “even as the law was hailed as a regulatory breakthrough, questions remained about how effective it would be.

“Many aspects of the policy were not expected to take effect for 12 to 24 months, a considerable length of time for AI development. And up until the last minute of negotiations, policymakers and countries were fighting over its language and how to balance the fostering of innovation with the need to safeguard against possible harm.”

So while an agreement has been reached, its precise outlines—and how firm they may be—remain to be examined and understood.

Ursula von der Leyen

In welcoming the act, European Commission president Ursula von der Leyen said, “Our AI Act will make a substantial contribution to the development of global guardrails for trustworthy AI.

“We will continue our work at international level, in the G7, the OECD, the Council of Europe, the G20 and the UN.

“Just recently, we supported the agreement by G7 leaders under the Hiroshima AI process on International Guiding Principles and a voluntary Code of Conduct for Advanced AI systems.”

Much is to be sorted out in coming weeks and months about the new legislation, its effectiveness, and questions of enforcement. At The Hollywood Reporter today (December 11), Scott Roxborough writes that in its transparency demands for the large general-purpose programs, “The EU has used the standard applied by US President Joe Biden in his October 30 executive order, requiring only the most powerful large-language models, defined as those that use foundational models that require training upwards of 10²⁵ flops (a measure of computational complexity) to abide by new transparency laws. Companies that violate the regulations could face fines of up to 7 percent of their total global sales.”
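The 10²⁵-flops figure Roxborough cites is, at bottom, a training-compute threshold, and a toy check against it can be sketched as follows. The model names and compute estimates here are hypothetical illustrations, not figures from the article, and the act’s actual classification is reported to involve criteria beyond raw training compute.

```python
# Illustrative sketch of the reported training-compute threshold for
# extra transparency obligations: 10^25 floating-point operations.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def faces_extra_scrutiny(training_flops: float) -> bool:
    """Return True if an estimated training-compute figure meets the
    reported 10^25-flops threshold (a simplification of the act)."""
    return training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical models with made-up compute estimates:
for name, flops in [("model_a", 3e24), ("model_b", 2e25)]:
    print(name, faces_extra_scrutiny(flops))
# model_a False
# model_b True
```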

For the moment, as details are hammered out and examined by industry parties—many of whom in the big-tech and music industries had relentlessly lobbied members of the European Parliament for months—many in the book publishing industry may be cheered that, as Richard Smirke writes at Billboard, “Earlier versions of the bill decreed that companies using generative AI or foundation AI models like OpenAI’s ChatGPT or Anthropic’s Claude 2 would be required to provide summaries of any copyrighted works, including music, that they use to train their systems.”

And Europe has succeeded in its goal of being the first major governmental entity to formalize a broad body of legislation on AI.


More from Publishing Perspectives on artificial intelligence is here, more on the European Union is here, more on Italy’s publishers association, the Associazione Italiana Editori, is here, and more on Italy’s publishing market is here.

About the Author

Porter Anderson


Porter Anderson has been named International Trade Press Journalist of the Year in London Book Fair's International Excellence Awards. He is Editor-in-Chief of Publishing Perspectives. He formerly was Associate Editor for The FutureBook at London's The Bookseller. Anderson was for more than a decade a senior producer and anchor with CNN.com, CNN International, and CNN USA. As an arts critic (Fellow, National Critics Institute), he was with The Village Voice, the Dallas Times Herald, and the Tampa Tribune, now the Tampa Bay Times. He co-founded The Hot Sheet, a newsletter for authors, which now is owned and operated by Jane Friedman.
