By Porter Anderson, Editor-in-Chief | @Porter_Anderson
House of Lords: Government ‘Cannot Sit on Its Hands’

Today (February 2), there are sounds of approval coming from publishers in several parts of Europe, as two developments in policy on artificial intelligence move in directions the book industry’s leadership feels can be constructive.
- In London, the Publishers Association’s CEO Dan Conway is applauding a report released this morning by the House of Lords’ Communications and Digital Committee, a document that warns against allowing large language models (LLMs) to exploit the work of rights holders for financial gain.
- And in Brussels, the Federation of European Publishers’ president Ricardo Franco Levi is being joined by publishers in Germany and elsewhere in welcoming today’s vote by the European Union’s permanent representatives committee, known as the “Coreper” (Comité des représentants permanents), in favor of the “AI Act,” recognized as “the world’s first concrete regulation of AI.”
We previewed the European Union’s Coreper vote yesterday, and it’s good to find Levi saying today, “In a context in which the abuses of AI are more and more documented and contested—both in the EU and internationally—the EU has once again the opportunity to set a world standard in digital regulation, and allow AI to unleash its potential without infringing the rights of others.”
The act itself is still slated to face a plenary vote in Parliament later in the spring, but the federation’s assessment of this new European legislation is that it “introduces basic obligations in the field of copyright, recalling that general-purpose AI (such as generative AI) must respect copyright law and have policies in place to this effect. It will also ensure that these [artificial intelligence systems] are transparent on the data used for their training.”
The Börsenverein des Deutschen Buchhandels, Germany’s publishers and booksellers association, is joining the federation in welcoming the Coreper’s support, but cautions that the AI Act in its current form leaves several significant points unresolved.
Börsenverein general manager Peter Kraus vom Cleff says, “The AI Act is a good first step toward regulating AI. However, it only sets minimal standards. This law also leaves many details open that still need to be discussed and regulated. We are only at the beginning of what’s expected to be a long debate on the question of how we as a society and industry can use AI sensibly and what guardrails are needed for this.
“In particular,” he says, the association “welcomes the fact that the weakening of the transparency obligations for AI providers, which was temporarily requested by Germany, France, and Italy, was not incorporated into the law.
“Instead, providers of generative AI systems that use artistic, creative, and journalistic content as source material are now legally obliged to be transparent. This is expressly intended to enable rights holders to exercise and enforce their copyrights. Purely AI-generated content must be labeled as such.”
Nevertheless, the Börsenverein points out, some key points regarding the use of protected works by AI remain unregulated, such as the question of remuneration for content that flows into generative AI. Kraus vom Cleff says, “Authors and publishers must be able to profit economically from the use of their works, because without the influx of new content thought up and made by creative people, generative AI degenerates and therefore becomes worthless.”
Conway: ‘A Pivotal Moment for the UK’s Approach’
At the UK’s Publishers Association, Conway points to the importance of the newly released report from the House of Lords’ Communications and Digital Committee, which is notable both for its rejection of “sci-fi end-of-the-world scenarios” and for its frank recognition of the profound threat posed by the training of AI on copyrighted content.
The baroness Stowell of Beeston, who chairs the committee, says, in part, in her commentary, “One area of AI disruption that can and should be tackled promptly is the use of copyrighted material to train LLMs. LLMs rely on ingesting massive datasets to work properly but that does not mean they should be able to use any material they can find without permission or paying rights holders for the privilege. This is an issue the government can get a grip of quickly and it should do so.”
That area of concern prompts Conway to say, “This report rightly recognizes that the benefits of AI do not warrant the violation of copyright law and its underlying principles.
“As the committee states, it is not fair for tech firms to use rights holders’ content for huge financial gain without permission or compensation.
“The Publishers Association welcomes the prominent call for the government to take action to support rights holders.
“We gave evidence to the committee’s inquiry last year and it’s great to see their report backing many of our key arguments—that LLMs shouldn’t use copyright-protected works without permission or compensation, that there should be support for licensing, that there should be transparency, and that the government should legislate if necessary.
“Publishers have long embraced the benefits of AI in their work and share the committee’s ambition for a positive vision on AI, where the myriad opportunities are embraced but rights holders and human creativity are respected, permissions are sought, and licensing is supported. This report is a call to action for government at a pivotal moment for the UK’s approach to AI.”
The New British Report: ‘Addressing Risks’
What Conway is referring to becomes readily apparent in even a quick scan of the 95-page report’s executive summary.
“Some tech firms are using copyrighted material without permission, reaping vast financial rewards. … The government has a duty to act. It cannot sit on its hands for the next decade and hope the courts will provide an answer.”
House of Lords Communications and Digital Committee, ‘Large Language Models and Generative AI’
Maybe most unexpectedly, the House of Lords committee writes that “a more positive vision for large language models is needed to reap the social and economic benefits, and enable the UK to compete globally.”
Indeed, the committee writes, “The government’s approach to artificial intelligence and large language models has become too focused on a narrow view of AI safety. The UK must re-balance toward boosting opportunities while tackling near-term security and societal risks. It will otherwise fail to keep pace with competitors, lose international influence, and become strategically dependent on overseas tech firms for a critical technology.”
And yet, relative to the world publishing industry’s concerns around generative AI trained on unlicensed content, the committee writes, “We have even deeper concerns about the government’s commitment to fair play around copyright. Some tech firms are using copyrighted material without permission, reaping vast financial rewards. The legalities of this are complex but the principles remain clear. The point of copyright is to reward creators for their efforts, prevent others from using works without permission, and incentivize innovation.
“The current legal framework is failing to ensure these outcomes occur and the government has a duty to act. It cannot sit on its hands for the next decade and hope the courts will provide an answer.”
Some of the points raised by the report:
- “Prepare quickly: The UK must prepare for a period of protracted international competition and technological turbulence as it seeks to take advantage of the opportunities provided by LLMs.”
- “Treat open and closed arguments with care: Open models offer greater access and competition, but raise concerns about the uncontrollable proliferation of dangerous capabilities. Closed models offer more control but also more risk of concentrated power. A nuanced approach is needed. The government must review the security implications at pace while ensuring that any new rules support rather than stifle market competition.”
- “We call for a suite of measures to boost computing power and infrastructure, skills, and support for academic spinouts. The government should also explore the options for and feasibility of developing a sovereign LLM capability, built to the highest security and ethical standards.”
- “The government should prioritize fairness and responsible innovation. It must resolve disputes definitively (including through updated legislation if needed); empower rightsholders to check if their data has been used without permission; and invest in large, high‑quality training datasets to encourage tech firms to use licensed material.”
- “Catastrophic risks (above 1,000 UK deaths and tens of billions in financial damages) are not likely within three years but cannot be ruled out, especially as next‑generation capabilities come online. There are however no agreed warning indicators for catastrophic risk. There is no cause for panic, but this intelligence blind spot requires immediate attention. Mandatory safety tests for high‑risk high‑impact models are also needed: relying on voluntary commitments from a few firms would be naïve and leaves the government unable to respond to the sudden emergence of dangerous capabilities. Wider concerns about existential risk (posing a global threat to human life) are exaggerated and must not distract policymakers from more immediate priorities.”
There’s much to read in the new House of Lords report, which is unusual for its blend of cautionary outlook and developmental encouragement. You can find this thoughtfully balanced report here (PDF).
A Programming Note
At London Book Fair (March 12 to 14), the Publishers Association’s Dan Conway will join Publishing Perspectives for Copyright and AI: A Global Discussion of Machines, Humans, and the Law, a part of the trade fair’s scheduled Main Stage series of events.
The session is scheduled for 3:15 to 4 p.m. on the fair’s opening day, March 12, on the Main Stage.
In looking at the implications of AI for copyright law and policy, our speakers will delve into the legal and ethical dimensions across different continents, while also navigating the inherent complexities of regulating this new technology.
- Maria A. Pallante, president and CEO, Association of American Publishers (AAP)
- Glenn Rollans, former president, Association of Canadian Publishers (ACP) and president and publisher of Brush Education
- Nicola Solomon, CEO, Society of Authors (SoA)
- Dan Conway, Publishers Association (PA)
We hope you’ll join us for this timely evaluation of the issues with a group of insightful specialists.
More from Publishing Perspectives on artificial intelligence is here, more on the European Union is here, more on the Federation of European Publishers is here, more on the Publishers Association is here, more on the United Kingdom is here, and more on the publishing markets and their issues in Europe is here.