Issue
Can AI companies lawfully use copyrighted material to train their models without permission or payment?
Short Answer/Summary
The UK government’s proposed opt-out model for AI training was shelved after widespread opposition. For now, liability is being determined through litigation rather than legislation, leaving uncertainty for both creators and AI developers.
Background/Facts
The UK’s copyright framework, under the Copyright, Designs and Patents Act 1988, provides only a narrow text and data mining (TDM) exception for non-commercial research. This means commercial AI training currently lacks a clear legal basis.
In late 2024, the government consulted on reform options, ranging from maintaining the current law to introducing a broader exception. Its preferred approach was an opt-out model, allowing AI developers to use copyrighted works unless rights holders actively reserved their rights.
The proposal triggered over 10,000 responses, with the majority rejecting it. Creative industries favoured a licensing-based model requiring permission and payment, while AI developers argued that mandatory licensing would hinder innovation.
On 18 March 2026, ministers confirmed they were stepping back, with no legislative reform expected before 2027.
Analysis
The central question is whether AI companies should be able to use copyrighted material to train their models without permission, and if so, on what terms.
The government’s opt-out model would have favoured AI developers by default but shifted the burden onto rights holders to actively protect their work. A licensing-based approach better reflects the principles of copyright law, but requiring licences for all training data may be commercially unrealistic given the scale of AI development.
This tension is already playing out in the courts. In Getty Images v Stability AI (UK, November 2025), the High Court ruled that Stability AI was not liable for secondary infringement, largely because Getty could not establish that the training took place within the UK. In Germany, GEMA v OpenAI reached a different conclusion, finding that training on copyrighted song lyrics infringed German copyright, with the TDM exception held not to apply.
Without clear legislative guidance, liability is being determined piecemeal through litigation, leaving both creators and AI developers in a fragmented legal landscape where outcomes depend on jurisdiction.
Conclusion
The government’s decision to delay reform leaves a significant gap in the law. With cases like Getty Images v Stability AI now determining the boundaries, copyright in AI training is increasingly being shaped in the courts rather than in Parliament.
A strict licensing model may better protect creators, while a broad exception may better support innovation, but neither is risk-free. A hybrid framework combining licensing with meaningful transparency obligations is the most plausible long-term compromise.