AI and Copyright: From Fair Use to Transparency — Global Developments 2023–2025

Camille Abou Farhat Jr
November 2025

Over the last two years, the global AI landscape has evolved considerably. Legislators have moved from principle to enforcement, while courts have begun to grapple with the foundational intellectual property questions we raised previously: can AI be an author, and does training AI on copyrighted works amount to infringement?

A New Wave of AI Regulation

The past 24 months have seen the enactment of the first generation of AI laws worldwide:

  • The EU AI Act, which entered into force on 1 August 2024, now imposes phased obligations, beginning with prohibitions on social scoring, emotion recognition in workplaces, and real-time biometric surveillance. Transparency duties already apply to systems such as chatbots and content generators.
  • Australia released a Voluntary AI Safety Standard, setting non-binding best practices for testing, transparency, and accountability.
  • China introduced mandatory labeling rules for AI-generated content effective September 2025, alongside new national standards on AI safety and governance.
  • Japan adopted its AI Promotion Act (May 2025), transitioning from soft-law guidance to a statutory framework encouraging coordinated AI development.
  • South Korea passed the AI Basic Act, emphasizing transparency, labeling, and governance of high-risk systems.
  • The United Kingdom enacted the Data Use and Access Act (2025), which modernized data governance but omitted expected IP transparency provisions, leaving the Government to continue work on AI–copyright reform.

Authorship and Copyrightability

Across jurisdictions, courts and regulators continue to affirm that human authorship remains central to copyright protection.

Key developments include:

  • United States: The U.S. Copyright Office (USCO) reaffirmed in January 2025 that works entirely generated by AI are not copyrightable, and that prompts alone do not meet authorship standards.
  • China: Chinese courts have delivered divergent rulings, with some recognizing copyright in AI-assisted works where human refinement was significant, while others denied protection where originality was insufficient.
  • United Kingdom: The UK Intellectual Property Office has initiated a review of Section 9(3) of the Copyright, Designs and Patents Act 1988, which attributes authorship of computer-generated works to the person making the necessary arrangements. Its future remains uncertain amid questions about its compatibility with modern originality standards.

Training AI Models and Copyright Infringement

The question of whether training AI models on copyrighted data constitutes infringement remains unresolved globally. Key cases and developments include:

  • Getty Images v. Stability AI (High Court, 2025): The UK court held that training on copyrighted material does not amount to secondary infringement under UK law, reasoning that model weights store statistical associations learned from the works rather than copies of the works themselves.
  • USCO Report (May 2025): In contrast, the U.S. Copyright Office's report concluded that unlicensed use of copyrighted works in AI training may constitute prima facie infringement, rejecting claims that training is categorically transformative.
  • Thomson Reuters v. Ross Intelligence (D. Del., 2025): This decision further limited reliance on fair use, finding direct infringement in AI training that repurposed copyrighted materials.
  • Pending U.S. Cases: Several cases — including Advance Local Media v. Cohere and Concord Music Group v. Anthropic — are expected to define the limits of liability and fair use in the coming year.

Global Trends and the Path Ahead

The comparative picture reveals an emerging jurisdictional divergence:

  • Europe emphasizes procedural transparency and opt-out mechanisms for text and data mining under the DSM Directive and the AI Act.
  • The United States prioritizes substantive copyright balance through fact-specific fair use analysis.
  • The United Kingdom adopts a functional distinction between learning from data and copying data, offering relative comfort to developers who avoid direct reproduction.

Looking ahead, two regulatory trajectories are taking shape. The first is technocratic, relying on standardized metadata and machine-readable registries to enable opt-out and provenance tracking. The second is institutional, envisioning statutory or collective licensing schemes to compensate creators for the use of their works in AI training.
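To make the technocratic trajectory concrete: one machine-readable opt-out signal already in use today is the robots.txt file, in which a site operator can disallow crawlers that collect training data. The sketch below is illustrative only; the user-agent names shown (GPTBot, CCBot, Google-Extended) are publicly documented by their operators, but the list is indicative rather than exhaustive, and honoring such directives remains voluntary rather than legally settled.

```
# Illustrative robots.txt entries signaling a text-and-data-mining
# opt-out to known AI training crawlers. Compliance is voluntary;
# this is a sketch of current practice, not a legal mechanism.

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

More structured proposals in this vein, such as the W3C community draft TDM Reservation Protocol, aim to express the same reservation as standardized metadata rather than crawler-by-crawler directives.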

Which approach prevails will depend not only on law but also on political economy — the bargaining power between technology firms and creative industries, the readiness of states to legislate, and the maturity of technical systems capable of supporting transparent rights management.