The impending introduction of the European Union’s Artificial Intelligence Act represents the bloc’s latest attempt to cement its status as a regulatory powerhouse. Meanwhile, the United States lacks a cohesive AI regulatory framework. Instead, a surge of litigation has overwhelmed US courts, with leading AI firms being sued for copyright infringement, data-privacy breaches, defamation, and discrimination.
Given that litigation is expensive and often drags on for years, the EU’s strategy may appear more forward-looking. But the common-law system might actually prove to be a more effective mechanism for tackling the myriad challenges posed by generative AI. This is particularly evident in copyright law, where the core question is whether the training of large language models (LLMs) should qualify as fair use, a classification that would exempt tech firms from compensating content creators. For its part, the EU’s AI Act includes a provision mandating the disclosure of copyrighted materials, enabling copyright holders to opt out of AI training databases.
But the EU’s sweeping regulation could backfire if European regulators fail to strike an appropriate balance between innovation and equity in addressing the question of fair use. For starters, restricting the use of copyrighted materials for LLM training could raise data-acquisition costs, potentially curbing the growth of the AI industry.
At the same time, a growing number of commentators and policymakers have warned that without fair compensation for content creators, the creative sector could collapse. And the future development of AI technologies depends heavily on the availability of high-quality, human-generated content. As studies have shown, training AI models on AI-generated data could corrupt them, potentially to the point of complete failure.
To be sure, striking the right balance between these two conflicting policy priorities will not be easy. Imagine a scenario in which data for AI training are abundant, particularly in emerging areas like text-to-video generation. Under these circumstances, regulation would have little effect on the amount of data available to startups aiming to refine their LLMs. By adopting a more permissive approach to fair use, regulators could enable firms to improve the quality of their models, thereby boosting profits for both AI companies and content creators and enhancing overall consumer welfare.
But these dynamics can shift quickly when data for training AI models, particularly models that rely heavily on new content, are scarce. In such a scenario, permissive fair-use policies could weaken incentives to produce new content, thereby shrinking the pool of data available for AI training. Moreover, the growing sophistication of AI models could exacerbate the training-data shortage by making creators overly reliant on AI for content generation.
What we really need, then, is a regulatory model that is both adaptable and tailored to specific contexts. The broad mandate of the EU’s AI Act, which applies to all firms regardless of industry sector, combined with the rapid pace of AI development and the competitive structure of the market, increases the likelihood of serious unintended consequences. The common-law system, which adjudicates disputes case by case, may therefore turn out to be a more appropriate institutional framework for regulating AI.
What is the text mainly centered on?
A. Dangers of over-reliance on generative AI.
B. The urgency to update copyright laws in the AI era.
C. Challenges of AI training data management.
D. Problems of the EU’s AI regulatory framework.
Answer: D