AI Will Profit From Artists, But New ‘Learnright’ Laws Could Help

June 26, 2023, 8:00 AM UTC

Since the 1700s, US copyright law has protected the rights of content creators—such as authors and artists—to profit from their work. When content creators have incentives to produce valuable content, society benefits. But recent generative AI systems—such as ChatGPT and DALL-E 2—are now raising questions about whether these and other laws provide enough protections for the rights of content creators.

The key problem arises from the difference between copying and learning. Copyright law generally requires people to get permission before copying someone else’s expressions of ideas, facts, or similar information. But copyright law doesn’t protect the underlying ideas or facts.

And it does not prevent humans from learning from copyrighted material and then producing their own new content, as long as the new material isn’t substantially similar to the original material.

This, too, benefits society. Without it, how would writers and artists ever learn their craft by studying the work of previous masters in their field?

But humans are no longer the only entities capable of learning from previous examples and then generating new content of their own. Generative AI systems can do that at vastly greater speed, scale, and cost efficiency than humans can.

For example, image generation systems—like DALL-E 2 and Stable Diffusion—have learned from massive numbers of images on the web. If you prompt them now with a description of an image you want, they can often produce strikingly good images of whatever you described.

The ability of these new technologies to produce vast amounts of creative content very quickly and cheaply has the potential to provide great value for society. But one question it raises is whether an appropriate share of this value will be provided to the original creators of the content used to train these systems.

For instance, artists with a distinctive style may find it harder to sell their new work to people who appreciate that style if the style can be easily replicated automatically. And news publishers whose content can now be paraphrased by generative AI systems without violating copyright laws may lose significant advertising revenue from readers who no longer need to click through to publishers’ websites.

So, what should we do about this? One promising possibility is to introduce a new body of law based on the premise that different legal protections are appropriate when these massive AI systems can process vast amounts of information far faster and less expensively than humans can.

By analogy to copyright law, we could call this new body of law “learnright” law. Just as copyright law controls the rights to copy content, learnright law would control the rights to let automated systems learn from the material.

If we have such a law, one key question would be how the creators of content could invoke learnright law protection. The simplest way to do this (and also to protect material that was previously available) would be to say that all copyrighted material would automatically have learnright protection.

The law would specify the details of exactly what kinds of AI systems and learning would be covered. Then, to legally learn from this material, the operators of generative AI systems would need to license the right to do this from owners of the original material.

An alternative approach would be to require creators of original content to explicitly invoke their learnright protections, such as by posting learnright notices (similar to today’s copyright notices). This would benefit owners of generative AI systems by making much more content available for learning without charge, but it would place a non-trivial burden on content creators to explicitly protect their work.

Regardless of how learnright law is specifically defined, this approach could let society benefit from generative AI’s power to produce new content, while still providing sufficient incentives for humans to keep creating it, too.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Thomas Malone is the Patrick J. McGovern Professor of Management at the MIT Sloan School of Management and founding director of the MIT Center for Collective Intelligence.

