Oct. 30 Webinar Examines How AI Lawsuits Could Affect Copyrighted Content

Arielle Emmett

Ongoing lawsuits in the U.S. and elsewhere are testing, and could eventually determine, the legality of using artificial intelligence (AI) in writing and content creation.

An Oct. 30 webinar sponsored by ASJA will feature a panel of experts who’ll discuss the lawsuits and AI’s impact on copyright law and protections for writers. The webinar takes place 1:30-3 p.m. Eastern time. The event is free for ASJA members and $20 for nonmembers. Register here.

Panelists include:

  • Regan Smith, senior vice president and general counsel at the News/Media Alliance, and a recognized expert in intellectual property law and policy.
  • Umair Kazi, director of policy and advocacy at the Authors Guild, a professional organization of over 15,000 writers.
  • Maggie Harrison Dupré, an award-winning journalist and senior staff writer for Futurism, where she covers AI and its intersections with media, information and the internet.

The panel is co-moderated by two ASJA members: Richard Eisenberg, a freelance writer and editor and co-host of the Friends Talk Money podcast, who was formerly managing editor of Next Avenue and editor of the site’s Money & Policy and Work & Purpose channels; and Arielle Emmett, Ph.D., a writer, visual journalist, and traveling scholar specializing in East Asia, science writing, and human interest.

Lawsuits Could Redefine Common Content Creator Protections

The Oct. 30 webinar is the second event ASJA has hosted on the impact of AI on copyright law and protections for writers.

A major issue is whether U.S. courts will ultimately redefine such core concepts as fair use, licensing fees and copyright infringement to protect journalists, authors and content providers from the AI-driven hoovering up of original content.

On court dockets are several pending cases filed against OpenAI, the inventor of ChatGPT, and Microsoft, a leading OpenAI investor. In September 2023, the Authors Guild filed a class action suit on behalf of 17 best-selling authors, among them John Grisham, Jodi Picoult, Jonathan Franzen, Elin Hilderbrand, and George R.R. Martin.

The Authors Guild suit claims that OpenAI is pirating the authors’ books to train its large language model (LLM) algorithms to imitate authorial style or produce derivative works — all without consent, compensation or attribution. Describing OpenAI’s actions as “systematic theft on a mass scale,” the Guild is seeking permanent injunctions and damages for lost licensing opportunities.

Countering the suit, OpenAI argues that its absorption of billions of e-books, copyrighted articles, and other content is used in training to stimulate creative output, not to generate derivative works. Further, the company argues its training practices constitute fair use under copyright law, although questions of infringement and responsibility for AI outputs are far from settled.

Complex Legal Landscape

ASJA’s first AI webinar, held in January, explored the fundamental workings of the AI “thinking” machine, its implications for journalism and content creation, and how the media has responded. Job losses, fake news, AI-generated articles and reviews passed off as human-written, and the appropriation and “scraping” of copyrighted images are all part of an embattled legal landscape.

In January 2023, Getty Images took legal action in a U.K. High Court against Stability AI, the creator of a generative AI visual engine known as Stable Diffusion. In an argument strikingly reminiscent of the Authors Guild’s copyright infringement case against OpenAI, Getty accused Stability AI of unlawfully pirating millions of its copyrighted images to train Stable Diffusion’s generative visual model. The case is ongoing, as is a similar California case brought against Stability AI and other AI companies by a group of artists.

These cases raise multiple questions that panelists at the Oct. 30 webinar will discuss, including whether copyright law applies only to human-created works or to AI-authored creations as well. The panelists will also discuss who holds the right to license such a work: the AI system’s developer, a prompt engineer, the originating human source, or the company.

Precedent vs. Reluctance

To date, the U.S. government has trodden lightly on intellectual property challenges to tech companies asserting fair use in their AI training models. One exception is the recent Federal Trade Commission (FTC) crackdown on fake and AI-generated consumer reviews and testimonials.

Under a rule prohibiting the sale or purchase of fake reviews, the FTC can now seek civil penalties against knowing violators. “Fake reviews not only waste people’s time and money, but also pollute the marketplace and divert business away from honest competitors,” said FTC Chair Lina M. Khan in an FTC press release. While the ruling is an encouraging sign for consumer protection, it does not address why copyright protections for authors and artists in the AI era have failed to coalesce.

“Congress’ longtime resistance to meaningfully regulate Silicon Valley and pass comprehensive privacy laws has led to an exacerbation of glaring regulatory holes in the age of AI,” Harrison Dupré said in an email interview. “Our country has not addressed many of the legal and economic issues in the AI era, much less the issues that earlier digital technologies have raised.”

***

ASJA member Arielle Emmett, Ph.D., is a writer, visual journalist, and traveling scholar specializing in East Asia, science writing, and human interest.