News Analysis

Are GenAI Copyright Protections Enough to Quell IP Concerns?

A growing number of big tech companies are offering indemnity for copyright infringements through generative AI, but some experts say that's still not enough.

Just a few weeks ago, the Authors Guild, on behalf of a group of prominent authors, filed a class action suit in the Southern District of New York against OpenAI, describing the company's use of their material in building the ChatGPT model as “a flagrant and harmful infringement of plaintiffs’ registered copyrights.”

The suit, published on the Guild's website, describes the ChatGPT program as a “massive commercial enterprise” that is reliant upon “systematic theft on a mass scale.”

This comes on the heels of an open letter signed earlier this year by more than 15,000 authors and addressed to the CEOs of a number of AI companies, including OpenAI, Alphabet, Meta, Stability AI, IBM and Microsoft, demanding that these companies “obtain consent from, credit and fairly compensate authors” before using their materials to train AI.

Authors Guild CEO Mary Rasenberger said in a statement that the problem with ChatGPT and other generative AI models is that they can only generate material that is derivative of what came before it.

"They copy sentence structure, voice, storytelling, and context from books and other ingested texts. The outputs are mere remixes without the addition of any human voice. Regurgitated culture is no replacement for human art.”

This is not the first copyright case to land in the courts, but this one is likely to generate a lot more publicity — and probably bad publicity for generative AI — given that the list of the authors behind the suit includes household names such as John Grisham and George R.R. Martin.

Growing Concerns

All of this doesn't come as a surprise to anyone following the technology space. In fact, a recent Acrolinx survey of Fortune 500 companies found IP concerns to be prominent for enterprise leaders considering generative AI technology.

Asked what their primary concern about the use of generative AI is, 25 of the 86 respondents selected intellectual property as their biggest concern. This was followed closely by customer security compliance concerns, which received 23 votes.

Comparatively, the other three answers (bias and inaccuracy, privacy risk, and public data availability and quality) received between nine and 16 votes each. Interestingly, the lowest-ranked concern was privacy risk.

A total of 86 participating companies may not sound like a statistically significant sample, but it represents 17% of the Fortune 500 — well above the 5% minimum often cited as a viable sample size. Plus, the veracity of these particular findings is corroborated by the recent rush of vendors guaranteeing the integrity and safety of their offerings.


A Show of Support for Responsible AI

The filing of the complaint by the Authors Guild came the same week that IBM announced it will indemnify companies against copyright or other similar IP claims stemming from use of its generative AI offering. In its statement, IBM said it is providing this indemnity because it "believes in the creation, deployment and utilization of AI models that advance business innovation responsibly."

Getty Images made a similar announcement last week. In a new partnership with Nvidia, the company is launching Generative AI by Getty Images, a new tool that lets people create images using Getty’s library of licensed photos. The company says the new offering will be trained only on the Getty Images library, including premium content, and guarantees content creators full copyright indemnification.

In a statement about the release, the company's chief product officer, Grant Farhall, said: "We’ve listened to customers about the swift growth of generative AI — and have heard both excitement and hesitation — and tried to be intentional around how we developed our own tool. We’ve created a service that allows brands and marketers to safely embrace AI and stretch their creative possibilities, while compensating creators for inclusion of their visuals in the underlying training sets.”

The new generative AI from Getty can be integrated into existing workflows and applications through an API. The intent is for organizations to soon be able to customize it with proprietary data to produce images.

Then there was Microsoft, which offered copyright protection for Copilot users at the beginning of September. Brad Smith, vice chair and president of the company, explained: “As customers ask whether they can use Microsoft’s Copilot services and the output they generate without worrying about copyright claims, we are providing a straightforward answer: yes, you can, and if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.”

As the Authors Guild suit makes its way through the courts, there will undoubtedly be more announcements like these, as generative AI developers try to convince organizations that it is safe to use the content their various offerings generate.


A Two-Tiered Response

These statements and protections are welcome news for generative AI users. But it remains unclear how far they will go, said Alec Foster, head of marketing and policy at AI Alignment.


Microsoft’s Copilot copyright offering adds a layer of legal certainty for users. By assuming responsibility for potential copyright infringement lawsuits, Microsoft is significantly lowering the risk for businesses and individual users to leverage AI-generated work. “This gesture could indeed make Copilot more attractive to a broad range of users, particularly those previously hesitant due to the murky waters of AI and copyright law,” Foster said.

But while this policy can protect businesses from legal repercussions of generating AI texts and images that may infringe on existing copyrights, it doesn't confer any copyright protections on the AI-generated content itself, Foster said. This means that while a business may be free to create such content without fearing lawsuits, they won't be able to protect these creations from being copied or used by others.

This reality could create a two-tiered response from the business community, Foster said.

On one hand, smaller businesses might find this arrangement extremely attractive. The ability to generate high-quality content without the threat of lawsuits can be a game-changer for startups and smaller enterprises that don't have the legal muscle to fight lengthy copyright cases.

On the other hand, larger businesses may find this lack of protection a significant downside. Corporations invest substantial resources in content creation, and the inability to secure copyrights on AI-generated material might deter them from utilizing such technologies for mission-critical or high-value projects.

“While Microsoft's new Copilot copyright commitment provides a robust safeguard against legal ramifications for AI-generated work, it opens up a set of considerations around the copyrightability of such output,” Foster said.

Businesses will need to weigh the pros and cons carefully, considering the nature of their operations, the value they place on proprietary content and their risk tolerance.


Indemnification Is Not Enough

Michael Mattioli, a professor of law at Indiana University, said while indemnification gives a semblance of security, it is not a real solution for several reasons:

1. Threat of Litigation

Indemnification, Mattioli said, doesn't prevent a lawsuit. Even if a company is eventually indemnified, the costs — both financial and reputational — associated with being sued can be daunting. The hassle of court proceedings could discourage companies from using AI-generated content in the first place.

2. Ambiguity in Modified Content

The indemnification offers lack clarity about how far they reach. For example, if a user writes prompts that make infringement highly likely, it's uncertain if Microsoft or IBM would still indemnify them. Mattioli said he's concerned that large companies will refuse to indemnify users who contribute to the infringement through targeted prompts.

3. Financial Caps

It's common for companies promising indemnity to place caps, or limits, on the amount covered. This is particularly concerning for businesses that might face multiple claims. Companies must assess risk to decide whether a cap sufficiently covers potential exposure.

4. Fair Use

Mattioli describes this point as "fair use chilled by overcaution inspired by indemnification." In simpler terms, the indemnification guarantees could discourage defendant companies from invoking fair use defenses, leading them instead to settle. This might leave the law ambiguous on the matter, stifling AI innovation over the long term.

“While indemnification addresses some financial risks, it raises new questions and doesn't fully quell legal or business concerns," he said.

About the Author

David Barry

David is a European-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management and enterprise content management to content services. Now, with the development of new remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communications and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and general AI.

Main image: anystock on Adobe Stock