Amazon is Taking Measures to Slow the Influx of AI Content

By L. V. Gaudet

In breaking (not so breaking) news, Amazon KDP has taken some measures to deal with the anticipated, and for many of us authors feared, surge in AI-generated book content from non-writers.

Quickly generated, poor-quality content is nothing new to KDP and other online self-publishing retailers, or to the publishers putting out the submission calls so many of us compete for.

Publishers of ‘zines, anthologies, books, and other forms increasingly state in their guidelines that submitting AI-generated or AI-assisted content will not only result in an automatic rejection but also risks being blacklisted from ever submitting to them again. Yet there is very little that can be done to curb AI-generated content that is ‘self-published’ by the content curator.

Since the start of the self-publishing boom, non-writers armed with the false belief that authoring is an easy get-rich-quick scheme, aka ‘side hustle’, have been mass-uploading illegitimate content onto self-publishing platforms and mass-submitting similar content to publishers for everything from flash pieces to full-length manuscripts.

KDP issued a statement on Monday, September 18, 2023, announcing that it is lowering the volume limits on new title creations. This won’t affect most authors and publishers, since most do not mass upload.

In practice, users will be restricted to a daily limit of three new titles.

An earlier statement, dated September 6, 2023, announced the addition of AI questions to the publishing process, requiring publishers to disclose AI-generated content, but not AI-assisted content. It remains to be seen what, if anything, KDP will do with content labeled as AI-generated.

The messaging boils down to this: KDP is watching to see how the proliferation and ever-evolving quality of AI-generated and AI-assisted content plays out. 

Maybe they are waiting for the ever-expanding class-action lawsuits against OpenAI over copyright infringement to work through the courts and provide legal boundaries. If OpenAI loses, could those lawsuits go further to include publishers and publishing platforms that did not take reasonable measures to block or remove AI-generated content? The first round of lawsuits will likely have to play out before the groundwork is laid for any further actions.

A few earlier infringement cases offer conflicting precedents on the fair-use argument.

AI-generated and AI-assisted works of all kinds are also caught in a battle over the right to copyright protection.

In August, a US federal judge upheld an earlier finding that a piece of AI-generated art is ineligible for copyright protection because it lacks human authorship. Another case drew the same line between human and non-human creators in a dispute over copyright in a photograph taken by a monkey. AI-generated and AI-assisted creations operate under different rules because of the lack of human involvement in creating the former.

This is also expected to affect the ongoing writers strike that, together with the actors strike, has left film productions at a standstill. If no one will touch an AI-assisted script with a ten-foot pencil because of the writers strike, and a studio cannot copyright an AI-generated script, my money is on the studios not wanting to invest in something they cannot protect.

AI and the writers strike are another precedent in the making for writers vs. non-writers. The big fear is that studios will eventually replace actors and writers with AI-generated content: AI-imaged actors performing AI-generated scripts.
