Whatever your position on AI, there’s no doubt that it dominated the conversation in 2025, bringing significant changes and challenges to the publishing industry.

While we are not here to tell you if and how you should use it (despite what AI companies say, the technology is still in its early stages), we want to address the topic and present some of our considerations as AI has become part of many authors’ lives.

AI for Authors: A Guide

Our Take

Important note: AI is evolving extremely rapidly, meaning that whatever we write here today may be obsolete in 6 months, but also that whatever you may have experienced or read 9 months ago may be up for reconsideration.

We strongly believe AI is a powerful tool, and, like any tool, it has strengths and limitations. It’s not magic, not the solution to every question, and not a replacement for human designers or editors. When used thoughtfully, it can improve your work, speed up processes, or help you get things done.

We have been testing different AI models and software over the last 18 months for a variety of tasks. As of today (and we don’t expect this to change anytime soon), we don’t use AI for editorial tasks (copyediting, proofreading, rewriting) or book design, especially when working on clients’ projects. We use AI internally to help us brainstorm, expand, or draft ideas, research topics, generate variations, and summarize content. Designers may use AI to enhance images that are too small to print, or to speed up retouching or image manipulation (think of this as advanced Photoshop). We don’t use AI to generate new images, except in some cases that fall into the brainstorming or research category. We also use AI to speed up video editing, transcriptions, and data analysis.

As you can sense, the use of AI spans across many tasks and departments, but, at least for us, it’s always linked to specific objectives and controlled outcomes.

A Decision Framework

AI is a very polarizing topic, with some people going all-in and others refusing to engage entirely. Whether you are on the fence or a daily user, we recently came across a useful framework for deciding how to use and trust AI. This framework isn’t absolute—many tasks will fall somewhere in between, and you’ll need to use your judgment—but it provides a helpful starting point for thinking about AI use.

LOW-STAKES VS. HIGH-STAKES TASKS

[Diagram made with Napkin AI]

AI is a powerful tool, but it’s not an answering machine and, more importantly, it’s not always right or trustworthy. So, how can you use it confidently?

Here’s our take: use it extensively for low-stakes tasks, use it consciously for high-stakes tasks.

Low-stakes tasks are useful but not critical to your work (or life): they are relatively simple, you have a good sense of the expected result, you need a fast and specific answer, and it’s okay if the output isn’t perfect (you can fix it or discard it). For these tasks, AI can improve your performance or help you achieve results faster. Low-stakes tasks are also great for experimenting, testing the tool’s limits, and getting basic help on subjects you’re not an expert in.

Some examples:

  • You want to update your author website, but you don’t know how to create a table. AI can easily and reliably generate it for you or provide detailed instructions on how to do it for your specific use case.
  • You interviewed people for your book and now need to transcribe all the recordings. AI can do it for you very fast and cheaply.
  • You are developing a character in your book and need help researching the background. AI can help you find details that fit your overall story.

High-stakes tasks are the important things that must be right. For these, we advise thinking of AI as a specialized tool that processes information based on patterns it has learned, rather than as an assistant who truly understands your work. AI can be incredibly useful for analyzing data, comparing information, and identifying patterns, but it doesn’t gain experience from your feedback, doesn’t understand context the way humans do, and can’t make nuanced judgments. Always verify its work before relying on it for anything important.

AI can analyze a large amount of data, quickly compare tables, review materials, or summarize long documents. However, make sure you can always reference the sources and check that the results are correct before using any AI output in your work.

Some examples:

  • You have lots of sources you used to build the scenes in a historical fiction book, and now you want to check that what you are writing in Chapter 8 is accurate. AI can help you find and reference the relevant passages in your body of research very quickly.
  • For your next book, you want to find experts to interview who have written about a specific topic. You can use AI to start researching them much faster than a Google search would. Once AI provides a list of people, you can vet them yourself.
  • You want to come up with a list of topics for your newsletter based on insights from your book. AI can help you brainstorm and suggest new angles based on some successful cases. You can review the list and adjust it, improve it, or dismiss it based on your knowledge.
  • You want to find comparable authors and titles. AI can build a list for you to review in just a few minutes.

No matter how you use these tools, we always recommend checking the privacy settings to avoid having your content and chat history used for training purposes.

When to Avoid AI

There are cases when you should avoid using AI (or at least, the type of AI we most commonly have access to).

When we talk about AI, we are referencing tools like ChatGPT, Google Gemini, or Claude. These are all Large Language Models (LLMs)—multi-purpose tools designed to process and generate text based on patterns they learned from vast amounts of training data. These tools are quite good for language tasks (as the name suggests), but they have significant limitations when it comes to images and specific design tasks.

AI tools process images differently than humans do. They often fail at measurements, have difficulty interpreting complex visual information, and struggle with basic rules of physics and spatial relationships. Also, because they are trained on billions of generic data points, they don’t always know what to prioritize and use as a reliable source. So, you can’t expect AI to correctly check if a book cover adheres to a printer’s specifications or provide correct information on how to format an interior layout. On this, you should trust your book designer.

You should also be very careful (and always disclose) when you use AI to generate images. Most of the time, these images are not suitable for print due to low resolution and potential copyright concerns, and they often can’t be adjusted or used as you intended. AI-generated images should always be treated as examples or sketches for a designer, not as final assets.

Similarly, AI can’t currently edit text at a deep level. It works for spell check, can help you find a better synonym, or suggest a punchy title, but it can’t replace a human editor. While AI can identify patterns and common errors, it doesn’t understand authorial voice, nuanced meaning, reader experience, or the subtle craft decisions that make writing effective.

Understanding Publishing Platform Policies

If you’re using AI in your writing process, it’s important to understand how major self-publishing platforms handle AI-generated content. The policies and requirements vary, and staying informed will help you avoid potential issues.

Amazon KDP (Kindle Direct Publishing)

As of our latest review, KDP requires authors to disclose when AI-generated content was used in the writing or creation of their book. This applies to text, images, and translations. Notably, Amazon distinguishes between AI-generated content (where AI creates the content) and AI-assisted content (where AI helps with editing, refining, or brainstorming). You are required to inform Amazon if your book contains AI-generated content, but not if you used AI-assisted tools like grammar checkers or idea generators.

Amazon also prohibits content that appears to be mass-produced or spam-like, regardless of whether it was created by AI or humans. Quality and originality remain the key standards.

IngramSpark

IngramSpark’s approach focuses on quality and copyright compliance. They expect that content uploaded to their platform is original and that authors hold the necessary rights. While they don’t have a specific AI disclosure requirement at this time, they do enforce strict quality standards and reserve the right to remove content that doesn’t meet their guidelines.

Important Considerations

  • Platform policies are evolving rapidly. Always check the current terms of service before publishing.
  • Copyright and originality matter. Regardless of how you create content, you are responsible for ensuring it is original and doesn’t infringe on others’ rights.
  • Reader expectations are important. Some readers are skeptical about AI use in books. Consider whether and how you want to communicate your use of AI tools to your audience.
  • When in doubt, disclose. Being transparent about your process builds trust with both platforms and readers.

We recommend checking platform guidelines regularly and keeping records of how you used AI in your creative process, just in case questions arise later.

AI and Audiobooks

AI has also impacted the way audiobooks are narrated. While we have evidence that human narration is still preferred by listeners, AI now allows for a cheaper and faster way to generate narrated versions of books and long-form content. With improvements in output quality and low entry costs, a growing number of authors have used AI narration.

For your main book, we strongly recommend professional voice actors for quality and listener preference. However, AI narration can work well for supplementary content like bonus chapters, blog posts in audio format, or marketing samples where budget or timing constraints make professional narration impractical.

Moving Forward

AI is a tool, not a silver bullet to solve all problems, nor a dangerous machine ready to kill us all. Like any other tool in the publishing process—from word processors to design software—its value depends on how thoughtfully and appropriately you use it. The key is understanding its capabilities and limitations, using it for tasks where it genuinely helps, and maintaining human judgment and creativity at the center of your work.

As this technology continues to evolve, so will best practices and guidelines. We’re committed to staying informed and helping our authors navigate these changes.

We’d love to hear from you: How are you thinking about AI in your writing process? Have you experimented with AI tools? What concerns or questions do you have? Share your thoughts with us via email—your experiences help shape these ongoing conversations in our community.