Ethical Use of AI for Writers: What You Need to Know

Artificial intelligence has quickly become part of the writing process, whether authors planned for it or not, and whether they like it or not. From brainstorming ideas to refining sentences, AI tools can be useful, efficient, and even inspiring. I used AI to flesh out the outline of my soon-to-be-published book. (Having published my first book in 2010, I can’t even tell you how much time AI saved me by not having to create the initial outline on my own.)

But AI raises an important question: How can writers use AI without compromising the integrity of their work?

I have some thoughts. 

What “Ethical AI Use” Really Means

Ethical use of AI in writing doesn’t mean avoiding it altogether. And I’m in plenty of writing and publishing groups where “AI” is pretty much a dirty word. But the thing is, it’s not going anywhere. I am choosing to learn about it, and to learn how to use it in a way that preserves authorship and originality.

Here’s what I think ethical AI use means:

  • The ideas, voice, and structure of the work remain the writer’s.

  • AI is used as a tool—not a substitute for thinking or creativity.

  • AI use does not mislead readers, publishers, or collaborators.

In other words, AI should support your writing; it should not replace it.

Where AI Can Be Helpful

Used thoughtfully, AI can streamline parts of the writing process that often slow authors down. For example, AI can help:

  • Generate ideas when you’re stuck at the beginning

  • Offer alternative phrasing for awkward or unclear sentences

  • Summarize research or organize notes

  • Identify inconsistencies or gaps in logic

  • Provide a starting point for outlines

I liken these to working with a brainstorming partner or early-stage assistant. The key is that you retain control over the final content.

Where AI Can Hurt Your Writing

The risks of AI aren’t always obvious. Often the output sounds polished yet lacks depth, originality, or authenticity.

Overreliance on AI can lead to:

  • Generic or repetitive language that weakens your voice

  • Shallow explanations that don’t reflect real expertise

  • Structural sameness across chapters or sections

  • Loss of nuance, particularly in complex or emotional material

  • A manuscript that feels technically correct but not compelling

One of the clearest signs of heavy AI use is writing that feels flat. It says the right things, but it doesn’t say them in a way that feels distinctly human. 

Maintaining Your Voice

One of the most important responsibilities you have as an author is preserving your voice. (As an editor, this was always one of my most important responsibilities and goals.) This is where AI use requires the most care.

Here’s how you can use AI writing tools and still maintain your voice:

  • Use AI for ideas or rough phrasing—but rewrite in your own words

  • Read your work aloud to ensure it sounds natural and consistent

  • Avoid copying and pasting large sections without revision

  • Be especially cautious with personal stories or experiential writing

Your voice is what connects you to readers. If that disappears—well, your work loses its value.

Why AI Detection Tools Are Not the Answer

As AI use has grown, so has interest in tools that claim to detect whether a piece of writing was generated by AI. At first glance, these tools seem like a simple solution. In practice, they are anything but reliable.

AI detection tools are notoriously inconsistent. The same passage of text can produce different results depending on the platform—or even when run through the same tool multiple times. This lack of consistency alone makes them difficult to trust in any meaningful way.

False positives are also a significant issue. Well-written, clearly structured content—especially from experienced writers—is often flagged as AI-generated simply because it is polished and grammatically clean. In other words, strong writing can work against the writer. I’ve run pieces I wrote a decade ago through these tools; the results are almost always more than 50 percent AI-generated. Apparently, running the Gettysburg Address through AI detectors usually results in high AI-generated scores. I mean—c’mon!

Additionally, there is no agreed-upon standard for how AI detection works. Most tools rely on proprietary algorithms that are not transparent, and there is no universal benchmark for accuracy. So the conclusions are really just guesses rather than definitive answers.

Perhaps most importantly, these tools cannot prove authorship. They may attempt to identify patterns, but they cannot determine who wrote a piece of text or how it was developed. In an environment where many writers use AI as part of a broader workflow, this limitation becomes even more significant.

In many cases, different tools will produce contradictory results: one flagging a passage as AI-generated while another classifies it as human-written. Minor edits to sentence structure or word choice can also dramatically change a detection score without meaningfully altering the content itself. I have read “tips” for escaping AI detection by adding errors to the text. <insert eyeroll here>

Ongoing Legal Questions Around AI Training Data

The conversation around ethical AI use is not happening in a vacuum. It is actively being shaped by ongoing legal challenges, many of which center on how AI models are trained.

Several lawsuits have been filed against AI companies, including a high-profile class action against Anthropic, alleging that copyrighted books were used to train AI systems without the authors’ or publishers’ permission. These cases raise important questions about consent, compensation, and the boundaries of fair use.

At the heart of these lawsuits is a fundamental issue: Should creative work be used to train AI models without the author’s knowledge or approval? Uh, no.

For authors, this is not an abstract concern. Many writers have discovered that their published work may have been included in large training datasets, often without clear disclosure. In some cases, authors involved in these lawsuits have seen their own titles referenced in the broader claims. My work is listed in one of these cases, which makes me wonder if that’s why my writing (even from the pre-AI days) is flagged as AI.

While these cases are still working their way through the courts, they highlight a rapidly evolving legal landscape. The outcomes could have significant implications for how AI tools are developed and how writers’ intellectual property is protected going forward.

Transparency and Professional Standards

In some cases—particularly in professional, academic, or collaborative publishing—transparency around AI use may be required. Even when it’s not explicitly required, it’s worth asking:

  • Would I feel comfortable explaining how I used AI in this work?

  • Does this accurately represent my thinking and expertise?

If the answer is no, then you should reconsider how you are using AI.

A Practical Guideline

Here’s a simple way to think about AI use:

  • If the AI is doing the thinking, it’s too much.

  • If the AI is supporting your thinking, it’s appropriate.

This guideline helps keep the balance where it belongs—with the author.

Final Thoughts

AI is not going away. To my fellow writers, I say this: Don’t reject it out of hand. But be sure to use it in a way that strengthens your work rather than diminishes it. Used well, AI can make you more efficient. Used poorly, it can make your writing forgettable.

The difference is intention and awareness.

If you’re unsure whether your manuscript reflects your voice, or you’d like professional feedback before moving forward, I offer manuscript evaluations and sample edits to help you assess where you stand. Contact me whenever you’re ready!
