Draft — not yet published.

The Authoring Protocol

How to use AI for writing without losing track of what you actually think.

April 21, 2026

I write with AI assistance. Claude helps me draft, restructure, tighten prose, find better phrasings. I’ve published two essays this way and I’m writing a third right now. This isn’t a confession. It’s a fact about how I work, and I suspect it’s a fact about how most people who write seriously in 2026 work, whether they say so or not.

The problem isn’t using AI. The problem is what happens when you use it without constraints.

You start with a rough paragraph. You ask the model to clean it up. The result is better than what you wrote, so you accept it. Then another paragraph. Then a section. Somewhere around the third acceptance, a subtle shift occurs. The text still sounds like you, roughly, but the ideas have drifted. The model suggested a framing you hadn’t considered. You kept it because it was elegant. It suggested a connection between two concepts. You kept it because it was plausible. It added a qualifier that softened your claim. You kept it because it sounded measured.

None of these individual moves is wrong. Each one is a reasonable editorial choice. But accumulated over a full essay, they produce a text whose intellectual trajectory was shaped more by the model’s weighted average of training data than by your actual thinking. The arguments are coherent. The prose is clean. And you can no longer point at any specific sentence and say with confidence whether the idea behind it was yours or the model’s.

I call this epistemic displacement. Not because the model is lying to you, but because fluency masks origin. A generated paragraph that sounds right feels like it IS right, feels like it’s what you meant to say. The gap between “this is well-written” and “this is what I think” closes silently. By the time you notice, you’re editing a document that represents the model’s understanding of your topic more than your own.

This is the writing-specific version of a broader problem. In prediction markets, I’ve watched traders accept model outputs as beliefs without checking the derivation. In software architecture, I’ve watched developers accept generated code without understanding the design decisions embedded in it. The failure mode is always the same: fluency substitutes for understanding, and the human loses track of what they actually know.

What most people do

The standard response is a disclaimer. “I used ChatGPT to help write this.” Honest, admirable even, in a landscape where many people say nothing. But a disclaimer is not a protocol. It doesn’t tell you what was AI-generated and what was human-originated. It doesn’t prevent the drift I described above. It doesn’t distinguish between an author who used AI to fix comma placement and an author who accepted wholesale paragraphs of generated argumentation.

The disclaimer treats AI assistance as binary: used or not used. In practice, assistance exists on a spectrum. Spell-checking is assistance. So is “rewrite this section to be more persuasive.” The first changes nothing about authorship. The second might change everything. A useful protocol needs to distinguish between these cases, not collapse them into a single disclosure line.

Some writers go further and publish their prompts, which is more transparent but still insufficient. The prompts show what you asked for. They don’t show what you accepted without questioning, what the model added that you didn’t catch, or which arguments originated from you and which were suggested during the drafting process.

What’s missing is a formal structure. Something that governs the relationship between author and model throughout the writing process, not just at the disclosure stage.

Five layers

I wrote a protocol for this. Originally for the StateCraft canon, a body of formal writing about reasoning under uncertainty, where the stakes of epistemic displacement are especially high. But the protocol applies to any serious writing. It defines five ordered layers, each with a different relationship to AI assistance.

The first layer is observation. Raw author material. Notes, intuitions, distinctions, tensions, analogies. The stuff you scribble in a notebook or type into a scratch file at 2am. This layer is exclusively human. Not because AI can’t generate observations, but because observations that don’t originate from the author aren’t the author’s work. If the model suggests a distinction you hadn’t noticed, that’s the model’s observation, not yours. You can adopt it, but only by moving it through the subsequent layers with full awareness that it didn’t start with you.

The second layer is clarification. Here AI is genuinely useful. You have a rough idea. The model helps you extract what you mean, compare it against related concepts, disambiguate terms, find structural gaps. This is the Socratic function: not generating ideas, but pressure-testing them. When I was writing the Primitive-Driven Design (PDD) essay, I used Claude extensively at this layer. “Does this distinction between simple and easy hold up if I extend it to this case?” “What’s the strongest objection to this claim?” The model didn’t originate the claims. It helped me stress-test them.

The third layer is ratification. The author explicitly approves, rejects, or revises each claim. This is the sovereignty layer. Ratification means more than “this sounds right.” It means the author owns the claim, can defend it, and would stand behind it in conversation. Article V of the protocol puts it directly: “To ratify a sentence because it ‘sounds right’ is insufficient. The question is not whether the prose is elegant, but whether the proposition is truly owned, intended, and defensible by the author.”

The fourth layer is rendering. This is where AI does its best work. Taking ratified ideas and expressing them well. Organizing sections, finding rhythm, tightening sentences, cutting redundancy. The PDD essay’s final prose was rendered with Claude’s help. Every argument in it went through layers one through three first. By the time rendering begins, the intellectual work is done. What remains is craft, and craft is exactly what language models excel at.

The fifth layer is integration. Placing the rendered text into context, connecting it to prior work, preserving dependencies and open questions. For the StateCraft canon, this means linking new texts to existing doctrine. For standalone essays, it means ensuring the piece doesn’t contradict or silently revise positions taken in earlier work.

The ordering matters. No later layer may silently replace an earlier one. Rendering cannot introduce new claims. Integration cannot revise ratified positions. The layers flow in one direction.
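The one-directional constraint is mechanical enough to sketch. The following is my own minimal illustration, not part of the protocol itself: the five layers as an ordered sequence through which a claim may only advance, never retreat.

```python
from enum import IntEnum

class Layer(IntEnum):
    """The five ordered layers. A claim may only move forward."""
    OBSERVATION = 1    # raw author material; exclusively human
    CLARIFICATION = 2  # AI-assisted pressure-testing of rough ideas
    RATIFICATION = 3   # explicit author approval of each claim
    RENDERING = 4      # AI-assisted expression of ratified ideas
    INTEGRATION = 5    # placement into the larger body of work

def advance(current: Layer, target: Layer) -> Layer:
    """Allow only forward movement; later layers never replace earlier ones."""
    if target <= current:
        raise ValueError(f"cannot move from {current.name} back to {target.name}")
    return target

# A claim flows observation -> clarification -> ratification.
layer = Layer.OBSERVATION
layer = advance(layer, Layer.CLARIFICATION)
layer = advance(layer, Layer.RATIFICATION)
```

The point of the `IntEnum` ordering is exactly the rule above: rendering sits after ratification, so it can express claims but never originate or revise them.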

The twelve articles, distilled

The full protocol has twelve articles. Rather than reproducing all of them, I’ll group them by what they protect.

The first group protects origin and sovereignty. Article I states that the origin of doctrine is human. No generated formulation gets treated as origin just because it’s fluent. Article IV defines what AI may and may not do: it may extract, arrange, contrast, reformulate, oppose, render. It may not determine doctrine, settle ambiguity on its own authority, or convert uncertainty into closure. Article V requires explicit ratification of each proposition, not just each sentence.

The second group governs process and status. Article II establishes the five layers. Article III requires every proposition to occupy an explicit status: observation, candidate claim, ratified claim, derived claim, open question, or rhetorical rendering. Status confusion is prohibited. You cannot let a candidate claim silently become canonical just because it survived a few editing passes. Article IX says texts must grow compositionally, each one defining, diagnosing, prescribing, formalizing, or applying, but never attempting total doctrine without being explicitly marked as synthesis.

The third group preserves integrity and traceability. Article VI requires provenance: every text must remain recoverable in relation to its originating observations, candidate claims, ratified claims, prior dependencies, and unresolved questions. Article X demands replayability: the path from raw observation to rendered text must remain inspectable. Article XI governs revision: changes must be additive, explicit, and historically legible. No silent drift through accumulated edits.
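Provenance and replayability amount to keeping a record per text. As a hypothetical illustration, with field names of my own invention inferred from Article VI's list, such a record might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Provenance:
    """Illustrative traceability record for one text (field names are mine,
    modeled loosely on Article VI's enumeration)."""
    originating_observations: list[str]
    candidate_claims: list[str]
    ratified_claims: list[str]
    prior_dependencies: list[str] = field(default_factory=list)
    unresolved_questions: list[str] = field(default_factory=list)

    def replayable(self) -> bool:
        """Article X in spirit: the path from raw observation to
        rendered text must remain inspectable."""
        return bool(self.originating_observations) and bool(self.ratified_claims)

record = Provenance(
    originating_observations=["fluency masks origin"],
    candidate_claims=["disclaimers treat assistance as binary"],
    ratified_claims=["ratification must be explicit, per proposition"],
    unresolved_questions=["where exactly does clarification end?"],
)
```

Whether the record lives in a data structure, a document header, or a notebook margin matters less than that it exists and stays legible as the text is revised.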

The fourth group keeps style honest. Article VII subordinates style to substance. No prose may sound stronger than the thought it bears, conceal an unresolved leap, counterfeit assent through eloquence, or suppress uncertainty for smoothness. Article VIII requires open questions to remain visible until resolved. If a concept is unstable, it must be marked unstable. False closure is a defect. Article XII frames the protocol’s purpose: not to purify authorship from all assistance, but to preserve it from displacement.

These articles are deliberately formal. They read like legislation because the commitments need to be precise enough to be violated. A vague guideline like “be honest about AI use” cannot be violated because it never says what honesty requires. These articles do.

In practice

Day to day, this looks less formal than the articles suggest.

I write rough notes. Sometimes in full sentences, sometimes in fragments. I discuss them with Claude, usually through several rounds where I push back, reject framings, ask for counterarguments. I approve or reject each formulation. When something the model says is better than what I had, I adopt it consciously, knowing the idea entered through clarification rather than observation. When something sounds good but doesn’t match what I actually think, I reject it, even if the rejection makes the prose worse.

The PDD essay went through four drafts, three panel reviews with Claude acting as different critics, two rounds of feedback from colleagues, and multiple editorial passes. The final text is cleaner than anything I would produce unassisted. But every argument in it originated from my experience and my thinking: the payment terminal story, the billing primitives, the distinction between simple and easy, the claim about measurable architectural cost. Claude helped me say it better. It didn’t tell me what to say.

The concrete rule I follow: I never accept a paragraph I couldn’t have written myself, even if I didn’t. This isn’t about capability. It’s about comprehension. If I read a generated paragraph and understand every claim well enough to defend it in conversation, it passes. If I read it and think “huh, that’s a good point, I hadn’t thought of that,” it fails. Not because good points are bad, but because a point I hadn’t thought of is, by definition, not my authorship. I can investigate it, adopt it through ratification, make it mine. But I can’t just keep it because it’s clever.

Both published essays carry a shortened version of this protocol as an authoring note at the bottom. The note is intentionally brief: “All arguments, examples, judgments, and claims are the author’s. AI was used for rendering, not for originating the thesis or authoring claims.” That’s the disclosure. The protocol is the discipline behind it.

What this is actually about

This protocol isn’t about purity. I’m not arguing that touching a language model contaminates your work. I use one constantly and I’ll keep using one. The tools are too good to ignore, and pretending otherwise is performative.

What the protocol protects is something more basic: knowing what you think. If you write an essay and can’t tell which ideas are yours and which the model suggested, you’ve lost something more important than a byline. You’ve lost contact with your own reasoning. The text might be excellent. The arguments might be sound. But they’re not yours in any meaningful sense, and the next time someone asks you to defend them, you’ll be reconstructing the model’s logic rather than expressing your own.

The authoring protocol is a commitment to staying in that loop. Origin stays human. Assistance stays bounded. Ratification stays explicit. Everything else is rendering.


Authoring note. Drafting assistance by Claude. All arguments, examples, judgments, and claims are the author’s. AI was used for rendering (organizing, rephrasing, tightening), not for originating the thesis or authoring claims.