StateCraft: A State System for Uncertainty
A formal model of state reasoning under uncertainty
StateCraft is a formal model of state reasoning under uncertainty.
It is not, in the first instance, an implementation, a simulation shell, or a rhetorical framework. It is a model. Its purpose is to define the object classes, status distinctions, governing invariants, and constitutional boundaries required for any system that must preserve and reason about the state of a domain under incomplete knowledge.
The core claim: uncertainty does not justify the collapse of epistemic layers. It requires their explicit organization. A system gains its authority under uncertainty not by dissolving observations, facts, beliefs, derived state, hypothetical explorations, interpretations, and explanations into one synthetic surface, but by preserving their distinct statuses within one state-bearing order.
StateCraft therefore begins from a refusal and a positive construction.
The refusal is directed against flat semantic reasoning, in which all contents are treated as equivalent representational material awaiting synthesis into outputs. The positive construction is a layered state model in which object kinds remain distinct, authority over claims remains explicit, admission into canonical truth requires visible verification, derivation is deterministic, counterfactuals are sandboxed, interpretation declares itself, explanation is structural before it is narrative, and action may not outrun justification.
The model is implementation-independent. It may be enacted manually, computationally, or through any number of software architectures. What makes an implementation StateCraft-compatible is not the interface, database, or language it uses, but whether it preserves the model defined here.
1. Scope of the model
StateCraft applies wherever a system must preserve a domain state under incomplete knowledge and yet still support reasoning, comparison, intervention, and action. The model is meant for domains carrying burdens like incomplete observation, uncertain claims, temporally evolving state, scenario branching, and action under non-trivial epistemic risk. It is designed for domains in which canonical state, uncertainty, interpretation, and hypothetical variation must coexist without being collapsed into one another.
The model does not attempt to replace all forms of thought. It is specifically a model of state reasoning: how things stand, on what grounds, what remains uncertain, what may be projected, what alternative futures are reachable, and what actions are licit given the present state of the order.
2. Epistemic layer architecture
The model defines eight epistemic layers, ordered by increasing derivational distance from raw observation. Each layer boundary is enforced architecturally, not by convention. Information flows strictly upward through these layers. At no point does a higher layer contaminate a lower one.
Layer 0: Observations. Raw records from sources, not yet canonical. Observations are the ingestion boundary. They carry provenance but no epistemic authority. An observation records that something was seen, not that it is true.
Layer 1: Canonical Facts. Immutable, append-only, sourced, and vocabulary-controlled through explicit predicates. Facts are the bedrock of the system. Every fact must reference a valid predicate, and its value must conform to the predicate’s declared type and constraints. Corrections are new facts, not mutations. The fact ledger is the audit trail. Between observation and fact stands a visible admission pipeline: claim candidates are extracted from observations, verified against domain-scoped policies, checked for evidence, and admitted only through explicit passage.
Layer 2: Beliefs. Formal uncertainty representations with explicit distribution types, parameters, and justification. Beliefs are attached to entity-variable pairs where the relevant variable is not directly or fully observable. Every belief must be linked to supporting facts or observations. Beliefs are versioned: updates create new versions, not mutations.
Layer 3: Projections. Deterministic derivation from facts and beliefs. Given the same facts and beliefs, the same projection must result. This is the model’s strongest computational commitment: no hidden state, no side effects, no non-determinism in the derivation pipeline. Projections reference the specific facts and beliefs from which they were derived. A snapshot freezes the aggregate of all projections at a single moment as an immutable baseline.
The derivational layer may itself contain declared deterministic rules for computing intermediate state. These rules are not a new primitive. They are part of Projection’s lawful machinery: explicit formulas, mappings, bounded transforms, and relation traversals by which a domain defines how admitted facts and formal beliefs become projected state. Where such rules exist, they must remain declarative, inspectable, and traceable. A projection may therefore depend not only on primitive facts and beliefs, but on visibly declared derivation rules that compute further state from them without mutating canonical truth.
Layer 4: Regimes. Qualitative classifications derived from applying configurable lenses to projected state. Regimes are always derived outputs, never directly asserted. Every regime must reference the lens that produced it, the projection it was derived from, and a derivation trace. A regime without declared provenance is an unexamined assumption, not a classification.
Layer 5: Scenarios. Sandboxed counterfactual exploration that never contaminates canonical state. Scenarios operate on immutable snapshots. Hypothetical facts are injected into isolated sandboxes. The same deterministic projection logic used for canonical state is reapplied to hypothetical inputs, ensuring comparability between baseline and counterfactual. Transitions define how entity state may change; trajectories trace the paths through state space during simulation.
Layer 6: Explanations. Structural derivation traces linking outputs to source facts and beliefs. Explanation is authoritative before it is narrative. The structural trace is machine-traversable. Natural language narrative may be generated from the trace, potentially with AI assistance, but the narrative is always secondary to the structure.
Layer 7: AI Assistance. AI serves two bounded functions: candidate generation at the ingestion boundary and narrative generation at the explanation boundary. In neither role does AI replace structural authority. AI may render truth. It may not author it.
The architectural significance of this layering is that each boundary enforces a change in epistemic status. Crossing a boundary requires visible process: observation to fact requires verification, fact and belief to projection requires deterministic derivation, projection to regime requires declared lens, canonical to counterfactual requires sandboxing. These are not optional refinements. They are the model’s answer to epistemic collapse.
3. Primitive object classes
StateCraft contains nineteen primitive object classes. These are first-class conceptual objects of the model, not implementation classes. Each exists because the model cannot preserve epistemic separation without it.
The primitives are presented in the order of the admission and derivation chain: from raw ingestion through canonical truth, uncertainty, derivation, classification, exploration, explanation, action, and bounded assistance.
3.1 Observation
An Observation is a raw or normalized record from a source. It is not yet a fact. Observations are the ingestion boundary of the model, the point at which the external world enters the system’s awareness. An observation carries provenance (source, retrieval time, format, observer type) but no epistemic authority. It records that something was seen, not that it is true.
Without observation as a distinct primitive, there is no legible boundary between what the system has received and what it has admitted. Systems that skip this distinction treat all incoming content as already warranting trust. This is the first form of epistemic collapse: the erasure of the gap between reception and admission.
3.2 Claim Candidate
A Claim Candidate is a structured assertion extracted from one or more observations, pending verification. It may be machine-generated, human-authored, or parser-derived. It is not canonical truth until verified and admitted.
Claim candidates carry a status (pending, admitted, rejected, or suspended) that makes visible the system’s current judgment about each assertion’s readiness for canonical inclusion. The passage from observation to fact is not instantaneous. It involves extraction, structuring, and evaluation. A system that skips this passage hides the labor of admission inside an opaque intake process.
3.3 Verification Policy
A Verification Policy declares what checks are required before a class of claim candidates may become canonical facts. Different domains may require different policies. A geopolitical domain may require multi-source corroboration. A financial domain may require institutional attestation. A monitoring domain may require automated threshold checks.
Without explicit policies, the threshold between non-canonical and canonical content becomes implicit, inconsistent, and ungovernable. Admission standards are domain-scoped judgments. They must remain inspectable.
3.4 Fact Check
A Fact Check is an executed verification act over a claim candidate, producing a verdict and an evidence trail. Fact checks may be manual, source-adapter-based, cross-source, or trusted-system based. The verdict may be confirmed, rejected, inconclusive, or partially confirmed.
The act of verification is itself part of the epistemic record. A system that merely admits or rejects without preserving the verification act loses the ability to revisit, contest, or audit its own admission decisions.
Together, Observation, Claim Candidate, Verification Policy, and Fact Check form the admission pipeline: the visible, reconstructible passage by which raw content earns canonical status. This pipeline enforces Axiom II at the most fundamental layer of the system. Nothing may enter authority without a legible status.
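The admission pipeline described above can be made concrete with a minimal sketch. All class and function names here are illustrative, not prescribed by the model; the only commitments the sketch preserves are that observations carry provenance without authority, and that the only passage from candidate to fact runs through a visible verification act.

```python
# Minimal sketch of the admission pipeline:
# Observation -> ClaimCandidate -> FactCheck -> Fact. Names are illustrative.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass(frozen=True)
class Observation:
    source: str              # provenance, not authority
    content: str
    retrieved_at: str

@dataclass
class ClaimCandidate:
    subject: str
    predicate: str
    value: str
    observation: Observation
    status: str = "pending"  # pending | admitted | rejected | suspended

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    value: str
    source: str

def verify(candidate: ClaimCandidate,
           policy: Callable[[ClaimCandidate], bool]) -> str:
    """Execute a fact check under a domain-scoped policy; return a verdict."""
    return "confirmed" if policy(candidate) else "rejected"

def admit(candidate: ClaimCandidate,
          policy: Callable[[ClaimCandidate], bool]) -> Optional[Fact]:
    """Admission is the only passage from candidate to canonical fact."""
    if verify(candidate, policy) == "confirmed":
        candidate.status = "admitted"
        return Fact(candidate.subject, candidate.predicate,
                    candidate.value, candidate.observation.source)
    candidate.status = "rejected"
    return None

# Usage: a trivial policy requiring a trusted source.
obs = Observation("agency-a", "bridge reported destroyed", "2024-01-01T00:00Z")
cand = ClaimCandidate("bridge-7", "operational_status", "destroyed", obs)
fact = admit(cand, policy=lambda c: c.observation.source == "agency-a")
```

The policy here is a stand-in for a full Verification Policy object; the point is only that the verdict and the status change remain visible steps, not an opaque intake.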
3.5 Entity
An Entity is a durable referent within the model: anything that may possess attributes, participate in relations, appear in events, or be affected by transitions. Entities persist across time. State does not live on the entity as canonical truth; it is derived from facts and beliefs about the entity.
An entity may be a person, institution, territory, process, military asset, market instrument, or any other domain object that must remain identifiable across time and reasoning steps. Entities may hold aliases, may participate in typed relations with other entities, and may carry an epistemic status of their own.
Entities must remain distinguishable from claims about entities. An entity is not its current condition. This is the ontological commitment from which much later discipline follows.
3.6 Predicate
A Predicate is a vocabulary-controlling object that defines what may be validly asserted about which entity types. Each predicate has a name, a value type (string, numeric, enum, boolean), optional allowed values for enums, and an entity type scope. Predicates are append-only: they may be deprecated but not deleted.
Without vocabulary control, the fact layer becomes an unstructured heap. Any arbitrary string as a fact predicate is a category error, not an uncertain fact. Predicates enforce Axiom I at the data layer: they ensure that every fact references a valid predicate and that fact values conform to declared type and constraints. Without predicates, the model cannot prevent ontological flattening at the very point where observation becomes canonical truth.
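A sketch of how predicate-level vocabulary control might be enforced at the data layer follows. The field names and type vocabulary are assumptions for illustration; what matters is that conformance is checked against a declared predicate, not left implicit.

```python
# Illustrative sketch of predicate-controlled vocabulary (names assumed).
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Predicate:
    name: str
    value_type: str                          # "string" | "numeric" | "enum" | "boolean"
    allowed_values: Optional[Tuple] = None   # only meaningful for enums
    entity_type: str = "*"

def conforms(predicate: Predicate, value) -> bool:
    """Check that a fact value conforms to its predicate's declared type."""
    if predicate.value_type == "numeric":
        return isinstance(value, (int, float)) and not isinstance(value, bool)
    if predicate.value_type == "boolean":
        return isinstance(value, bool)
    if predicate.value_type == "enum":
        return value in (predicate.allowed_values or ())
    return isinstance(value, str)

readiness = Predicate("readiness_level", "enum",
                      allowed_values=("low", "degraded", "high"))
```

A fact asserting `readiness_level = "critical"` would be refused here: not as an uncertain fact, but as a category error, exactly as the text requires.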
3.7 Fact
A Fact is a canonical recorded truth about an entity, expressed through an allowed predicate and accompanied by provenance and confidence. Facts are append-only: they may be inserted but never updated or deleted. Corrections are new facts, not mutations. The fact ledger is the audit trail.
Every fact must reference a valid predicate. Every fact must carry a source — the provenance record linking it to its origin. Fact conflicts, where two facts assert contradictory things about the same entity, are tracked explicitly in separate conflict records, not silently overwritten. Post-admission adjustments to a fact’s effective confidence are recorded as separate append-only objects that accumulate without mutating the original fact.
Canonical truth must have a floor. Without immutable, sourced, predicate-controlled facts, there is no stable basis from which projection, classification, or scenario comparison can proceed. The append-only invariant is what makes historical integrity possible.
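The append-only invariant can be sketched as a ledger in which corrections supersede rather than overwrite. The class and method names are illustrative; the commitment preserved is that the full history survives every correction.

```python
# Sketch of an append-only fact ledger with supersession (illustrative).
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    entity: str
    predicate: str
    value: object
    source: str
    seq: int                 # position in the ledger; never reused

class FactLedger:
    """Facts may be appended, never updated or deleted."""
    def __init__(self):
        self._facts = []

    def append(self, entity, predicate, value, source) -> Fact:
        fact = Fact(entity, predicate, value, source, seq=len(self._facts))
        self._facts.append(fact)
        return fact

    def current(self, entity, predicate):
        """Latest fact wins; earlier facts remain in the audit trail."""
        hits = self.history(entity, predicate)
        return hits[-1] if hits else None

    def history(self, entity, predicate):
        return [f for f in self._facts
                if f.entity == entity and f.predicate == predicate]

ledger = FactLedger()
ledger.append("bridge-7", "status", "intact", "survey-1")
ledger.append("bridge-7", "status", "destroyed", "survey-2")  # correction, not mutation
```

The second append corrects the first without destroying it: `current` reflects the latest admission, while `history` remains the audit trail.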
3.8 Belief
A Belief is a formal uncertainty representation attached to an entity-variable pair where the relevant variable is not directly or fully observable. A belief carries an explicit distribution type (normal, beta, categorical, or other), parameters for that distribution, and justification linking it to supporting facts or observations.
Beliefs are versioned. Updating a belief creates a new version, not a mutation. This preserves the history of the system’s uncertainty over time.
Much of what matters in state reasoning is not directly observable. Intent, cohesion, reserves, hidden damage, latent exposure, threshold behavior: these variables are consequential but epistemically different from observed facts. A system that cannot distinguish between what it knows and what it believes it knows will treat conjecture as evidence. Belief is not error. It is disciplined uncertainty carried without silently upgrading it into canonical truth.
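Belief versioning can be sketched as follows. Distribution names, parameter shapes, and the store interface are assumptions for illustration; the preserved commitment is that an update creates a new version and the prior version survives.

```python
# Sketch of versioned beliefs with explicit distributions (names assumed).
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Belief:
    entity: str
    variable: str
    distribution: str        # e.g. "beta", "normal", "categorical"
    parameters: Tuple
    justification: str       # link to supporting facts or observations
    version: int

class BeliefStore:
    """Updates create new versions; no belief is ever mutated."""
    def __init__(self):
        self._versions = {}  # (entity, variable) -> list of Belief versions

    def update(self, entity, variable, distribution, parameters, justification):
        history = self._versions.setdefault((entity, variable), [])
        belief = Belief(entity, variable, distribution, parameters,
                        justification, version=len(history) + 1)
        history.append(belief)
        return belief

    def latest(self, entity, variable):
        history = self._versions.get((entity, variable), [])
        return history[-1] if history else None

    def history(self, entity, variable):
        return list(self._versions.get((entity, variable), []))

store = BeliefStore()
store.update("unit-3", "cohesion", "beta", (2.0, 5.0), "fact:desertion-report")
store.update("unit-3", "cohesion", "beta", (2.0, 8.0), "fact:second-report")
```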
3.9 Projection
A Projection is a derived state representation for an entity or for the world, computed from facts and beliefs through deterministic logic. Projection is the model’s term for what common speech calls “state” or “condition,” but the model insists on the derived name because it carries an essential commitment: state is never stored as authority. It is always computed.
Given the same facts and beliefs, the same projection must result. No hidden state, no side effects, no non-determinism in the derivation pipeline. Projections reference the specific facts and beliefs from which they were derived. A projection without a traceable derivation is merely an assertion dressed as computation.
Projection may include declarative derivation rules that compute intermediate predicates from already admitted facts, explicit beliefs, and declared graph relations. Such rules belong to Layer 3 rather than forming a separate primitive, because they do not create a new object kind with independent epistemic status. They specify how projection lawfully proceeds. In a pack-based implementation, this means that domain logic may declare structured derivation expressions as ontology content while the core engine remains domain-agnostic. The rule is declarative content. The projected output remains derived state. Neither becomes canonical fact merely by being computed.
3.10 Snapshot
A Snapshot is an immutable baseline artifact representing the aggregate of all entity projections at a specified moment under an explicit evidentiary cutoff. Later evidence creates later snapshots, not silent mutation of prior snapshots.
Counterfactual reasoning demands a stable reference point. Without an immutable baseline, scenario comparison becomes impossible: one cannot ask what changed relative to something that was itself changing. Snapshots are the anchor against which hypothetical explorations are measured.
3.11 Lens
A Lens is a declared interpretive frame applied to domain content, state, or scenario outputs. A lens may classify, rank, filter, or otherwise organize model contents according to explicit, configurable criteria stored as structured rules.
Applying a lens does not alter the ontological status of the underlying objects. A lens is a perspective, not a transformation of truth. Multiple lenses may be applied to the same state, producing different but non-contradictory classifications. What a military lens classifies as critical, an economic lens may classify as tolerable. Neither overrules the other. Both are declared interpretive acts.
3.12 Regime
A Regime is the qualitative classification produced by applying a lens to projected state. Regimes are always derived outputs, never directly asserted. A regime must reference the lens that produced it, the projection data it evaluated, and the specific rules that fired to produce the classification.
Classification is one of the most consequential and most easily concealed forms of interpretation. To say that a situation is “stable,” “escalatory,” “critical,” or “resilient” is not a neutral observation. It is a judgment made through a declared evaluative frame. When that frame is hidden, the classification impersonates objective description. The regime primitive forces the frame into visibility.
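Forcing the frame into visibility can be sketched as follows: the regime output carries the lens name, the rules that fired, and the projected inputs it evaluated. The lens structure and rule format are assumptions for illustration.

```python
# Sketch of a lens producing a regime with a derivation trace (illustrative).

def classify(projection: dict, lens: dict) -> dict:
    """Apply a declared lens to projected state; record which rule fired."""
    label, fired = lens["default"], []
    for rule in lens["rules"]:               # first matching rule wins
        if rule["when"](projection):
            label, fired = rule["label"], [rule["name"]]
            break
    return {
        "regime": label,
        "lens": lens["name"],                # provenance: which frame judged
        "fired_rules": fired,
        "inputs": dict(projection),          # what was evaluated
    }

logistics_lens = {
    "name": "logistics",
    "default": "stable",
    "rules": [
        {"name": "critical-supply",
         "when": lambda p: p.get("days_of_supply", 99) < 3,
         "label": "critical"},
        {"name": "strained-supply",
         "when": lambda p: p.get("days_of_supply", 99) < 7,
         "label": "strained"},
    ],
}

regime = classify({"days_of_supply": 5.0}, logistics_lens)
```

A second lens, say an economic one, could classify the same projection differently without contradiction: each regime names the frame that produced it.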
3.13 Scenario
A Scenario is a bounded hypothetical configuration applied to a snapshot. It introduces altered assumptions, hypothetical facts, or interventions into an isolated sandbox. The same projection logic used for canonical state is reapplied to hypothetical inputs, so that baseline and counterfactual remain comparable.
Scenarios are the model’s answer to the question “what if?” without contaminating “what is.” Scenario results exist only within the sandbox. They are persisted only as summary records. They may never be silently reimported into the canonical layer. The contamination of baseline by scenario is one of the recurring corruptions of modern reasoning. The model prohibits it structurally.
3.14 Transition
A Transition is a precondition-effect pair defining how an entity’s state may change within a simulation. A transition carries a probability, an entity type scope, one or more preconditions evaluated against current state, and a set of effects applied when the transition fires. Transitions are stored in a catalog and may be discovered automatically from predicate allowed values.
State change in a domain is not arbitrary. It is governed by rules, constraints, and probabilities that must remain inspectable. A simulation that can change anything without declared transition rules is not exploring reachable futures. It is inventing convenient ones.
3.15 Trajectory
A Trajectory is a sequence of transition steps through state space during simulation, ending in a final state with a cumulative probability score. Trajectories are the individual paths produced by Monte Carlo or similar exploration strategies when a scenario is simulated.
The distinction between a single simulation outcome and the set of reachable outcomes is consequential. A trajectory is one path, not a prediction. The ensemble of trajectories across many runs produces the distribution of reachable futures. This distinction must be preserved to prevent any single trajectory from being mistaken for a forecast.
3.16 Explanation
An Explanation is a structural derivation trace linking outputs to source facts and beliefs. It is not a persuasive narrative. It is a structured path through the model’s objects, admissions, transformations, and assumptions that produced the result.
Narrative has its place. Without it, many truths remain socially unusable. But where narrative substitutes for structure, persuasion begins to replace accountability. Every projection, classification, and recommendation must therefore possess a traceable derivation. Natural language narrative may be generated from the trace, and AI may assist in that rendering, but the narrative is always secondary to the structure it renders.
3.17 Graph
A Graph is a navigational representation built from entities, relations, and causal links. It provides a means of traversing the model’s contents through their structural connections rather than through flat enumeration.
The graph is a read-only projection consumer. It does not serve as a source of truth. Its purpose is to make the web of relations, dependencies, and influences inspectable. A system that stores thousands of entities and relations but offers no way to navigate among them structurally fails the corrigibility commitment even if its data is sound.
3.18 Recommendation
A Recommendation is a bounded, action-oriented output with justification strength and proportionality constraints. It makes explicit whether some action is licensed, constrained, delayed, or forbidden, and on what grounds.
The passage from reasoning to action is morally consequential. A system that permits weak grounds to justify strong intervention is not bold. It is illegitimate. Recommendations carry a justification ceiling: weak justification permits only monitoring, moderate justification permits engagement, strong justification permits escalation with explicit risk acknowledgment. Action must not outrun justification.
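The justification ceiling can be sketched as an explicit binding between evidentiary strength and the strongest action it licenses. The tier names follow the text; the numeric thresholds and function names are assumptions for illustration.

```python
# Sketch of the justification ceiling binding action to evidentiary strength.
# Tier names follow the text; the numeric thresholds are assumptions.

ACTION_TIER = {"monitor": 1, "engage": 2, "escalate": 3}

def justification_tier(strength: float) -> int:
    """Map justification strength in [0, 1] to a permitted action ceiling."""
    if strength < 0.4:
        return ACTION_TIER["monitor"]        # weak: monitoring only
    if strength < 0.75:
        return ACTION_TIER["engage"]         # moderate: engagement
    return ACTION_TIER["escalate"]           # strong: escalation, risk declared

def recommend(action: str, strength: float) -> dict:
    """Refuse any action that exceeds what the grounds can bear."""
    ceiling = justification_tier(strength)
    return {"action": action,
            "licensed": ACTION_TIER[action] <= ceiling,
            "ceiling": ceiling,
            "strength": strength}

weak_escalation = recommend("escalate", 0.3)     # must be refused
strong_escalation = recommend("escalate", 0.9)
```

The refusal itself is an output: a recommendation records not only what is licensed but the ceiling that constrained it, so the proportionality judgment remains inspectable.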
3.19 AI Candidate and AI Narrative
An AI Candidate is machine-generated content at the ingestion boundary: proposed observations, suggested entity matches, draft claim candidates. An AI Narrative is machine-generated content at the explanation boundary: natural language renderings of structural derivation traces. Both carry mandatory provenance metadata: model identifier, prompt signature, validation status.
The boundary between human authority and machine assistance must remain visible. AI outputs are always non-canonical until validated through the admission pipeline. They may not bypass verification. They may not silently enter the fact layer. They may not define canonical truth through fluency alone. An AI Candidate that passes verification becomes a fact not because AI generated it, but because the admission pipeline admitted it. The distinction is jurisdictional. AI may render truth. It may not author it.
4. Status classes
Every object handled by StateCraft must possess an explicit status. The purpose of status is to prevent object types from becoming epistemically flat.
The minimum status grammar of the model is as follows.
Observed. The object or claim is held on the basis of direct observation or direct source acquisition.
Admitted. The object or claim has crossed the relevant threshold for canonical inclusion through the admission pipeline.
Inferred. The object or claim is not directly observed but has been derived from other model contents through explicit reasoning.
Believed. The object or claim is carried as uncertain but relevant, with explicit distribution parameters.
Disputed. The object or claim is contested, unresolved, or under active challenge. Disputes are tracked as explicit conflict records.
Projected. The object or claim concerns expected or possible conditions derived from some model process.
Hypothetical. The object or claim belongs to a scenario or branch rather than to the canonical layer.
Interpretive. The object or claim is produced through a declared lens, evaluative scheme, or frame rather than through direct admission into canonical state.
A compatible implementation may refine or extend this grammar, but it may not collapse these statuses into one undifferentiated class of output.
5. Core relations among object classes
The model requires that the primitive classes relate to one another in constrained ways. These constraints are not suggestions. They are the structural skeleton of epistemic separation.
Observations produce Claim Candidates through extraction.
Claim Candidates are evaluated against Verification Policies through Fact Checks.
Admitted Claim Candidates produce Facts.
Entities may participate in typed Relations with other entities.
Facts must reference valid Predicates and concern Entities, their attributes, or conditions.
Beliefs may concern the same kinds of objects as Facts, but do not thereby share the same epistemic authority.
Projections are derived from Facts and Beliefs through deterministic logic.
Snapshots aggregate Projections at a moment and serve as immutable baselines.
Lenses are applied to Projections and Scenario results to produce Regimes.
Scenarios begin from Snapshots plus explicit hypothetical modifications.
Transitions define admissible state changes within Scenario simulations.
Trajectories record the paths produced by simulation runs.
Explanations reference all relevant object classes but preserve their distinct statuses.
Graphs are built from Entities, Relations, and causal links as navigational projections.
Recommendations depend on Projections, Beliefs, Regimes, and Explanations, but must not exceed the justificatory burden carried by those upstream objects.
AI Candidates and AI Narratives enter at the ingestion and explanation boundaries respectively, and must pass through the model’s admission processes before acquiring any canonical status.
6. Core operations
StateCraft defines admissible operations that govern how objects enter, transform, and leave the model’s holdings.
Observe. To record a raw event, signal, report, or measurement from a source into the model as an observation with provenance.
Extract. To derive a structured claim candidate from one or more observations. Extraction may be manual, rule-based, or AI-assisted.
Verify. To execute a fact check against a claim candidate under the applicable verification policy, producing a verdict and evidence trail.
Admit. To move a verified claim into the canonical layer as a fact or other admitted object. Admission must be visible and reconstructible.
Register. To introduce an entity into the model as a tracked, identifiable referent.
Attribute. To assign a condition, property, or relation to an entity under explicit grounds and through valid predicates.
Derive. To produce a projection from upstream facts and beliefs through deterministic transformation. Derivation is central because the model does not permit raw asserted state to substitute for visible grounds.
Where a domain requires intermediate computed variables, the Derive operation may apply explicitly declared derivation rules over entity-local state and declared relations. These rules are lawful only if they remain visible, deterministic, and subordinate to admitted inputs. They may fill projected state. They may not silently rewrite the canonical layer.
Snapshot. To freeze the current aggregate of projections as an immutable baseline at a specified moment.
Classify. To apply a lens to projected state or scenario results, producing a regime classification under declared criteria.
Simulate. To run one or more transitions or scenario conditions across a bounded hypothetical branch, producing trajectories.
Explain. To produce a structural trace of how an object, projection, classification, or recommendation arose.
Compare. To place two or more states, scenarios, or classifications in explicit relation without collapsing them into one another.
Recommend. To produce an action-oriented output with justification strength proportional to the evidentiary grounds supporting it.
Revise. To alter the model’s holdings through visible status change, new admissions, new facts, re-derivations, or dispute resolution. Revision does not mean silent overwrite. Traceability must survive.
7. Invariants
The following sixteen invariants are constitutive. They correspond to and operationalize the twelve axioms established earlier in the canon. A system that violates them may still be useful, but it is no longer faithfully StateCraft.
7.1 Canonical truth is append-only. Facts may be inserted but never updated or deleted. Corrections are new facts that supersede old ones. The ledger is the audit trail. (Axioms I, IV)
7.2 Identity persists; state does not live on identity. Entities endure across time. Their condition is derived from facts and beliefs, not stored on them as authority. An entity is not its current state. (Axiom I)
7.3 Uncertainty must be explicit. Where uncertainty exists, it must remain visible as a first-class belief with distribution type, parameters, and justification. Uncertainty may not be silently absorbed into the smooth surface of an answer. (Axiom III)
7.4 Observations are not facts. Raw records from sources carry provenance but no epistemic authority. They must pass through the admission pipeline before acquiring canonical status. (Axioms I, II)
7.5 Facts require explicit verification and admission. No claim may become canonical without passing through the visible pipeline of claim candidacy, verification policy, and fact check. The labor of admission must be preserved as part of the epistemic record. (Axiom II)
7.6 Assertions must conform to declared vocabulary. Every fact must reference a valid predicate. Its value must conform to the predicate’s type and constraints. Arbitrary strings as predicates are category errors. (Axiom I)
7.7 State is derived, not asserted. Projections are computed deterministically from facts and beliefs. Same inputs, same outputs. Condition may not be declared by authority; it must be earned through derivation. (Axiom V)
7.8 Snapshots are immutable artifacts. A snapshot, once taken, may not be mutated. Later evidence produces later snapshots. Without stable baselines, counterfactual comparison becomes impossible. (Axioms IV, VIII)
7.9 Hypotheses are sandboxed. Scenario simulations operate on immutable snapshots. Hypothetical facts exist only within the sandbox. They may never contaminate canonical state. (Axiom IV)
7.10 Simulated futures are exploratory, not canonical. Trajectories produced by simulation are reachable paths, not predictions. No single trajectory may be mistaken for a forecast or reimported into the canonical layer as though it had already occurred. (Axioms IV, V)
7.11 Interpretation is explicit and separable from truth. Lenses must declare themselves. Regime classifications must reference the lens, projection, and rules that produced them. Interpretive outputs may not masquerade as ontologically primitive facts. (Axiom VI)
7.12 Structural explanation is authoritative; narrative is secondary. Every projection and classification must possess a traceable derivation. Natural language narrative may accompany the trace but may not substitute for it. Where rhetoric replaces structure, accountability decays into persuasion. (Axiom VII)
7.13 AI may assist but not author canonical truth. Generative systems may help search, summarize, propose, and narrate. They may not collapse the truth path into synthetic closure. All AI outputs carry provenance metadata and must pass through the model’s admission processes before acquiring any canonical status. (Axiom X)
7.14 Action outputs must be proportional to evidentiary strength. No recommendation may authorize stronger action than its justificatory grounds can bear. Weak justification permits monitoring. Moderate justification permits engagement. Strong justification permits escalation with explicit risk acknowledgment. (Axiom IX)
7.15 Corrigibility is superior to seamlessness. The system must prefer inspectability over integration smoothness. Every conclusion must be navigable to its basis, challengeable with counter-evidence, overridable by human judgment, and rough rather than opaque. A smooth but uninspectable path is weaker, not stronger. (Axiom XI)
7.16 Architecture must preserve these boundaries. These invariants are not aspirations. They are governing constraints. A system that enforces them at the specification level but allows them to be circumvented by implementation shortcuts has not preserved them. The architecture must be the discipline, not merely describe it. (Axiom XII)
8. Constitutional boundaries
The model enforces seven constitutional boundary checks that may be validated automatically against any domain configuration. These checks verify that a domain instantiation respects the model’s epistemic commitments. They are the operational form of the question: does this domain still respect the canon?
8.1 Predicate type safety. Every fact must reference a valid predicate with conforming value types and constraints.
8.2 Append-only truth. Facts may be inserted but never updated or deleted.
8.3 Derivation integrity. Projections must be reproducible from their declared inputs. Same facts and beliefs, same projection.
8.4 Sandbox isolation. Counterfactual facts must not persist to the canonical store.
8.5 Explanation completeness. Every projection must have a traceable derivation.
8.6 AI boundary. AI-generated content must carry provenance metadata and may not bypass verification.
8.7 Proportionality. Recommendations must bind action strength to evidentiary justification.
These checks may be implemented as a validation function applied to any domain configuration before installation, ensuring that domain specializations cannot violate the model’s core invariants.
9. Epistemic posture and time
StateCraft requires that state reasoning remain explicit about temporal posture.
The model distinguishes at minimum among the following:
Present canonical posture — what is currently admitted and derived from the latest facts and beliefs.
Historical posture — what was or could have been held at a prior time reference, given the facts available at that moment.
Retrospective posture — what is now said about prior conditions, with the benefit of later evidence.
Scenario posture — what is explored under hypothetical branch conditions applied to a snapshot.
A compatible implementation may refine these, but it may not flatten them into one timeless surface. To confuse contemporaneous judgment with hindsight is to confuse what a system failed to know with what it was never in a position to know.
10. Conformance conditions
An implementation, manual method, or protocol may be called StateCraft-compatible only if it preserves the following:
The nineteen primitive object distinctions of the model.
An explicit status grammar that keeps canonical, uncertain, hypothetical, and interpretive objects distinguishable.
A visible admission pipeline from observation through verification to canonical fact.
Deterministic projection from facts and beliefs, with traceable derivation.
Immutable snapshots as baselines for counterfactual comparison.
Declared lenses producing explicitly attributed regime classifications.
Sandboxed scenario simulation that never contaminates canonical state.
Structural explanation traces for all derived outputs.
The invariants that protect canonical state, uncertainty visibility, admission integrity, and explanation traceability.
The constitutional boundaries that validate domain configurations against the model’s epistemic commitments.
A visible boundary preventing generated or rhetorically rendered outputs from becoming sovereign truth objects by mere fluency.
StateCraft compatibility therefore does not require one specific software design. It requires preservation of the model’s ontology, status grammar, operational grammar, invariants, and constitutional boundaries.
11. Model taxonomy
The model distinguishes between several categories of specification that must not be confused:
A primitive is a first-class conceptual object of the model: entity, predicate, observation, claim candidate, verification policy, fact check, fact, belief, projection, snapshot, scenario, transition, trajectory, lens, regime, explanation, graph, recommendation, or AI candidate/narrative.
An invariant is a rule the model refuses to violate. The sixteen invariants are constitutive constraints, not preferences.
A constitutional boundary is a validatable check that verifies domain conformance against the model’s invariants.
A protocol is a lifecycle contract governing how primitives interact over time, such as the passage from observation through admission to canonical fact, or from snapshot through scenario to counterfactual comparison.
An execution strategy is a named implementation pattern for realizing a protocol, such as Monte Carlo simulation, sensitivity analysis, gap analysis, or Bayesian auto-update.
An extension is a domain-specific augmentation of a primitive, such as driving forces, cascade effects, or fuzzy lens evaluation.
This taxonomy prevents category confusion between the model’s objects, rules, boundaries, lifecycle contracts, and implementation patterns.
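One way to make the taxonomy operative rather than merely descriptive is to type it, so that a specification item must declare which category it belongs to. The enumeration below is an assumed encoding for illustration; the category names come from the taxonomy above, the representation does not.

```python
from enum import Enum

class SpecKind(Enum):
    """The model's specification categories, kept formally distinct."""
    PRIMITIVE = "primitive"               # e.g. Fact, Belief, Projection, Lens
    INVARIANT = "invariant"               # the sixteen constitutive rules
    CONSTITUTIONAL_BOUNDARY = "boundary"  # validatable conformance checks
    PROTOCOL = "protocol"                 # lifecycle contracts over primitives
    EXECUTION_STRATEGY = "strategy"       # e.g. Monte Carlo, sensitivity analysis
    EXTENSION = "extension"               # domain-specific augmentations
```

Under such an encoding, a domain pack cannot, for example, register an execution strategy as an invariant; the category confusion the taxonomy forbids becomes a type error rather than a drafting mistake.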
12. Protocol horizon
Because the model is implementation-independent, it naturally points toward a protocol horizon.
A later protocol may standardize serialization, interoperability, exchange rules, or conformance testing for StateCraft-compatible systems. But that protocol is downstream of the model, not prior to it. The model defines what a protocol would have to preserve.
Likewise, a later implementation may instantiate the model through services, stores, engines, or interfaces. A domain pack specification may enable portable ontologies: entities, predicates, facts, beliefs, relations, transitions, lenses, and scenarios packaged as installable units validated against the constitutional boundaries before installation. But these too are downstream. They do not author the model. They realize it.
13. Formal definition
StateCraft may therefore be defined formally as follows:
StateCraft is a formal model of state reasoning under uncertainty in which domain observations must pass through a visible admission pipeline before earning canonical status; in which canonical facts are immutable, append-only, and vocabulary-controlled; in which beliefs carry explicit distributions and justification; in which state is derived deterministically from facts and beliefs, never asserted; in which counterfactual exploration is sandboxed against immutable baselines; in which interpretation declares itself through configurable lenses producing attributed classifications; in which explanation is structural before it is narrative; in which action must not outrun justification; and in which generative intelligence may assist at the boundaries but may not become sovereign over truth.
14. Closing clarification
StateCraft is called a state system for uncertainty because its center is neither language generation nor pure static fact storage. It is concerned with how things stand, under what grounds, through what verified admission, across what transitions, toward what justified action.
The craft is in the discipline. Distinct kinds must not be collapsed. Admission must remain visible and verified. Uncertainty must remain explicit. Derivation must remain deterministic. Snapshots must remain immutable. Scenarios must remain sandboxed. Classification must remain declared. Explanation must remain structural. Action must remain proportional. And intelligence, however powerful, must remain bounded in its jurisdiction over canonical truth.
Maintaining that order is the work.
Everything else — protocol, implementation, engine, domain pack, interface — belongs downstream of this definition.
Revision history
v1 (2026-03-27): Initial formal model. Eleven primitive object classes (Entity, Relation, Event, Fact, Belief, State, Transition, Scenario, Lens, Explanation, Action Condition), eight status classes, eleven core operations, ten invariants.
v2 (2026-03-31): Major revision absorbing Paper v3’s evolved model. Nineteen primitive object classes. Epistemic layer architecture (eight layers). Sixteen invariants mapped to the twelve axioms. Seven constitutional boundary checks. Model taxonomy distinguishing primitives from invariants, protocols, strategies, and extensions. Admission pipeline (Observation → Claim Candidate → Verification Policy → Fact Check → Fact) as central structural commitment. Key changes from v1: Event subsumed by Observation (events are captured as facts about bounded occurrences); Relation subsumed into Entity connections; State renamed to Projection (emphasizing derivation over assertion); Action Condition renamed to Recommendation (with explicit proportionality). New primitives: Predicate, Observation, Claim Candidate, Verification Policy, Fact Check, Projection, Snapshot, Trajectory, Regime, Graph, AI Candidate/AI Narrative. New sections: Epistemic Layer Architecture, Constitutional Boundaries, Model Taxonomy.
This text was produced under the Canon Authoring Protocol. See 00-authoring-protocol.md, Author’s Declaration.