The Coherence Framework
The AI system with humanity's values at the core.
Agencies work with multiple brand clients, and a single generic AI system can't serve brands that each have unique products and services. I believe we need to incorporate values and beliefs to properly convey a company's brand and to imbue AI systems with humanity.
Most companies are building AI the same way: business-oriented prompts layered over structured data. And that's the best case; few companies are even there. By the time there are twenty prompts across three teams, no one can explain why the system makes the choices it does.
That's not a tooling problem. It's an architecture problem. The prompts are fine. What's missing is the layer above them: the thing that gives the whole system something to inherit from.
The Coherence Framework is the architecture I ended up writing: a four-layer framework for how businesses should design, govern, and optimize AI. Not more rules, but a shared foundation with human-defined values and beliefs at the core.
Most AI systems are built prompt-by-prompt. No shared values. No governing principles. No mechanism for judgment.
Drift
Each prompt author picks their own beliefs. Over time, the system’s behavior fragments.
Rigidity
Without a value system, every edge case requires a new rule instead of better judgment.
Opacity
No one can explain why the system made a particular choice, because there’s no framework to trace it back to.
Drift is the slow loss of coherence that happens when every prompt author embeds their own judgment, their own priorities, their own interpretation of what the business values.
Without a shared foundation, the system fragments. The outputs start feeling like they came from different companies. Then the fixes start: more rules, more guardrails, more review. None of which address the actual problem, which is that the system has no center of gravity.
Four layers. Each inherits from the one above and informs the one below. Top to bottom:
A layered architecture for AI governance.
Operating System
Constitutional governance. Holds the Values — unchanging principles that every system inherits. Tracks change over time.
Systems (Agents)
Orchestration. Selects the right data, prompts, and skills for the need.
Prompts & Skills
Instructions with context, intent, restriction, and Belief awareness. Beliefs are working principles that evolve with experience, always rooted in Values.
Data
Pure information. No opinion, no direction. The atoms.
Each layer inherits from above, informs below.
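To make the inheritance concrete, here's a minimal sketch in Python. Every name in it is hypothetical; this is one way the chain could be wired, not a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass
class OperatingSystem:
    """Top layer: constitutional governance. Holds the Values."""
    values: list[str]  # unchanging principles every system inherits

@dataclass
class Prompt:
    """Instructions with context, intent, restriction, and Belief awareness."""
    instructions: str
    beliefs: list[str]  # working principles, always rooted in Values

@dataclass
class System:
    """Orchestration layer: selects the right data, prompts, and skills."""
    os: OperatingSystem      # inherits Values from the layer above
    prompts: list[Prompt]
    data_sources: list[str]  # bottom layer: pure information, no opinion

    def assemble(self, need: str) -> str:
        """Compose a request that carries the inheritance chain downward."""
        prompt = self.prompts[0]  # selection logic elided for the sketch
        return "\n".join([
            "Values: " + ", ".join(self.os.values),   # from the OS layer
            "Beliefs: " + ", ".join(prompt.beliefs),  # from the prompt layer
            "Task: " + need,
            "Data: " + ", ".join(self.data_sources),  # the atoms
        ])
```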
Most AI frameworks stop at rules and guardrails. This one doesn't. The center of gravity is the relationship between Values and Beliefs, and it's the part I'd point at first.
Values are unchanging. They live at the Operating System level and are constitutional: honesty, clarity, transparency of thought. They don't shift with context, audience, or experience. Every system, agent, prompt, and skill operates under them.
Beliefs are current working principles. They live at the Prompts & Skills level and are malleable, informed by experience, refined over time. A belief like “exploration should precede optimization” or “the audience's experience is often the better source” gives a prompt a basis for judgment the instructions alone can't cover. Beliefs are always rooted in Values but evolve as the system learns.
The relationship is directional: Values produce Beliefs. Beliefs produce voice and behavior. Understanding the Values behind the Beliefs lets a system reason through situations the Beliefs don't explicitly cover. That's the whole point of articulating them.
A rules engine tells a system what to do. A Value and Belief system lets it reason about what to do when the situation isn't covered. That difference is where the framework does the work no other framework I've seen does.
The decision-making DNA of the system.
Values
Unchanging. Constitutional.
Beliefs
Malleable. Informed by experience.
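The directional link can be sketched in a few lines, with hypothetical names: each Belief carries a pointer to the Value it's rooted in, so when no Belief covers a situation, the system can fall back to the Values themselves instead of demanding a new rule.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Value:
    name: str         # constitutional, unchanging

@dataclass
class Belief:
    statement: str    # current working principle, malleable
    rooted_in: Value  # the directional link: Values produce Beliefs

def reasoning_basis(beliefs: list[Belief], situation: str) -> str:
    """Return the principle a system should reason from for a situation."""
    for belief in beliefs:
        if situation in belief.statement:  # toy matching, real logic elided
            return belief.statement
    # No Belief covers this case: surface the Values the Beliefs roll up to.
    values = sorted({b.rooted_in.name for b in beliefs})
    return "Reason from Values: " + ", ".join(values)

honesty = Value("honesty")
working = [Belief("exploration should precede optimization", honesty)]
print(reasoning_basis(working, "exploration"))      # covered by a Belief
print(reasoning_basis(working, "novel edge case"))  # falls back to the Value
```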
I use storytelling to frame the AI framework in relation to business goals. The Values & Beliefs layer maps to a three-beat narrative structure, and it may be the right way to teach the framework to teams who aren't thinking architecturally yet.
It reframes the system as a human story. Not an abstract diagram, but a story about how good AI systems get designed with humanity at their core.
Look at Pixar's Toy Story:
Not to spoil things, but it was ultimately Woody's value of ensuring Andy's happiness that kept him centered. His ability to alter his beliefs while staying connected to his values is what made him a better character in the end.
Look at your favorite novels, your most re-watched documentaries. The main characters aren't driven purely by action; they're driven by value-based decision-making that ultimately makes them unique and engaging.
This is the core of The Coherence Framework.
Beliefs need governance or they drift. The registry is the mechanism.
Core Beliefs apply universally across all systems. They're derived directly from Values and are non-negotiable.
Domain Beliefs are scoped to a specific system or practice area. They represent how that domain fulfills Core Beliefs. A design system might believe every decision should be expressed as a quantifiable spec. A marketing system might believe exploration should precede optimization. Both serve the same Core Belief through different operational philosophies.
Every Domain Belief traces back through a Core Belief to a Value. If you can't trace the line, the Belief is an orphan and needs examination.
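A sketch of what the trace check could look like. The names and the "transparency" Core Belief are invented for illustration; the point is the orphan test at the end.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CoreBelief:
    statement: str
    value: str                      # derived directly from a Value

@dataclass
class DomainBelief:
    statement: str
    scope: str                      # e.g. "design", "marketing"
    fulfills: Optional[CoreBelief]  # the line back through a Core Belief

def orphans(registry: list[DomainBelief]) -> list[DomainBelief]:
    """Domain Beliefs with no traceable line to a Value need examination."""
    return [b for b in registry if b.fulfills is None]

core = CoreBelief("decisions should be explainable", value="transparency")
registry = [
    DomainBelief("every decision is a quantifiable spec", "design", core),
    DomainBelief("exploration should precede optimization", "marketing", core),
    DomainBelief("always close the upsell", "sales", None),  # orphan
]
for b in orphans(registry):
    print(f"needs examination: {b.statement!r} ({b.scope})")
```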
Reconciliation happens on a cadence: quarterly, after a major project, whenever something feels off. It's the human-in-the-loop moment. Not approving every action. Periodically checking alignment.
Each system has a human-set tolerance band controlling its operating latitude. A design system might run tight (2/10): stay close to specs, don't improvise. A marketing system might run loose (7/10): explore widely, converge before committing.
Tighter bands mean fewer tokens, lower cost, more deterministic output. Looser bands mean bigger context windows, more reasoning, higher cost. The tolerance band is a governance lever with a direct line to economics.
It's also how the human keeps a hand on the dial without having to approve every decision.
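As a sketch, the band could map directly onto generation settings. The numbers below are invented; the point is that one human-set dial moves determinism, token budget, and cost together.

```python
def band_to_settings(tolerance: int) -> dict:
    """Map a 1-10 tolerance band to generation settings and cost posture."""
    t = max(1, min(10, tolerance)) / 10
    return {
        "temperature": round(0.2 + 0.8 * t, 2),  # looser band, more exploration
        "max_tokens": int(500 + 3500 * t),       # looser band, bigger budget
        "deterministic": t <= 0.3,               # tight systems stay predictable
    }

print(band_to_settings(2))  # design system: tight, cheap, close to spec
print(band_to_settings(7))  # marketing system: loose, exploratory, costlier
```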
Belief provenance is static: a Domain Belief traces back through a Core Belief to a Value. That tells you the lineage of a principle. It doesn't tell you why the system did what it did on a Tuesday afternoon for a specific customer.
Decision provenance is the runtime version. For any action, you'd want to reconstruct the chain: what was the request, which system picked it up, what tolerance band was in force, which Beliefs were invoked, which data it pulled, which Value those Beliefs rolled up to.
Scoping makes it harder. A project inherits from an account, which inherits from the business. A log has to capture which scope governed at the moment the decision happened, because the same prompt under a tight project band and a loose account band will produce different outputs.
The shape of it is a beta-discovery question. Thin aggregate logs on every action, deep capture on the small subset that trips a flag, tolerance-aware compression to control volume. The value isn't in the one record you pull. It's in the shape of the distribution you can query.
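One possible shape for the record, with hypothetical field names, following the thin-log/deep-capture split described above:

```python
from dataclasses import dataclass, field
import time

@dataclass
class DecisionRecord:
    request: str
    system: str            # which system picked it up
    scope: str             # project / account / business at decision time
    tolerance_band: int    # the band in force at that moment
    beliefs_invoked: list[str]
    data_pulled: list[str]
    value: str             # the Value those Beliefs roll up to
    flagged: bool = False  # trips deep capture
    ts: float = field(default_factory=time.time)

def log(record: DecisionRecord, store: list[dict]) -> None:
    """Thin aggregate log on every action; the full chain only when flagged."""
    thin = {"ts": record.ts, "system": record.system,
            "scope": record.scope, "band": record.tolerance_band}
    store.append(vars(record) if record.flagged else thin)
```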
This is a living framework. The articulation is the breakthrough; the holes are expected. There are edges I'm still working.
The framework is the artifact, not the endpoint. Every project I take it into will push against a different edge, and that's where the next version gets written. Build for the real workflow. Lower the friction between the system and the judgment it's supposed to carry.