Part 2: What Alignment Actually Is

Alignment Engineering

Published: Apr 22, 2026 · 5 min read · Engineering
Author: Aditya Bhatnagar

By the end of this post, you will have a precise definition of alignment in software engineering, a five-layer decomposition of where alignment must hold, and an understanding of the three forces that cause alignment to decay by default. These are the concepts that everything in Parts 3 and 4 builds on. Get them right, and the rest of the series becomes actionable.

This is the second post in a four-part series on alignment engineering. In Part 1, we argued that coding agents have solved the generation problem but left the alignment problem untouched, and that the costliest failure mode in software engineering is building the wrong thing.

The short version, for readers arriving here directly: alignment is the convergence of what the customer needs, what the team understands, and what the software does. When those three agree, the system builds the right thing. When they diverge, the system builds something that works but solves the wrong problem. The rest of this post describes how that convergence holds or fails, precisely enough to engineer against.


Alignment is a word people use loosely. It shows up in retrospectives, in planning docs, in vague complaints that "we're not aligned." Everyone agrees it matters. Almost nobody defines it precisely enough to build around.

The definition I have found most useful is this:

Alignment is the convergence of multiple systems (people, documents, and code) onto the same understanding of what should happen and why.

When alignment holds, the customer's intent, the team's interpretation, and the software's execution all point at the same target: building the right thing. When it breaks, they diverge. The distance between them is the cost.

An Inspiration from Representation Theory

In his foundational text Artificial Intelligence, Patrick Henry Winston argued that representing a problem correctly almost solves it. A well-chosen representation exposes structure, makes the right operations obvious, and collapses apparent complexity into tractable steps. A poorly chosen representation hides structure and turns simple problems into intractable ones.

Alignment has the same property. Most teams fail to solve alignment because they have not represented it correctly. They treat it as a communication problem, or a documentation problem, or a "we need to have a kickoff meeting" problem. All of those are downstream symptoms. The upstream issue is that alignment is not one thing. It is a stack of distinct properties, operating at different layers, each of which can hold or fail independently. If you cannot name which layer is broken, you cannot fix it.

The rest of this post is an attempt to represent alignment precisely enough that it becomes engineerable.

The Three Things That Must Agree

At the highest level, alignment in software engineering requires three things to converge.

Intent is what the customer actually needs to change in their world. Not what they asked for. Not what they said in the first meeting. The real state transition from a reality that is not working to one that is.

Interpretation is what the team understands the problem to be. It is always a compressed version of intent. Some detail is lost, some assumptions are added. The question is whether the compression preserved the essence or distorted it.

Execution is what the software actually does when it runs. It is the final, concrete expression of every decision made upstream. It either changes the customer's reality in the way they needed, or it does not.

Perfect alignment means intent, interpretation, and execution collapse into one. In practice, they never perfectly overlap. So alignment is really about minimizing the drift between them, continuously, at every stage. A system that minimizes the drift builds the right thing. A system that does not will build something else.
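One way to make this concrete is to model each of the three as a set of atomic statements about the system and measure pairwise drift. This is a minimal illustrative sketch, not tooling from the series; the statement strings and the Jaccard-distance metric are assumptions chosen for clarity:

```python
# Hypothetical model: intent, interpretation, and execution as sets of
# atomic statements. Drift between two sets is Jaccard distance:
# 0.0 = identical understanding, 1.0 = no overlap at all.

def drift(a: set[str], b: set[str]) -> float:
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

intent = {"nurses log vitals in under 10s", "works offline on the ward"}
interpretation = {"nurses log vitals in under 10s", "requires hospital wifi"}
execution = {"nurses log vitals in under 10s", "requires hospital wifi"}

# Execution matches interpretation perfectly, yet both have drifted
# from intent: the build "converged" on the wrong target.
print(drift(interpretation, execution))  # 0.0
print(drift(intent, interpretation))     # ~0.67
```

Note that the smallest drift number here is also the least informative one: agreement between interpretation and execution says nothing about whether either matches intent.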

In our healthcare project's first build, interpretation and execution were reasonable. The PM wrote sensible requirements, the engineering lead built a competent technical plan, the coding agents produced working software. What had gone wrong was higher up the chain. Intent had been compressed through a single person's lens, and no one else was looking. The software converged with the interpretation. The interpretation had diverged from the intent. The system had produced something, but not the right thing.

Five Layers Where Alignment Holds or Fails

So far we have established that alignment requires three things to agree: intent, interpretation, and execution. Now we turn to the deeper decomposition. The three things are the what. The five layers are the how.

Each layer can be aligned or misaligned independently. A system that looks aligned at one layer can be profoundly broken at another.

  • Ontological alignment asks: do we agree on what things are? Does "user" mean the same thing to the engineer, the designer, and the customer? Does "fast" mean 100 milliseconds or "faster than the competitor"? Most misalignment starts here, with two people using the same word to mean different things and never discovering the difference until something breaks.

  • Teleological alignment asks: do we agree on what success looks like? The customer wants their workflow to feel effortless. The engineer reads that as a latency target. Operations reads it as uptime. These can coexist, but when they conflict, which wins? If the answer is not explicit, everyone resolves conflicts differently, and the system drifts.

  • Behavioral alignment asks: do we agree on what the system does in every situation, including the ones nobody planned for? The happy path is easy. The real test is what happens when input is ambiguous, when an edge case appears, when the spec is silent. Without explicit behavioral defaults, each developer fills the gaps with their own inference.

  • Temporal alignment asks: does all of the above stay true over time? People leave. Requirements evolve. The codebase changes through hundreds of small patches. What was aligned on day one drifts by day ninety unless something actively maintains it.

  • Reflexive alignment asks: does the system know when it is drifting? This is the highest layer. A system that can detect its own misalignment can correct itself. A system that cannot will feel fine right up until the customer says "this is not what I needed," and by then the cost of correction has multiplied.

Each layer is a different kind of agreement, and each has its own failure mode. A team can have perfect teleological alignment (everyone agrees on the goal) and catastrophic ontological drift (nobody realizes they mean different things by the same words). A team can have strong behavioral alignment at launch and lose all of it by the next quarter because temporal alignment was never maintained.
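The independence of the layers is the engineerable part: alignment holds only when every layer holds, so an audit has to check them one by one. The sketch below is an illustrative assumption, not a real Software Factory structure; the layer names and questions come from the post, while the `audit` values are an invented example of a team passing some layers and failing others:

```python
# Hypothetical sketch: the five layers as an explicit checklist,
# each of which can hold or fail independently.
from enum import Enum

class Layer(Enum):
    ONTOLOGICAL = "do we agree on what things are?"
    TELEOLOGICAL = "do we agree on what success looks like?"
    BEHAVIORAL = "do we agree on what the system does in every situation?"
    TEMPORAL = "does all of the above stay true over time?"
    REFLEXIVE = "does the system know when it is drifting?"

# Example: everyone agrees on the goal (teleological holds) while
# using the same words to mean different things (ontological fails).
audit = {
    Layer.ONTOLOGICAL: False,
    Layer.TELEOLOGICAL: True,
    Layer.BEHAVIORAL: True,
    Layer.TEMPORAL: False,
    Layer.REFLEXIVE: False,
}

aligned = all(audit.values())  # alignment requires every layer to hold
broken = [layer.name for layer, ok in audit.items() if not ok]
print(aligned, broken)  # False ['ONTOLOGICAL', 'TEMPORAL', 'REFLEXIVE']
```

The point of the structure is the `all(...)`: a single failing layer is enough to break alignment, no matter how strong the other four are.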

The reason this decomposition matters is the same reason Winston's representation principle matters. Once you can name the layer, you can see the failure. Once you can see the failure, you can engineer against it.

Five layers where alignment holds or fails

Why Alignment Decays by Default

We have represented alignment as three things that must agree, operating across five layers. Now we turn to the forces that pull them apart.

Alignment does not hold still. It degrades unless something actively maintains it. Three forces work against it continuously.

  • Lossy translation. Every time intent passes from one person to another, from one document to another, from one layer to another, information is lost. The customer's frustration becomes a sentence. The sentence becomes a specification. The specification becomes a task. Each step compresses. Five small losses compound into one large misalignment.

  • The butterfly effect. Small misunderstandings at the top cascade into large failures at the bottom. A slightly wrong definition in the requirements becomes a subtly wrong invariant in the blueprint, which becomes a meaningfully wrong behavior in the code. By the time the divergence is visible, the original cause is nearly impossible to trace.

  • Incentive drift. People work toward what they are measured on. If the system rewards velocity, people ship fast. If it rewards features completed, people complete features. Neither of those is the same as "did the customer's reality change?" The incentive structure quietly replaces the customer as the thing the system aligns to.

These three forces are always present. They are not bugs. They are the default behavior of any system that translates intent across layers, involves humans with incentives, and operates over time. Which is every software development system.
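The compounding in lossy translation is worth working through with numbers. Assuming, purely for illustration, that each handoff (customer to sentence, sentence to spec, spec to task, task to code, code to release) preserves 95% of the intent, the end-to-end fidelity after five hops is not 95%:

```python
# Illustrative arithmetic: small per-hop losses compound multiplicatively.
# The 0.95 figure is an assumption for the example, not a measured value.
fidelity_per_hop = 0.95
hops = 5
end_to_end = fidelity_per_hop ** hops
print(f"{end_to_end:.2f}")  # 0.77
```

Five small losses of 5% each leave you with roughly three quarters of the original intent, which is exactly the "five small losses compound into one large misalignment" pattern above.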

The implication is direct: alignment is not a state you achieve. It is a state you manufacture continuously, against forces that are always pulling it apart. A system that does not actively manufacture alignment will decay into misalignment. There is no neutral position. Building the right thing is not something that happens by default. It is something the structure must produce.

Three forces that pull alignment apart

Why This Matters for Everything That Follows

The reason to represent alignment this precisely is that most arguments about software development are actually arguments about which layer is failing. When someone says "we need better documentation," they usually mean temporal or ontological alignment. When someone says "the engineers don't understand the business," they usually mean teleological alignment. When someone says "we ship too much of the wrong thing," they usually mean reflexive alignment. These are different problems requiring different solutions, and treating them as one problem guarantees that none of them get solved.

In Part 3, we describe the seven properties we have found an aligned system must have. Each one addresses a specific way the layers can fail, and each one maps to a specific capability in Software Factory. The properties are derived from first principles rather than borrowed from existing methodology. They are what we have found to be necessary, not what is conventional.


Previous: Part 1: Why Building Fast Is Not Enough

Next: Part 3: Seven Properties of an Aligned System
