Mechanistic Interpretability: Part 4 - Mathematical Framework for Transformer Circuits
Following our exploration of polysemanticity and monosemanticity in Part 3, we now turn to the specific architecture that dominates modern AI: the Transformer. To understand the learned algorithms within these powerful models, we need a robust mathematical framework that allows for systematic decomposition and analysis of their internal workings, particularly their attention mechanisms and information flow. This part lays out such a framework, building on the conceptual foundations from Part 1 and Part 2.
Deconstructing the Transformer
Transformers, while complex, possess a highly structured architecture that lends itself to mathematical decomposition. Key components include token embeddings, positional encodings, multiple layers of attention and MLP (Multi-Layer Perceptron) blocks, and a final unembedding step. For the development of a clear mathematical framework, we often initially simplify by focusing on attention-only models or omitting biases and layer normalization, as these can be added back later without fundamentally altering the core computational pathways.
The Residual Stream: A Central Communication Bus
The residual stream is arguably the most critical architectural feature for enabling mechanistic analysis. At each layer $l$, the output is the sum of the input from the previous layer and the computations performed by the current layer’s components (e.g., attention head outputs, MLP outputs):

$$x_{l+1} = x_l + \sum_i f_i^{(l)}(x_l)$$

where $f_i^{(l)}(x_l)$ is the output of the $i$-th component in layer $l$ (e.g., an attention head or an MLP block), which itself is a function of the input to that layer, $x_l$. This additive, linear structure means the residual stream acts like a shared communication bus or a “results highway.” Any component can read from the accumulated state of the stream and write its own contribution back. This has profound implications:
- Linearity for Analysis: The primary information pathway is linear, allowing for techniques like path expansion and virtual weight computation.
- Non-Privileged Basis: The residual stream itself doesn’t inherently have a privileged basis. Any global orthogonal transformation applied consistently to all interacting weight matrices would leave the model functionally unchanged. This reinforces the idea (from Part 2) that features are directions, not necessarily neuron alignments.
- Superposition at Scale: With many components writing to and reading from a residual stream of fixed dimensionality ($d_{\text{model}}$), it naturally becomes a place where multiple signals (feature activations) are superposed.
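To make the additive structure concrete, here is a minimal NumPy sketch of a residual stream, assuming toy dimensions and random linear “components” (illustrative placeholders, not trained weights): the stream keeps a fixed width of $d_{\text{model}}$ while every component reads the current state and adds its contribution on top.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_layers, n_components = 64, 4, 3  # toy sizes, chosen only for illustration

# Start from one token's embedding vector (row-vector convention, as in the text).
x = rng.normal(size=(1, d_model))

for layer in range(n_layers):
    contributions = []
    for _ in range(n_components):
        # Each component is approximated here by a linear map W_out that reads the
        # *current* stream; see the later caveats about MLP non-linearities.
        W_out = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)
        contributions.append(x @ W_out)
    # x_{l+1} = x_l + sum_i f_i(x_l): purely additive, so later layers can still
    # access every earlier component's output as a linear function of it.
    x = x + sum(contributions)

print(x.shape)  # (1, 64): the stream's dimensionality never changes
```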
Virtual Weights: Unveiling Effective Connectivity
Because of the residual stream’s additive nature, a component in a later layer $l_2$ doesn’t just see the direct output of layer $l_2 - 1$; it effectively sees the sum of outputs from all preceding components in layers $l < l_2$ that wrote to the stream. Virtual weights quantify the effective linear transformation from the input of an earlier component (or its output contribution) to an input processing stage of a later component, considering all intermediate additions and transformations in the residual stream.
Let’s define some terms:
- Let $W_{\text{out}}^{A}$ be the effective output matrix of a component $A$ (e.g., an attention head or an MLP block) in layer $l_1$. If $x$ is the input to component $A$ from the residual stream, its output contribution to the stream is $x W_{\text{out}}^{A}$. For an attention head $A$, $W_{\text{out}}^{A}$ would be its value-output transformation $W_V^{A} W_O^{A}$ (a $d_{\text{model}} \times d_{\text{model}}$ matrix), assuming the attention pattern itself is fixed or we are analyzing a specific path of information flow through a value vector. For an MLP layer, if it’s a simple linear transformation, $W_{\text{out}}^{A}$ would be its weight matrix. If it’s a non-linear MLP (e.g., with ReLU), $W_{\text{out}}^{A}$ represents an effective linear matrix for a specific input or in an average sense, or considers the path through specific active neurons. For the purpose of linear path analysis, we often approximate non-linear components by their local linear behavior or focus on paths where non-linearities are fixed (e.g., a specific ReLU activation pattern). A common linearized formulation for a two-layer MLP is the product of its input and output weight matrices, $W_{\text{MLP,in}} W_{\text{MLP,out}}$ (again, a $d_{\text{model}} \times d_{\text{model}}$ matrix).
- Let $W_{\text{in}}^{B}$ be an input projection matrix of a component $B$ in layer $l_2$. For an attention head $B$, this could be its query matrix ($W_Q^{B}$), key matrix ($W_K^{B}$), or value matrix ($W_V^{B}$).
1. Direct Virtual Weight (No Intermediate Layers, i.e., $l_2 = l_1 + 1$, or within the same layer if analyzing parallel components):
If component $A$ outputs $x W_{\text{out}}^{A}$ to the stream, and component $B$ (in the next layer, or a later component in the same layer reading from the updated stream) uses an input projection $W_{\text{in}}^{B}$, the part of $B$’s projected input that comes from $A$ via $x$ is $x W_{\text{out}}^{A} W_{\text{in}}^{B}$. The direct virtual weight matrix mapping the input $x$ (that fed into $A$) to this specific contribution at $B$’s input projection is:

$$W_{\text{virtual}}^{A \to B} = W_{\text{out}}^{A} W_{\text{in}}^{B}$$

For example, the virtual weight from the input of Head $h_1$’s OV circuit (matrix $W_{OV}^{h_1} = W_V^{h_1} W_O^{h_1}$) to the Query input projection of Head $h_2$ (matrix $W_Q^{h_2}$) in an immediately subsequent processing step is $W_{OV}^{h_1} W_Q^{h_2}$. This resulting matrix has dimensions $d_{\text{model}} \times d_{\text{head}}$.
2. Virtual Weight Across Intermediate Layers:
Now, consider components $A$ in layer $l_1$ and $B$ in a later layer $l_2$ ($l_2 > l_1 + 1$). The signal from $A$ passes through the intermediate layers $l$ (for $l_1 < l < l_2$).
Each intermediate layer applies a linear transformation to the signal passing through its residual stream. If layer $l$ contains components $C_j$ (heads or MLPs) with effective output matrices $W_{\text{out}}^{C_j}$ (as defined above, noting the linear approximation for MLPs if non-linear), then a signal $v$ entering layer $l$ from the previous layer’s residual stream is transformed to $v \big( I + \sum_j W_{\text{out}}^{C_j} \big)$ upon exiting layer $l$. Let $T_l = I + \sum_j W_{\text{out}}^{C_j}$ be this full linear transformation for layer $l$, representing the cumulative effect of all parallel components in that layer on a signal passing through the residual stream.
The output contribution $x W_{\text{out}}^{A}$ from component $A$ (where $x$ was its input from the stream) becomes $x W_{\text{out}}^{A} \prod_{l=l_1+1}^{l_2-1} T_l$ by the time it reaches the input of layer $l_2$. This transformed signal is then processed by $B$’s input projection $W_{\text{in}}^{B}$. Thus, the full virtual weight matrix from the input of component $A$ to the specific projected input of component $B$ is:

$$W_{\text{virtual}}^{A \to B} = W_{\text{out}}^{A} \left( \prod_{l=l_1+1}^{l_2-1} T_l \right) W_{\text{in}}^{B}$$

If there are no intermediate layers ($l_2 = l_1 + 1$), the product term is empty (an identity matrix), reducing to the direct case. This concept is crucial for understanding how non-adjacent layers and components influence each other, effectively forming long-range circuits by composing these linear transformations.
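As a sketch of how such virtual weights could be computed in practice, the snippet below builds both the direct case and the across-layer case from random toy matrices. The names (`W_out_A`, `layer_transform`, and so on) are illustrative assumptions rather than any library’s API, and every intermediate layer is treated as purely linear, per the approximation discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, d_head = 64, 16  # toy sizes

# Component A in layer l1: an attention head's OV map, W_out^A = W_V^A @ W_O^A.
W_V_A = rng.normal(size=(d_model, d_head)) / np.sqrt(d_model)
W_O_A = rng.normal(size=(d_head, d_model)) / np.sqrt(d_head)
W_out_A = W_V_A @ W_O_A                       # (d_model, d_model)

# Component B in layer l2 reads the stream through its query projection W_in^B = W_Q^B.
W_Q_B = rng.normal(size=(d_model, d_head)) / np.sqrt(d_model)

# Direct case (l2 = l1 + 1): just the product of the two maps.
W_virtual_direct = W_out_A @ W_Q_B            # (d_model, d_head)

# Across-layer case: each intermediate layer contributes T_l = I + sum_j W_out^{C_j}.
def layer_transform(n_components):
    T = np.eye(d_model)
    for _ in range(n_components):
        W = rng.normal(size=(d_model, d_model)) / d_model
        T = T + W                             # parallel components add into the stream
    return T

T_product = np.eye(d_model)
for _ in range(2):                            # two layers strictly between l1 and l2
    T_product = T_product @ layer_transform(n_components=3)

W_virtual = W_out_A @ T_product @ W_Q_B       # (d_model, d_head)
print(W_virtual_direct.shape, W_virtual.shape)
```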
Decomposing the Attention Head
The attention mechanism is the heart of the Transformer. It dynamically routes information based on context. An attention head computes its output by attending to various positions in the input sequence and constructing a weighted sum of their value vectors.
Mathematically, for a single attention head, given input token representations $x_i, x_j \in \mathbb{R}^{d_{\text{model}}}$, the head first projects these into Query ($q_i$), Key ($k_j$), and Value ($v_j$) vectors for each token $i$ (query) and $j$ (key/value source) using weight matrices $W_Q, W_K, W_V \in \mathbb{R}^{d_{\text{model}} \times d_{\text{head}}}$:

$$q_i = x_i W_Q, \qquad k_j = x_j W_K, \qquad v_j = x_j W_V$$

(Note: $q_i$, $k_j$, and $v_j$ are row vectors of dimension $d_{\text{head}}$.)
Attention scores are computed as the dot product of a query vector with a key vector, scaled by $\sqrt{d_{\text{head}}}$:

$$s_{ij} = \frac{q_i k_j^T}{\sqrt{d_{\text{head}}}}$$

These scores are then normalized via Softmax across all source positions $j$ to get attention weights $\alpha_{ij} = \text{softmax}_j(s_{ij})$. The output for query token $i$ from this head, before the final output projection, is a weighted sum of value vectors: $z_i = \sum_j \alpha_{ij} v_j$.
This output $z_i$ (a $d_{\text{head}}$-dimensional row vector) is then projected back to the model dimension using the output weight matrix $W_O \in \mathbb{R}^{d_{\text{head}} \times d_{\text{model}}}$. The head’s final contribution to the residual stream for token $i$ is

$$h_i = z_i W_O = \sum_j \alpha_{ij}\, x_j W_V W_O.$$
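A compact NumPy sketch of this single-head computation, using the row-vector convention from above and random toy weights (so the attention pattern itself is meaningless, but the shapes and operations mirror the equations):

```python
import numpy as np

def attention_head(X, W_Q, W_K, W_V, W_O):
    """One causal attention head; X is (seq, d_model) in row-vector convention."""
    seq, d_head = X.shape[0], W_Q.shape[1]
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V                   # (seq, d_head) each
    scores = (Q @ K.T) / np.sqrt(d_head)                  # s_ij = q_i . k_j / sqrt(d_head)
    scores = np.where(np.tril(np.ones((seq, seq), dtype=bool)), scores, -np.inf)
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A = A / A.sum(axis=-1, keepdims=True)                 # attention weights alpha_ij
    Z = A @ V                                             # z_i = sum_j alpha_ij v_j
    return Z @ W_O, A                                     # head output h_i and the pattern

# Toy usage with random (untrained) weights.
rng = np.random.default_rng(2)
seq, d_model, d_head = 5, 32, 8
X = rng.normal(size=(seq, d_model))
W_Q, W_K, W_V = (rng.normal(size=(d_model, d_head)) / np.sqrt(d_model) for _ in range(3))
W_O = rng.normal(size=(d_head, d_model)) / np.sqrt(d_head)
out, attn = attention_head(X, W_Q, W_K, W_V, W_O)
print(out.shape, attn.shape)  # (5, 32) (5, 5)
```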
This mechanism can be decomposed into two key conceptual circuits:
- Query-Key (QK) Circuit: Determines where to attend. The QK circuit computes the attention scores $s_{ij}$ (before softmax). The core of this computation is the term $q_i k_j^T$. Let’s derive its form in terms of the original residual stream vectors $x_i$ and $x_j$:

  $$q_i k_j^T = (x_i W_Q)(x_j W_K)^T$$

  Using the matrix transpose property $(AB)^T = B^T A^T$, we have $(x_j W_K)^T = W_K^T x_j^T$. Substituting this back, we get:

  $$q_i k_j^T = x_i W_Q W_K^T x_j^T$$

  This expression shows that the unnormalized attention score between token $i$ and token $j$ is a bilinear form $x_i W_{QK}\, x_j^T$ with $W_{QK} = W_Q W_K^T$.
  The matrix $W_{QK}$ is an effective $d_{\text{model}} \times d_{\text{model}}$ matrix that defines how pairs of token representations in the residual stream are compared to produce attention scores. Since $W_Q$ is $d_{\text{model}} \times d_{\text{head}}$ and $W_K^T$ is $d_{\text{head}} \times d_{\text{model}}$, the rank of $W_{QK}$ is at most $d_{\text{head}}$, which is typically much smaller than $d_{\text{model}}$. This low-rank structure implies that the QK circuit is specialized in comparing specific types of information, effectively projecting the $d_{\text{model}}$-dimensional token representations into a shared $d_{\text{head}}$-dimensional space for comparison.
- Output-Value (OV) Circuit: Determines what information to move from the attended positions and how it’s transformed. Once attention weights are computed, the OV circuit processes the value vectors. The full transformation from an original token representation $x_j$ (at a source position $j$) to its potential contribution to the output (if fully attended, i.e., $\alpha_{ij} = 1$) is $x_j W_V W_O$.
  The matrix $W_{OV} = W_V W_O$ is an effective $d_{\text{model}} \times d_{\text{model}}$ matrix. (Since $W_V$ is $d_{\text{model}} \times d_{\text{head}}$ and $W_O$ is $d_{\text{head}} \times d_{\text{model}}$, their product is $d_{\text{model}} \times d_{\text{model}}$.)
  This matrix describes the transformation applied to a value vector (derived from a token representation $x_j$ in the residual stream) before it’s written back to the residual stream at position $i$. Its rank is also at most $d_{\text{head}}$. For example, if $W_{OV} \approx I$ (identity matrix), the head primarily copies information from attended positions. If it’s different, it transforms the information.

Analyzing the properties (e.g., SVD, eigenvalues) of $W_{QK}$ and $W_{OV}$ reveals the specific attention patterns and information processing strategies of individual heads.
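The low-rank structure of both circuits is easy to check numerically. The sketch below forms $W_{QK}$ and $W_{OV}$ from random toy projections (illustrative only) and inspects their rank and singular values:

```python
import numpy as np

rng = np.random.default_rng(3)
d_model, d_head = 64, 8

W_Q = rng.normal(size=(d_model, d_head)) / np.sqrt(d_model)
W_K = rng.normal(size=(d_model, d_head)) / np.sqrt(d_model)
W_V = rng.normal(size=(d_model, d_head)) / np.sqrt(d_model)
W_O = rng.normal(size=(d_head, d_model)) / np.sqrt(d_head)

W_QK = W_Q @ W_K.T   # (d_model, d_model): the bilinear form x_i W_QK x_j^T
W_OV = W_V @ W_O     # (d_model, d_model): what gets moved if a position is attended to

# Both act on d_model dimensions but have rank at most d_head.
print(np.linalg.matrix_rank(W_QK), np.linalg.matrix_rank(W_OV))  # 8 8

# SVD exposes the d_head-dimensional subspaces the head reads from and writes to;
# singular values beyond d_head are numerically zero.
singular_values = np.linalg.svd(W_OV, compute_uv=False)
print(singular_values[: d_head + 2].round(3))
```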
Path Expansion and Compositional Power
The overall computation of a Transformer can be viewed as a sum over all possible paths that information can take from the input embedding to the final output logits. Each path involves a sequence of components (attention heads, MLP layers) and their respective weight matrices. While attention introduces non-linearity via softmax, analyzing specific paths (e.g., by fixing attention patterns or looking at linear segments of the computation) is a key strategy.
For an attention-only transformer, the output logit for a candidate token $t'$ at position $i$ can be written as the unembedding applied to the accumulated residual stream:

$$\text{logits}_i = \left( x_i^{(0)} + \sum_{l} \sum_{h} h_i^{(l,h)} \right) W_U$$

where $W_E$ is the token embedding matrix (one row per token, $|V| \times d_{\text{model}}$, so $x_i^{(0)} = t_i W_E$ for the one-hot token $t_i$), $W_U$ is the unembedding matrix (often $W_U = W_E^T$, so $d_{\text{model}} \times |V|$), and $h_i^{(l,h)}$ is the output vector added to the residual stream by head $h$ in layer $l$. The logit for $t'$ is the $t'$-th entry of $\text{logits}_i$.
This can be expanded. For instance, the output contribution of head $h$ in layer $l$ acting on the stream input $x^{(l-1)}$ (the output of layer $l-1$) is $h_i^{(l,h)} = \sum_j \alpha_{ij}^{(l,h)}\, x_j^{(l-1)} W_{OV}^{(l,h)}$.
The simplest paths are:
- Zero-Layer Path: The direct connection from embedding to unembedding. If token $t$ at position $i$ has embedding vector $x_i^{(0)} = t\, W_E$, then the direct contribution to logits is $t\, W_E W_U$. This path effectively captures token co-occurrence statistics similar to bigrams if $W_U$ is not simply $W_E^T$ (with tied weights it reduces to embedding similarity, as discussed below).
- One-Layer Paths: Paths passing through a single attention head. The term $\sum_j \alpha_{ij}^{h}\, x_j^{(0)} W_{OV}^{h} W_U$ describes the influence of head $h$ acting on the initial embedding $x_j^{(0)}$ (if it’s in the first layer) on the logit for token $t'$ (its $t'$-th entry). This can implement more complex statistics like skip-trigrams.
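The snippet below sketches these two path types as explicit vocabulary-to-vocabulary matrices, assuming tied embeddings and random head weights purely for illustration. Entry $[s, t]$ of each matrix is that path’s contribution to token $t$’s logit when reading source token $s$ (for the head path, before weighting by the attention pattern $\alpha_{ij}$).

```python
import numpy as np

rng = np.random.default_rng(4)
vocab, d_model, d_head = 50, 32, 8   # toy sizes

W_E = rng.normal(size=(vocab, d_model)) / np.sqrt(d_model)   # embedding
W_U = W_E.T                                                  # tied unembedding (one common choice)

# Zero-layer path: embed -> unembed, a (vocab, vocab) bigram-like table.
direct_path = W_E @ W_U

# One-layer path through a single head: embed -> OV -> unembed.
W_V = rng.normal(size=(d_model, d_head)) / np.sqrt(d_model)
W_O = rng.normal(size=(d_head, d_model)) / np.sqrt(d_head)
head_path = W_E @ (W_V @ W_O) @ W_U

print(direct_path.shape, head_path.shape)  # (50, 50) (50, 50)
```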
Composition Mechanisms: The Source of Complex Algorithms
The true power of multi-layer Transformers comes from composition, where the output of earlier components influences the computation of later components. This is where virtual weights become essential for analysis. For attention heads, this occurs in three main ways:
- Q-Composition (Query Composition): The output of head $h_1$ (in layer $l_1$) modifies the residual stream. When head $h_2$ (in layer $l_2 > l_1$) computes its Query vector, it reads from this modified stream. Thus, $h_1$ influences what $h_2$ attends to. Let $x$ be the input to $h_1$’s OV circuit (matrix $W_{OV}^{h_1}$). Its output contribution is $x W_{OV}^{h_1}$. If there are intermediate layers transforming this by a product of matrices $T = \prod_{l=l_1+1}^{l_2-1} T_l$, this signal becomes $x W_{OV}^{h_1} T$ as it enters the layer of $h_2$.
  The query vector for $h_2$ is formed from the stream $x'$ as $q = x' W_Q^{h_2}$. The part of this query that comes from $h_1$ via $x$ is $x W_{OV}^{h_1} T\, W_Q^{h_2}$.
  The virtual weight matrix for this Q-composition path is $W_{OV}^{h_1} T\, W_Q^{h_2}$.
- K-Composition (Key Composition): Similarly, $h_1$ can influence the Key vectors that $h_2$ uses for comparison. The output from $h_1$ ($x W_{OV}^{h_1}$) influences the stream from which $h_2$ forms its key vectors $k = x' W_K^{h_2}$. The virtual weight matrix for this K-composition path (from the input of $h_1$’s OV to $h_2$’s K-projection) is $W_{OV}^{h_1} T\, W_K^{h_2}$.
- V-Composition (Value Composition): And $h_1$ can influence the Value vectors that $h_2$ aggregates. The output from $h_1$ ($x W_{OV}^{h_1}$) influences the stream from which $h_2$ forms its value vectors $v = x' W_V^{h_2}$. This then passes through $h_2$’s output projection $W_O^{h_2}$. The virtual weight matrix for this V-composition path (from the input of $h_1$’s OV to $h_2$’s OV output) is $W_{OV}^{h_1} T\, W_{OV}^{h_2}$.
These composition mechanisms, understood via virtual weights, allow for the construction of virtual attention heads, where the combined effect of multiple heads implements a more complex attention pattern or information transformation than any single head could. For instance, K-composition is fundamental to induction heads (explored in Part 9).
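As a sketch of how these three composition terms could be computed for a pair of hypothetical heads, the snippet below uses random toy weights and assumes no intermediate layers (so $T = I$). The norm ratio at the end is only a crude stand-in for the composition-strength metrics used in the literature, not a definitive measure.

```python
import numpy as np

rng = np.random.default_rng(5)
d_model, d_head = 64, 8

def head_weights():
    W_Q = rng.normal(size=(d_model, d_head)) / np.sqrt(d_model)
    W_K = rng.normal(size=(d_model, d_head)) / np.sqrt(d_model)
    W_V = rng.normal(size=(d_model, d_head)) / np.sqrt(d_model)
    W_O = rng.normal(size=(d_head, d_model)) / np.sqrt(d_head)
    return W_Q, W_K, W_V, W_O

W_Q1, W_K1, W_V1, W_O1 = head_weights()    # head h1 (earlier layer)
W_Q2, W_K2, W_V2, W_O2 = head_weights()    # head h2 (later layer)

W_OV1 = W_V1 @ W_O1
T = np.eye(d_model)                         # no intermediate layers in this sketch

q_comp = W_OV1 @ T @ W_Q2                   # Q-composition: (d_model, d_head)
k_comp = W_OV1 @ T @ W_K2                   # K-composition: (d_model, d_head)
v_comp = W_OV1 @ T @ (W_V2 @ W_O2)          # V-composition: (d_model, d_model)

# A rough "how strongly does h1 feed h2" score: the composition term's norm
# relative to the norms of its factors (larger = more aligned subspaces).
k_strength = np.linalg.norm(k_comp) / (np.linalg.norm(W_OV1) * np.linalg.norm(W_K2))
print(q_comp.shape, k_comp.shape, v_comp.shape, round(float(k_strength), 4))
```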
Emergent Complexity in Two-Layer Models
While a zero-layer Transformer is limited to bigrams and a one-layer attention-only Transformer to skip-trigrams, a two-layer Transformer can already exhibit qualitatively new capabilities due to composition. For example, an induction head typically requires at least two heads working in sequence:
- A “previous token” head (Head 1) in an earlier layer copies (parts of) the preceding token’s representation into the residual stream at each position.
- An “induction” head (Head 2) in a later layer uses this copied representation. Specifically, via K-composition, the Key vectors generated by Head 2 for previous tokens in the sequence are modulated by the output of Head 1. If Head 2 is looking for the token that followed previous instances of token “A”, its Query vector (also potentially influenced by Head 1’s output via Q-composition) will match strongly with Key vectors of tokens that are “A”, and the overall QK circuit of Head 2 is further specialized to shift attention to the token after these matched tokens. The OV circuit of Head 2 then copies this successfully identified token. This is a form of in-context learning that is impossible with single heads in isolation.
Let’s derive the explicit mathematical formulations for zero-layer, one-layer, and two-layer transformers to better understand this emergent complexity:
Zero-Layer Transformer: Direct Token Mapping
In a zero-layer transformer, we have direct connections from token embeddings to output logits without any intermediate attention or MLP layers. The mathematical formulation for predicting the next token is simply:

$$\text{logit}_i(t') = \left( \mathbf{E}_{t_i} \mathbf{U} \right)_{t'}$$

Where:
- $\mathbf{E} \in \mathbb{R}^{|V| \times d_{\text{model}}}$ is the embedding matrix (with $\mathbf{E}_{t}$ denoting the row for token $t$)
- $\mathbf{U} \in \mathbb{R}^{d_{\text{model}} \times |V|}$ is the unembedding matrix
- $i$ is the position of the input token $t_i$
- $t'$ is the candidate output token (in the vocabulary)

When the unembedding matrix is (approximately) the transpose of the embedding matrix ($\mathbf{U} \approx \mathbf{E}^T$), this computation reduces to measuring token similarity:

$$\text{logit}_i(t') \approx \mathbf{E}_{t_i} \mathbf{E}_{t'}^T$$
This formulation can only capture simple bigram statistics based on embedding similarity. The zero-layer transformer effectively learns which tokens tend to follow other tokens directly, without any contextual understanding.
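A tiny numerical check of this reduction, assuming tied weights and random toy embeddings:

```python
import numpy as np

rng = np.random.default_rng(6)
vocab, d_model = 10, 16

E = rng.normal(size=(vocab, d_model))
U = E.T                                  # tied unembedding: U = E^T

token = 3                                # current token id at position i
logits = E[token] @ U                    # one row of E, mapped to vocabulary space

# With tied weights this is exactly embedding similarity: E[token] . E[t'] for each t'.
similarity = E @ E[token]
print(np.allclose(logits, similarity))   # True
```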
One-Layer Transformer: Attention-Based Contextual Processing
A one-layer transformer introduces attention mechanisms between the embedding and unembedding steps. For a model with $H$ attention heads, the logits are computed as:

$$\text{logits}_i = \left( \mathbf{E}_{t_i} + \sum_{h=1}^{H} \mathbf{h}_i^{(h)} \right) \mathbf{U}$$

For each attention head $h$ processing position $i$, the output contribution is:

$$\mathbf{h}_i^{(h)} = \sum_{j \le i} \alpha_{ij}^{(h)} \left( \mathbf{E}_{t_j} W_V^{(h)} \right) W_O^{(h)}$$

Where the attention weights $\alpha_{ij}^{(h)}$ are calculated using softmax over attention scores:

$$\alpha_{ij}^{(h)} = \frac{\exp\!\left(s_{ij}^{(h)}\right)}{\sum_{j' \le i} \exp\!\left(s_{ij'}^{(h)}\right)}$$

And the attention scores are:

$$s_{ij}^{(h)} = \frac{\left( \mathbf{E}_{t_i} W_Q^{(h)} \right)\left( \mathbf{E}_{t_j} W_K^{(h)} \right)^T}{\sqrt{d_{\text{head}}}}$$

This can be rewritten using the effective QK matrix as described earlier:

$$s_{ij}^{(h)} = \frac{\mathbf{E}_{t_i}\, W_{QK}^{(h)}\, \mathbf{E}_{t_j}^T}{\sqrt{d_{\text{head}}}}$$

Where $W_{QK}^{(h)} = W_Q^{(h)} \big( W_K^{(h)} \big)^T$.
The one-layer transformer can learn to selectively attend to previous tokens based on their relevance to the current position. This allows it to implement skip-trigram patterns by, for example, having position $i$ attend strongly to an earlier position $j$ and using tokens $t_j$ and $t_i$ together to predict the next token.
However, a one-layer transformer cannot implement the induction pattern (copying a token that followed a similar context elsewhere in the sequence) because each head operates independently on the original token embeddings.
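Putting the pieces together, here is a sketch of a full one-layer, attention-only forward pass on a toy token sequence with random weights: the logits are the direct (zero-layer) path plus one OV-mediated path per head, mirroring the equations above. All names and sizes are illustrative assumptions.

```python
import numpy as np

def causal_softmax(scores):
    seq = scores.shape[0]
    scores = np.where(np.tril(np.ones((seq, seq), dtype=bool)), scores, -np.inf)
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(7)
vocab, d_model, d_head, H = 20, 32, 8, 2    # toy sizes; H attention heads
tokens = np.array([4, 7, 4, 9, 4])           # a short toy token sequence

E = rng.normal(size=(vocab, d_model)) / np.sqrt(d_model)
U = E.T                                      # tied unembedding
X0 = E[tokens]                               # (seq, d_model): initial residual stream

stream = X0.copy()
for _ in range(H):
    W_Q, W_K, W_V = (rng.normal(size=(d_model, d_head)) / np.sqrt(d_model) for _ in range(3))
    W_O = rng.normal(size=(d_head, d_model)) / np.sqrt(d_head)
    # In a one-layer model every head reads only the embeddings, then adds to the stream.
    alpha = causal_softmax((X0 @ W_Q) @ (X0 @ W_K).T / np.sqrt(d_head))
    stream = stream + alpha @ (X0 @ W_V) @ W_O

logits = stream @ U                          # direct path + H one-layer head paths
print(logits.shape)                          # (5, 20)
```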
Two-Layer Transformer: Composition and Emergent Capabilities
In a two-layer transformer, the output of the first layer’s attention heads becomes the input for the second layer’s heads, enabling composition. For a model with $H_1$ heads in layer 1 and $H_2$ heads in layer 2, the logits are:

$$\text{logits}_i = \mathbf{x}_i^{(2)} \mathbf{U}$$

Where $\mathbf{x}_i^{(2)}$ is the residual stream after layer 2:

$$\mathbf{x}_i^{(2)} = \mathbf{x}_i^{(1)} + \sum_{h=1}^{H_2} \mathbf{h}_i^{(2,h)}\!\left(\mathbf{x}^{(1)}\right)$$

And $\mathbf{x}_i^{(1)}$ is the residual stream after layer 1:

$$\mathbf{x}_i^{(1)} = \mathbf{E}_{t_i} + \sum_{h=1}^{H_1} \mathbf{h}_i^{(1,h)}\!\left(\mathbf{x}^{(0)}\right), \qquad \mathbf{x}_j^{(0)} = \mathbf{E}_{t_j}$$
Let’s consider the induction head mechanism in detail. Suppose we’re at position $i$ in the sequence, and we’ve previously seen the pattern “A B” somewhere earlier in the sequence. Now at position $i$ we see token “A” again, and we want to predict “B” at position $i+1$. This requires:
- A “previous token” head ($h_1$) in layer 1 that, at each position $j$, copies token $t_{j-1}$’s representation into the residual stream at position $j$ (so, in particular, the representation of “A” is available one position after each of its occurrences):

  $$\mathbf{h}_j^{(1,h_1)} \approx \mathbf{E}_{t_{j-1}} W_{OV}^{h_1}$$

  This is achieved by having the OV circuit of $h_1$ approximate the identity function ($W_{OV}^{h_1} \approx I$) and having the QK circuit attend to the previous token.
- An “induction” head ($h_2$) in layer 2 that:
  a. Forms query vectors from the updated stream at position $i$, which now contains information about token $t_i$ (i.e., “A”):

  $$\mathbf{q}_i^{h_2} = \mathbf{x}_i^{(1)} W_Q^{h_2}$$

  This query includes contributions from $h_1$:

  $$\mathbf{q}_i^{h_2} = \left( \mathbf{E}_{t_i} + \mathbf{h}_i^{(1,h_1)} + \dots \right) W_Q^{h_2}$$

  b. Forms key vectors for previous positions $j < i$:

  $$\mathbf{k}_j^{h_2} = \mathbf{x}_j^{(1)} W_K^{h_2} = \left( \mathbf{E}_{t_j} + \mathbf{h}_j^{(1,h_1)} + \dots \right) W_K^{h_2}$$

  c. Computes attention scores:

  $$s_{ij}^{h_2} = \frac{\mathbf{q}_i^{h_2} \left( \mathbf{k}_j^{h_2} \right)^T}{\sqrt{d_{\text{head}}}}$$

  The virtual weight matrix for this K-composition path is:

  $$W_{OV}^{h_1} W_K^{h_2}$$

  If $W_{QK}^{h_2}$ is structured appropriately, $h_2$ will attend strongly to positions where the token is the same as token $t_i$ (i.e., other occurrences of “A”).
  d. Once $h_2$ attends to previous occurrences of “A”, it then needs to shift attention to the tokens that followed them (i.e., “B”). This can be implemented through appropriate training of the QK circuit to focus on the token at position $j+1$ when matching with the token at position $j$.
  e. Finally, the OV circuit of $h_2$ copies the attended token (“B”) to position $i$:

  $$\mathbf{h}_i^{(2,h_2)} \approx \mathbf{E}_{t_{j+1}} W_{OV}^{h_2}$$

  Where $j$ is the position of a previous occurrence of “A” (so $t_{j+1}$ is “B”).
This complex interaction between heads across layers enables the two-layer transformer to implement in-context learning - predicting “B” after seeing “A” based on previously observed “A B” patterns. This capability emerges from the composition of simpler operations and cannot be achieved in models with fewer layers.
The key insight is that the output of the first layer’s heads modifies the residual stream in a way that influences the attention patterns of the second layer’s heads. This composition enables the emergence of algorithmic capabilities that transcend what each individual head can do in isolation.
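To make this concrete, the sketch below hand-constructs a toy induction circuit rather than training one: one-hot token embeddings occupy the first half of the residual stream, a hard-coded “previous token” head writes the preceding token into the second half, and the induction head’s key projection reads exactly that block, so $W_{OV}^{h_1} W_K^{h_2} \neq 0$ (K-composition). This variant folds the “shift to the following token” into $h_1$’s copy rather than into $h_2$’s QK circuit; all names and dimensions are illustrative assumptions.

```python
import numpy as np

vocab = 5
d = 2 * vocab                        # stream blocks: [current-token | previous-token]
tokens = np.array([1, 3, 2, 4, 1])   # "A B ... A" with A=1, B=3; we want B next

E = np.eye(vocab, d)                 # one-hot embeddings living in the first block
X0 = E[tokens]                       # (seq, d)
seq = len(tokens)

# Layer-1 "previous token" head h1, hard-coded: attend to position j-1 and copy that
# token's one-hot into the second block (playing the role of W_OV^{h1} ~ identity).
prev_attn = np.zeros((seq, seq))
prev_attn[np.arange(1, seq), np.arange(seq - 1)] = 1.0     # alpha_{j, j-1} = 1
W_OV_h1 = np.zeros((d, d)); W_OV_h1[:vocab, vocab:] = np.eye(vocab)
X1 = X0 + prev_attn @ X0 @ W_OV_h1                          # residual stream after layer 1

# Layer-2 "induction" head h2: its query reads the current-token block, its key reads
# the block h1 just wrote. That overlap is exactly K-composition.
W_Q_h2 = np.zeros((d, vocab)); W_Q_h2[:vocab, :] = np.eye(vocab)
W_K_h2 = np.zeros((d, vocab)); W_K_h2[vocab:, :] = np.eye(vocab)

q_last = X1[-1] @ W_Q_h2             # query at the final position (the new "A")
keys = X1 @ W_K_h2                   # keys for every position
scores = 10.0 * (keys @ q_last)      # sharpened match (stands in for a learned scale)
scores[-1] = -np.inf                 # causal: don't attend to the query position itself
attn = np.exp(scores - scores.max()); attn /= attn.sum()

print(attn.round(3))                 # attention mass concentrates on position 1, the earlier "B"
print(tokens[int(attn.argmax())])    # -> 3, i.e. "B": what h2's OV circuit would copy out
```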
Conclusion
This mathematical framework provides the tools to dissect Transformers into their constituent computational parts: the residual stream as a communication bus, attention heads decomposed into QK and OV circuits, and the powerful concept of composition that allows simple components to build complex algorithms. By analyzing virtual weights, path expansions, and composition strengths, we can start to reverse-engineer the specific computations learned by these models.
This foundation is crucial for understanding phenomena like superposition within these architectures and for developing techniques to extract and validate the features and circuits that implement their remarkable capabilities. In Part 5, we will explore the validation of learned features and circuits, building on the mathematical framework established here.
References and Further Reading
This framework is primarily based on the work by Elhage et al. and Olsson et al. in the Transformer Circuits Thread:
- Elhage, N., Nanda, N., Olsson, C., Henighan, T., Joseph, N., Mann, B., … & Olah, C. (2021). A Mathematical Framework for Transformer Circuits. Transformer Circuits Thread
- Olsson, C., Elhage, N., Nanda, N., Joseph, N., DasSarma, N., Henighan, T., … & Olah, C. (2022). In-context Learning and Induction Heads. Transformer Circuits Thread
- Insights on the residual stream and attention also draw from the original Transformer paper: Vaswani, A., et al. (2017). Attention Is All You Need. NeurIPS.