Hallucination and Context Filling: A Toy Model
Have you ever noticed that LLMs sometimes seem more likely to “hallucinate” or generate nonsensical information when their context window is packed with information? This post dives into a toy model to explore a potential reason for this. We’ll focus on a specific mechanism, in which the influence of each individual piece of information is scaled down by $1/N$ as more pieces (say, $N$ of them) arrive, and show how it can lead to a less “peaked” (flatter) probability distribution for the next word, possibly making the model more uncertain and prone to hallucination.
The Toy Model: Core Components
Let’s lay out the building blocks of our simplified model.
Context Richness and Logit Influence Vectors
Imagine the LLM processing $N$ distinct “contextual features” or pieces of information. As the input context gets richer (e.g., a longer prompt or conversation history), $N$ naturally increases. Each piece of information, let’s call it feature $i$ (for $i = 1, \dots, N$), tries to nudge the model’s prediction for the next word. We represent this nudge as an “effective logit influence vector” $\mathbf{v}_i$. This vector has a value for every word in the LLM’s vocabulary (size $V$), telling us how feature $i$ wants to increase or decrease the pre-softmax logit for each potential next word.
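As a concrete (if oversimplified) picture of this setup, here is a minimal NumPy sketch. The sizes `V` and `N` and the Gaussian draws are illustrative assumptions for this post, not anything measured from a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

V = 50   # vocabulary size (illustrative)
N = 8    # number of contextual features (illustrative)

# Row i is the effective logit influence vector v_i of feature i:
# one entry per vocabulary word, nudging that word's pre-softmax logit.
influences = rng.normal(size=(N, V))
```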
The Key Mechanism: Postulating the $1/N$ Scaling
The heart of our toy model is how these individual influences combine. We can motivate the specific form of this combination by considering a common pattern in neural architectures: the normalized aggregation of information from multiple sources. For instance, attention mechanisms weight and sum value vectors, and Mixture of Experts (MoE) models combine expert outputs using gating weights that often sum to one (or are normalized).
Let’s model a scenario where $N$ distinct contextual cues have been processed into intermediate representations (e.g., hidden states from different attention heads or parallel processing pathways), denoted $\mathbf{h}_1, \dots, \mathbf{h}_N$. These are then combined using weights $w_i$ before a final linear projection $W_{\text{out}}$ maps them to the logit space:

$$\mathbf{z}_{\text{raw}} = W_{\text{out}} \left( \sum_{i=1}^{N} w_i \mathbf{h}_i \right)$$
The weights $w_i$ could be generated by a softmax normalization of scores $s_i$ for each cue (i.e., $w_i = \exp(s_i) / \sum_{j=1}^{N} \exp(s_j)$). If, for the specific set of broad contextual cues being aggregated, their scores are approximately equal (e.g., they are deemed undifferentiated in their relevance for this particular aggregation step, or the scoring mechanism is operating in a regime of low sensitivity for these cues), then $w_i \approx 1/N$ for all $i$.
Under this condition of roughly uniform weighting, the raw combined logit vector becomes:

$$\mathbf{z}_{\text{raw}} \approx \frac{1}{N} \sum_{i=1}^{N} W_{\text{out}} \mathbf{h}_i$$
Based on this, we define the “effective logit influence vector” of cue $i$ as its contribution after processing by $W_{\text{out}}$, centered around its mean effect: $\mathbf{v}_i = W_{\text{out}} \mathbf{h}_i - \mathbb{E}[W_{\text{out}} \mathbf{h}_i]$. Assuming the final logit vector $\mathbf{z}$ that we model is also centered (i.e., its expectation is zero, consistent with using centered $\mathbf{v}_i$ for easier variance analysis later), it is then postulated to be the arithmetic mean of these centered effective logit influence vectors:

$$\mathbf{z} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{v}_i$$
This explicit $1/N$ scaling of the centered influences is crucial. It means that as more information comes in, each individual piece of information has proportionally less say in the deviations from the mean logit, preventing any single piece from unduly dominating the variance. If we just summed the centered influences (a scaling by $1$), the variance of the sum would typically grow with $N$. If we scaled by $1/\sqrt{N}$ (common when averaging independent, identically distributed random variables), the variance of the sum would stay constant if the influences were uncorrelated. The $1/N$ scaling, however, is a stronger form of down-weighting that, as we’ll see, can actively reduce the variance of the combined logits under certain conditions.
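To make these three scaling regimes concrete, here is a small Monte Carlo sketch assuming i.i.d. standard-normal influences (so the single-feature variance is about 1); the trial counts and sizes are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
V, trials = 400, 200  # vocabulary size and Monte Carlo trials (illustrative)

def mean_sample_var(N, scale):
    """Average sample variance (across the vocabulary) of the combined
    logit vector, for i.i.d. zero-mean influences and a given scaling."""
    vs = rng.normal(size=(trials, N, V))   # centered influence vectors v_i
    z = scale(N) * vs.sum(axis=1)          # combined logits, one row per trial
    return z.var(axis=1).mean()            # estimate of E[S^2(z)]

for N in (1, 4, 16, 64):
    print(f"N={N:3d}",
          f"sum: {mean_sample_var(N, lambda n: 1.0):7.2f}",            # grows ~ N
          f"1/sqrt(N): {mean_sample_var(N, lambda n: n ** -0.5):5.2f}",  # ~ constant
          f"1/N: {mean_sample_var(N, lambda n: 1.0 / n):5.3f}")        # shrinks ~ 1/N
```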
Simplifying Assumptions for Logit Influences ($\mathbf{v}_i$)
To make our model tractable, we’ll make a few statistical assumptions about these influence vectors $\mathbf{v}_i$:
- Zero Mean Influence: On average, each $\mathbf{v}_i$ is zero ($\mathbb{E}[\mathbf{v}_i] = \mathbf{0}$). This just means we’re looking at influences as deviations from some baseline, which simplifies our math, especially variance calculations.
- Common Covariance: The “shape” of the random fluctuation of each $\mathbf{v}_i$ is the same, described by a common covariance matrix: $\mathrm{Cov}(\mathbf{v}_i) = \Sigma$ for all $i$.
- Inter-Feature Covariance: The way influences from different features ($i$ and $j$, with $i \neq j$) vary together is described by $\mathrm{Cov}(\mathbf{v}_i, \mathbf{v}_j) = \Sigma_{ij}$.
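One simple way to realize these assumptions in a simulation is a shared-component construction. The sketch below assumes $\Sigma = I$ and $\Sigma_{ij} = \rho \Sigma$ for a single scalar $\rho \in [0, 1]$; both choices are illustrative simplifications for the sketch, not part of the model itself:

```python
import numpy as np

def sample_influences(N, V, rho, rng):
    """Draw v_1..v_N with E[v_i] = 0, Cov(v_i) = I (a stand-in for Sigma),
    and Cov(v_i, v_j) = rho * I for i != j, via a shared component."""
    shared = rng.normal(size=V)         # component common to every feature
    private = rng.normal(size=(N, V))   # fresh component per feature
    return np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * private

vs = sample_influences(N=5, V=100, rho=0.3, rng=np.random.default_rng(2))
```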
Deriving the Expected Variance of Logits
Our main goal is to see how the “spread” of the final logit values $z_k$ across the vocabulary changes as $N$ increases. We measure this spread using the sample variance: $S^2(\mathbf{z}) = \frac{1}{V} \sum_{k=1}^{V} (z_k - \bar{z})^2$, where $\bar{z} = \frac{1}{V} \sum_{k=1}^{V} z_k$ is the average logit value.
- Covariance of the Scaled Logit Vector ($\mathbf{z}$): We start with our formula $\mathbf{z} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{v}_i$. Using standard properties of covariance (specifically, its bilinearity and how it behaves with sums of random vectors), we find the covariance matrix of $\mathbf{z}$:

  $$\mathrm{Cov}(\mathbf{z}) = \frac{1}{N^2} \, \mathrm{Cov}\left( \sum_{i=1}^{N} \mathbf{v}_i \right)$$

  Expanding the covariance of the sum (akin to $\mathrm{Var}(A + B) = \mathrm{Var}(A) + \mathrm{Var}(B) + 2 \, \mathrm{Cov}(A, B)$, but for vectors):

  $$\mathrm{Cov}\left( \sum_{i=1}^{N} \mathbf{v}_i \right) = \sum_{i=1}^{N} \mathrm{Cov}(\mathbf{v}_i) + \sum_{i \neq j} \mathrm{Cov}(\mathbf{v}_i, \mathbf{v}_j)$$

  Plugging in our assumptions ($\mathrm{Cov}(\mathbf{v}_i) = \Sigma$ and $\mathrm{Cov}(\mathbf{v}_i, \mathbf{v}_j) = \Sigma_{ij}$):

  $$\mathrm{Cov}(\mathbf{z}) = \frac{1}{N^2} \left( N \Sigma + \sum_{i \neq j} \Sigma_{ij} \right)$$
- Expected Sample Variance of Logits: Since we assumed $\mathbb{E}[\mathbf{v}_i] = \mathbf{0}$, it follows that $\mathbb{E}[z_k] = 0$ for any logit $z_k$. This simplifies things. The expected sample variance of the logits can be related to the covariance matrix using a function $f$:

  $$f(C) = \frac{1}{V} \operatorname{tr}(C) - \frac{1}{V^2} \mathbf{1}^\top C \, \mathbf{1}$$

  This function essentially measures the average variance of the components of a zero-mean random vector whose covariance is $C$. (It’s derived from the general sample variance formula $S^2(\mathbf{z}) = \frac{1}{V} \sum_{k} z_k^2 - \bar{z}^2$, which simplifies nicely when the mean is zero.) So, we have:

  $$\mathbb{E}[S^2(\mathbf{z})] = f(\mathrm{Cov}(\mathbf{z}))$$

  Because $f$ is a linear function of its matrix argument (it involves traces and linear combinations), we can write:

  $$\mathbb{E}[S^2(\mathbf{z})] = \frac{1}{N^2} \left( N f(\Sigma) + \sum_{i \neq j} f(\Sigma_{ij}) \right)$$
- Introducing Simplified Correlation: Let’s define $\sigma^2_{\text{single}} = f(\Sigma)$ as the inherent expected sample variance if we only had a single feature’s influence. To make the sum of cross-feature terms easier to handle, we make another simplifying assumption: we assume that the “shape” of these cross-covariances is somewhat similar, differing mainly by a scalar factor. Specifically, we let $f(\Sigma_{ij}) = \rho_{ij} \, \sigma^2_{\text{single}}$. Here, $\rho_{ij}$ is a scalar “effective correlation” that captures how much the typical spread pattern of influence from feature $i$ aligns with that of feature $j$, normalized by $\sigma^2_{\text{single}}$. Now, let $\bar{\rho}$ be the average of these effective correlations over all $N(N-1)$ distinct pairs of features (this term is zero if $N = 1$). The sum then becomes $\sum_{i \neq j} f(\Sigma_{ij}) = N(N-1) \, \bar{\rho} \, \sigma^2_{\text{single}}$.
- Final Result for Expected Logit Variance: Substituting this back into our equation for the expected sample variance:

  $$\mathbb{E}[S^2(\mathbf{z})] = \frac{1}{N^2} \left( N \sigma^2_{\text{single}} + N(N-1) \, \bar{\rho} \, \sigma^2_{\text{single}} \right)$$

  Factoring out $\sigma^2_{\text{single}}$ and simplifying the terms within the parentheses leads to:

  $$\mathbb{E}[S^2(\mathbf{z})] = \frac{\sigma^2_{\text{single}}}{N} \left( 1 + (N-1) \bar{\rho} \right)$$

  This looks a bit complicated, but it can be rewritten in a more insightful way:

  $$\mathbb{E}[S^2(\mathbf{z})] = \sigma^2_{\text{single}} \left( \frac{1 - \bar{\rho}}{N} + \bar{\rho} \right)$$

  This is our key result for how the expected spread of logits changes!
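We can sanity-check this closed form against the shared-component sampler sketched earlier. This is a Monte Carlo sketch, again assuming $\Sigma = I$ so that $\sigma^2_{\text{single}} \approx 1$ (exactly $1 - 1/V$, a negligible correction here); all settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
V, rho, trials = 300, 0.3, 1000  # illustrative settings

for N in (1, 2, 8, 32):
    shared = rng.normal(size=(trials, 1, V))
    private = rng.normal(size=(trials, N, V))
    vs = np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * private
    z = vs.mean(axis=1)                    # z = (1/N) * sum_i v_i
    empirical = z.var(axis=1).mean()       # Monte Carlo estimate of E[S^2(z)]
    predicted = (1.0 - rho) / N + rho      # the formula, with sigma_single^2 = 1
    print(f"N={N:3d}  empirical={empirical:.3f}  predicted={predicted:.3f}")
```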
What the Model Shows
This final equation is quite revealing. It tells us how the expected variance (or “spread”) of the logits across the vocabulary changes based on the number of contextual features ($N$) and their average effective correlation ($\bar{\rho}$):
- Independent Features ($\bar{\rho} = 0$): If the influences from different contextual features are effectively uncorrelated, then $\mathbb{E}[S^2(\mathbf{z})] = \sigma^2_{\text{single}} / N$.
  Interpretation: The logit variance shrinks proportionally to $1/N$. If new information is entirely “fresh” and unrelated to what the model has already processed, its primary effect is to reduce the logit spread. While the logits themselves become more concentrated around their mean, this doesn’t mean the model becomes more certain about a single token. Rather, as we’ll see when connecting to softmax, this reduced variance across the set of all logits leads to them being more similar to each other, resulting in a flatter probability distribution and thus greater uncertainty.
- Positively Correlated Features ($0 < \bar{\rho} < 1$): The variance still decreases as $N$ grows, thanks to the $\frac{1 - \bar{\rho}}{N}$ term, which still pushes it down. However, as $N$ gets very large, the variance doesn’t go to zero. Instead, it approaches a floor: $\bar{\rho} \, \sigma^2_{\text{single}}$.
  Interpretation: Shared or redundant information (positive correlation) limits how much the $1/N$ scaling can reduce logit variance. While the variance still decreases with $N$ (as long as $\bar{\rho} < 1$), it approaches the floor of $\bar{\rho} \, \sigma^2_{\text{single}}$. This means that even with a large amount of information, if it’s partially correlated, the logits won’t become as similar to each other (their differences won’t shrink as much) as they would with purely independent information. The resulting softmax distribution, while potentially becoming flatter and indicating more uncertainty compared to small $N$ (if the initial $\sigma^2_{\text{single}}$ was high), will not flatten indefinitely as it does when $\bar{\rho} = 0$.
- Perfectly Correlated Features ($\bar{\rho} = 1$): If all contextual influences are perfectly correlated in their effect on logit spread, then $\mathbb{E}[S^2(\mathbf{z})] = \sigma^2_{\text{single}}$.
  Interpretation: The expected logit variance remains constant at $\sigma^2_{\text{single}}$, regardless of $N$. The $1/N$ scaling, in this scenario, effectively processes the same underlying signal repeatedly. The logit spread (and thus the similarity between logits) doesn’t change from the single-feature case. This implies that adding more, perfectly redundant information neither increases nor decreases the model’s certainty or uncertainty (as reflected by the flatness of the softmax distribution) compared to having just one piece of that information.
The takeaway is this: as long as new contextual features bring at least some new, uncorrelated influence ($\bar{\rho} < 1$), our $1/N$ scaling mechanism will cause the expected variance of the logits to decrease as context richness grows.
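Plugging illustrative numbers into the key formula (with $\sigma^2_{\text{single}} = 1$, an arbitrary normalization) makes the three regimes easy to see side by side:

```python
def expected_logit_variance(N, rho_bar, sigma2_single=1.0):
    """E[S^2(z)] from the toy model's key result."""
    return sigma2_single * ((1.0 - rho_bar) / N + rho_bar)

for rho_bar in (0.0, 0.3, 1.0):
    row = [expected_logit_variance(N, rho_bar) for N in (1, 4, 16, 64)]
    print(f"rho_bar={rho_bar:.1f}:", [round(v, 3) for v in row])
# rho_bar=0.0 shrinks toward 0; rho_bar=0.3 levels off near its floor of 0.3;
# rho_bar=1.0 stays at 1.0 for every N.
```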
Quantifying the Flatness of the Final Distribution
We focus on the common scenario of partially correlated features ($0 < \bar{\rho} < 1$) to see how increasing context richness ($N$) can lead to a flatter softmax distribution, potentially increasing hallucination. Recall that $\bar{\rho}$ is the effective average correlation influencing logit differences.
To quantify this flattening more directly, we can consider the expected Chi-Squared distance, $\mathbb{E}[\chi^2(P \,\|\, U)]$, between the model’s softmax output distribution $P = \mathrm{softmax}(\mathbf{z})$ and a perfectly uniform distribution $U$ (with $U_k = 1/V$ for every word $k$).

When the logits are relatively small (which occurs when their variance is small, as for large $N$), we can approximate $e^{z_k} \approx 1 + z_k$, which gives $P_k \approx \frac{1}{V} (1 + z_k - \bar{z})$. This leads to:

$$\chi^2(P \,\|\, U) = V \sum_{k=1}^{V} \left( P_k - \frac{1}{V} \right)^2 \approx \frac{1}{V} \sum_{k=1}^{V} (z_k - \bar{z})^2 = S^2(\mathbf{z})$$

Thus, the expected deviation from a uniform distribution scales with the expected variance of the logits themselves:

$$\mathbb{E}[\chi^2(P \,\|\, U)] \approx \mathbb{E}[S^2(\mathbf{z})]$$

For the partially correlated case ($0 < \bar{\rho} < 1$), this means:

$$\mathbb{E}[\chi^2(P \,\|\, U)] \approx \sigma^2_{\text{single}} \left( \frac{1 - \bar{\rho}}{N} + \bar{\rho} \right)$$
A smaller $\chi^2$ indicates that $P$ is closer to uniform (flatter). As $N$ increases, this value decreases (if $\bar{\rho} < 1$), signifying a flatter distribution and thus higher model uncertainty, approaching a limit set by $\bar{\rho} \, \sigma^2_{\text{single}}$.

This argument shows that for partially correlated features, increasing context richness ($N$) causes the variance of logit differences to decrease (due to the $\frac{1 - \bar{\rho}}{N}$ term), increasing the likelihood of logits being numerically close. This, in turn, leads to a flatter softmax probability distribution, indicating higher model uncertainty, although the extent of this flattening is limited by the non-zero correlation (via the $\bar{\rho} \, \sigma^2_{\text{single}}$ floor).
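To see the flattening end to end, the sketch below pushes sampled logits through a softmax and measures the chi-squared distance to uniform. It reuses the shared-component sampler with its $\Sigma = I$, $\Sigma_{ij} = \rho I$ assumptions; note the approximation $\chi^2 \approx S^2(\mathbf{z})$ is only tight in the small-logit regime (large $N$ here), and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
V, rho, trials = 200, 0.3, 500  # illustrative settings

def chi2_to_uniform(z):
    """Chi-squared distance between softmax(z) and the uniform distribution."""
    p = np.exp(z - z.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return V * ((p - 1.0 / V) ** 2).sum(axis=-1)

for N in (1, 4, 16, 64):
    shared = rng.normal(size=(trials, 1, V))
    private = rng.normal(size=(trials, N, V))
    z = (np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * private).mean(axis=1)
    # Values decrease with N toward a floor set (approximately) by rho.
    print(f"N={N:3d}  E[chi2(P||U)] ~ {chi2_to_uniform(z).mean():.3f}")
```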
Conclusion
This toy model, centered on the idea of $1/N$ scaling of logit influences from various contextual features, paints a plausible picture of how increasing context richness ($N$) could, somewhat counterintuitively, lead to trouble. The key result, $\mathbb{E}[S^2(\mathbf{z})] = \sigma^2_{\text{single}} \left( \frac{1 - \bar{\rho}}{N} + \bar{\rho} \right)$, shows that the expected spread of pre-softmax logits tends to decrease as more features are added, especially if these features are diverse (low average correlation $\bar{\rho}$). This reduction in logit variance makes the logits cluster together, leading to a flatter softmax probability distribution. This flatter distribution signifies higher model uncertainty, offering a mathematical pathway to understanding why an LLM might become more prone to hallucination when its context window is filled with a large amount of information. While a simplification, this model provides a conceptual framework for thinking about the delicate balance of information aggregation in LLMs.