Stefan F. Schouten
@stefanfs.me
PhD candidate at CLTL, VU Amsterdam. Prev. Research Intern at Huawei.
stefanfs.me
In the paper, we also explain:
(1) how Contrastive Eigenproblems take inspiration from, and in turn help explain, Contrast-Consistent Search, a well-known Contrastive Probing method, and
(2) why COPA does not isolate a single direction.
Plus some further theoretical results.

arxiv.org/abs/2511.02089
LLM Probing with Contrastive Eigenproblems: Improving Understanding and Applicability of CCS
Contrast-Consistent Search (CCS) is an unsupervised probing method able to test whether large language models represent binary features, such as sentence truth, in their internal activations. While CC...
December 3, 2025 at 3:33 PM
We can also use multiple types of contrast and solve one Contrastive Eigenproblem for all of them. When applying this to the 'sentence polarity' and 'sentence truth' features, the top eigenvectors yield directions for both, as well as the polarity-sensitive truth direction first found by Bürger et al.
December 3, 2025 at 3:33 PM
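To make the idea concrete, here is a minimal numpy sketch (not the paper's exact formulation; a simplified PCA-on-differences variant) that pools contrastive differences from several contrast types and solves one eigenproblem for all of them:

```python
import numpy as np

def joint_contrast_directions(contrast_sets, k=3):
    """Pool centered contrastive differences from several contrast types
    (e.g., polarity flips and truth flips) and solve a single
    eigenproblem over all of them; the top eigenvectors can then pick
    out one direction per feature."""
    diffs = []
    for pos, neg in contrast_sets:
        d = pos - neg                     # (n, dim) differences for one contrast type
        diffs.append(d - d.mean(axis=0))  # center each contrast type separately
    D = np.vstack(diffs)                  # pool all difference vectors
    cov = D.T @ D / len(D)                # (dim, dim) covariance of pooled differences
    evals, evecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    return evals[::-1][:k], evecs[:, ::-1][:, :k]  # top-k, descending
```

Here `contrast_sets` is a list of (pos, neg) activation arrays of shape (n, dim), one entry per type of contrast.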
But how do we know we succeeded? What if the model does not represent features the way we think it will?

Solving the Contrastive Eigenproblem gives eigenvalues that show how many contrastive directions are captured by your activations.

Below, only 'amazon' isolates a single direction.
December 3, 2025 at 3:33 PM
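The eigenvalue spectrum is the diagnostic itself. A toy check under the same simplified formulation as the sketch above: activations with exactly one hidden contrastive direction should produce one dominant eigenvalue.

```python
import numpy as np

# Synthetic activations with a single hidden ±1 feature along direction w.
rng = np.random.default_rng(0)
n, dim = 500, 64
w = rng.normal(size=dim)
w /= np.linalg.norm(w)
labels = rng.integers(0, 2, size=n) * 2 - 1   # hidden labels, never shown to the probe
base = rng.normal(size=(n, dim))
pos = base + labels[:, None] * w + 0.1 * rng.normal(size=(n, dim))
neg = base - labels[:, None] * w + 0.1 * rng.normal(size=(n, dim))

# Spectrum of the (centered) contrastive differences.
diffs = pos - neg
diffs -= diffs.mean(axis=0)
evals, evecs = np.linalg.eigh(diffs.T @ diffs / n)
evals, evecs = evals[::-1], evecs[:, ::-1]
print(np.round(evals[:5], 2))           # first eigenvalue dominates the rest
print(round(abs(evecs[:, 0] @ w), 3))   # top eigenvector recovers the planted direction
```

A flatter spectrum, as for the other datasets in the figure, means the activations capture more than one contrastive direction.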
Contrastive probing methods use pairs of activations with no further supervision. The goal is for each pair to be based on inputs which differ in exactly one way: one has the feature of interest, and the other does not (without needing to know which).
December 3, 2025 at 3:33 PM
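A sketch of the pair construction itself, with a stand-in activation function (hypothetical; a real run would read hidden states from an actual model):

```python
import zlib
import numpy as np

def get_activation(prompt: str, dim: int = 64) -> np.ndarray:
    """Stand-in extractor: in practice this would return, e.g., a
    last-token hidden state from some layer of an LLM. Here the prompt
    is hashed into a deterministic vector so the sketch runs on its own."""
    seed = zlib.crc32(prompt.encode())
    return np.random.default_rng(seed).normal(size=dim)

def build_contrast_pairs(statements):
    """Each pair is built from two inputs that differ in exactly one
    way: one asserts the statement, the other denies it. We never need
    to know which member of the pair is actually correct."""
    pos = np.stack([get_activation(f"{s} Yes.") for s in statements])
    neg = np.stack([get_activation(f"{s} No.") for s in statements])
    return pos, neg

pos, neg = build_contrast_pairs(["Paris is in France.", "2 + 2 = 5."])
print(pos.shape, neg.shape)  # (2, 64) (2, 64)
```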
Finally, when intervening on hidden states, we find that the truth-value directions identified are causal mediators in the inference process.
July 14, 2025 at 2:55 PM
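A hedged sketch of the kind of intervention meant here, assuming a probe direction is already in hand (the actual experiments patch the modified state back into the model's forward pass):

```python
import numpy as np

def intervene(hidden: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Shift a hidden state along a unit-norm probe direction. In a
    causal-mediation test, the shifted state is patched into the
    forward pass and we check whether downstream inferences move with
    it (e.g., conclusions drawn from the edited 'belief')."""
    direction = direction / np.linalg.norm(direction)
    return hidden + alpha * direction

rng = np.random.default_rng(2)
h = rng.normal(size=64)          # a hidden state
truth_dir = rng.normal(size=64)  # a (hypothetical) truth-value direction
h_true = intervene(h, truth_dir, alpha=+5.0)   # push toward 'true'
h_false = intervene(h, truth_dir, alpha=-5.0)  # push toward 'false'
```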
Even directions identified from single sentences show some sensitivity to the context, but sensitivity increases when probes are based on examples where sentences appear in inferential contexts.
July 14, 2025 at 2:55 PM
Regardless of probing method and dataset, truth-value directions are found to be sensitive to context. However, we also find they are sensitive to the presence of irrelevant information.
July 14, 2025 at 2:55 PM
We use probing techniques that identify directions in the model's latent space that encode whether sentences are more or less likely to be true. By manipulating inputs and hidden states, we evaluate whether probabilities update in appropriate ways.
July 14, 2025 at 2:55 PM
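For illustration, one simple probing method that yields such a direction is a mass-mean probe (used here as a stand-in; the paper compares several probing methods): the line from the mean activation of false sentences to the mean activation of true ones.

```python
import numpy as np

def mass_mean_direction(acts: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Direction from the mean activation of false sentences to the
    mean activation of true ones; projecting a new activation onto it
    scores the corresponding sentence as more or less likely true."""
    labels = labels.astype(bool)
    w = acts[labels].mean(axis=0) - acts[~labels].mean(axis=0)
    return w / np.linalg.norm(w)

def truth_score(hidden: np.ndarray, w: np.ndarray) -> float:
    return float(hidden @ w)  # higher = more 'true'-like under this probe
```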