Stefan F. Schouten
@stefanfs.me · stefanfs.me
PhD candidate, CLTL, VU Amsterdam. Prev. Research Intern, Huawei.
We can also use multiple types of contrast and solve one Contrastive Eigenproblem for all of them. Applying this to the 'sentence polarity' and 'sentence truth' features, the top eigenvectors yield directions for both features, as well as the polarity-sensitive truth direction first found by Bürger et al.
December 3, 2025 at 3:33 PM
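As a rough illustration of the idea (a sketch, not the paper's exact formulation): one way to set up such a problem is to stack the activation differences from every contrast type and eigendecompose their second-moment matrix, so a single solve surfaces directions for all features at once. All names and data below are synthetic stand-ins.

```python
import numpy as np

# Synthetic stand-in activations; rows are d-dimensional hidden states.
# NOTE: this is a guessed formulation for illustration only.
rng = np.random.default_rng(0)
d, n = 64, 200
acts_pos_polarity = rng.normal(size=(n, d))  # positive-polarity sentences
acts_neg_polarity = rng.normal(size=(n, d))  # matched negative-polarity sentences
acts_true = rng.normal(size=(n, d))          # true sentences
acts_false = rng.normal(size=(n, d))         # matched false sentences

# Stack the contrast differences from both features into a single problem.
diffs = np.vstack([
    acts_pos_polarity - acts_neg_polarity,
    acts_true - acts_false,
])

# One eigendecomposition over all contrast types: the top eigenvectors are
# candidate directions for polarity, truth, and combinations of the two.
second_moment = diffs.T @ diffs / len(diffs)
eigvals, eigvecs = np.linalg.eigh(second_moment)
order = np.argsort(eigvals)[::-1]  # eigh returns ascending order; sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print("top eigenvalues:", np.round(eigvals[:5], 2))
```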
But how do we know we succeeded? What if the model does not represent features the way we expect?

Solving the Contrastive Eigenproblem yields eigenvalues that reveal how many contrastive directions your activations capture.

In the attached figure, only the 'amazon' dataset isolates a single direction.
December 3, 2025 at 3:33 PM
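To make reading the spectrum concrete, here is a minimal sketch using a mass-based cutoff; the threshold is an illustrative choice, not one from the paper. A spectrum dominated by one eigenvalue suggests a single isolated direction; several comparable eigenvalues suggest the probing data mixes features.

```python
import numpy as np

def num_directions(eigvals: np.ndarray, threshold: float = 0.1) -> int:
    """Count eigenvalues carrying more than `threshold` of the total mass.
    The 0.1 cutoff is an illustrative choice, not one from the paper."""
    mass = eigvals / eigvals.sum()
    return int((mass > threshold).sum())

# One dominant eigenvalue -> a single isolated direction.
print(num_directions(np.array([9.0, 0.4, 0.3, 0.2, 0.1])))  # 1
# Two comparable eigenvalues -> the probing data captures two features.
print(num_directions(np.array([5.0, 4.0, 0.5, 0.3, 0.2])))  # 2
```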
How do we know our contrastive probing data identifies a unique feature?
How can we identify directions that model combinations of features?

We propose Contrastive Eigenproblems to tackle both of these issues.

Come see the poster at the MechInterp Workshop @ NeurIPS this Sunday!
December 3, 2025 at 3:33 PM
Finally, when intervening on hidden states, we find that the identified truth-value directions are causal mediators in the inference process.
July 14, 2025 at 2:55 PM
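A minimal sketch of what such an intervention can look like, assuming direct edits to a layer's hidden state along a probed direction; the function names are hypothetical, and real experiments would hook into the model's forward pass rather than edit raw arrays.

```python
import numpy as np

def steer(hidden: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Shift a hidden state along a probed truth-value direction.
    If the direction is a causal mediator, alpha > 0 should make the model
    treat the sentence as more likely true, and alpha < 0 as less likely."""
    unit = direction / np.linalg.norm(direction)
    return hidden + alpha * unit

def set_along(hidden: np.ndarray, direction: np.ndarray, value: float) -> np.ndarray:
    """Overwrite the component of a hidden state along the direction,
    e.g. to patch in the value observed for a contrasting input."""
    unit = direction / np.linalg.norm(direction)
    return hidden + (value - hidden @ unit) * unit

# Toy usage on an 8-dimensional hidden state.
h = np.ones(8)
d = np.zeros(8); d[0] = 1.0
print(steer(h, d, 2.0)[0])      # 3.0: component along the direction increased
print(set_along(h, d, 0.0)[0])  # 0.0: component along the direction removed
```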
Even directions identified from single sentences show some sensitivity to context, and this sensitivity increases when probes are based on examples where sentences appear in inferential contexts.
July 14, 2025 at 2:55 PM
Regardless of probing method and dataset, the truth-value directions are sensitive to context. However, they are also sensitive to the presence of irrelevant information.
July 14, 2025 at 2:55 PM
We use probing techniques that identify directions in the model's latent space that encode whether sentences are more or less likely to be true. By manipulating inputs and hidden states, we evaluate whether probabilities update in appropriate ways.
July 14, 2025 at 2:55 PM
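For illustration, here is a minimal sketch of one common probing recipe (difference of class means, i.e. mass-mean probing) on synthetic data; the paper compares several probing methods, and none of the names or numbers below come from it.

```python
import numpy as np

# Synthetic setup: plant a "truth" direction in random hidden states.
rng = np.random.default_rng(0)
d, n = 64, 500
planted = rng.normal(size=d)

# Hypothetical hidden states for sentences labeled true vs. false.
h_true = rng.normal(size=(n, d)) + planted
h_false = rng.normal(size=(n, d)) - planted

# The probe direction is the difference of the class means.
probe = h_true.mean(axis=0) - h_false.mean(axis=0)
probe /= np.linalg.norm(probe)

# Score sentences by projecting their hidden state onto the direction:
# larger projections mean "more likely true".
print("mean score (true): ", (h_true @ probe).mean())
print("mean score (false):", (h_false @ probe).mean())
```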
📢 Our paper on 'Truth-value Judgment in LLMs' was accepted to @colmweb.org #COLM2025!

In this paper, we investigate how LLMs keep track of the truth of sentences when reasoning.
July 14, 2025 at 2:55 PM