Tobias Gerstenberg
@tobigerstenberg.bsky.social
Tea drinking assistant professor of cognitive psychology at Stanford.
https://cicl.stanford.edu
fantastic!! congratulations 🎉
November 26, 2025 at 12:14 AM
These results suggest that people build internal causal models that abstract away irrelevant information. Through surprise tests, we gain insights into what these models look like, finding that they shape memory, prediction, and generalization.

w/ Steven Shin, Chuqi Hu & @paulm-k.bsky.social
October 24, 2025 at 7:15 PM
We develop a causal abstraction model that infers a causal story of how the data were generated, paying more attention to factors that mattered for the prediction task. This model captures participants' generalization judgments better than a feature-based model, despite having far fewer parameters.
October 24, 2025 at 7:15 PM
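[A minimal Python sketch of the contrast this post describes, not the authors' code: a feature-based learner memorizes outcomes for every feature combination, while a causal-abstraction learner keeps only the diagnostic factor and so generalizes to novel situations with fewer parameters. The trial encoding, color values, and function names are illustrative assumptions.]

from collections import defaultdict

# Assumed encoding: each trial is (cube_color, ramp_color, crossed_line),
# and only cube color is actually diagnostic of the outcome.
trials = [
    ("red", "blue", True),
    ("red", "green", True),
    ("gray", "blue", False),
    ("gray", "green", False),
]

# Feature-based model: memorize outcome frequencies for every
# feature combination; many parameters, no abstraction.
feature_table = defaultdict(list)
for cube, ramp, crossed in trials:
    feature_table[(cube, ramp)].append(crossed)

# Causal-abstraction model: posit one latent cause (here indexed by
# cube color) and drop the irrelevant ramp color entirely.
cause_table = defaultdict(list)
for cube, _ramp, crossed in trials:
    cause_table[cube].append(crossed)

def predict_feature(cube, ramp):
    outcomes = feature_table.get((cube, ramp))
    return None if not outcomes else sum(outcomes) / len(outcomes)

def predict_causal(cube, ramp):
    outcomes = cause_table.get(cube)
    return None if not outcomes else sum(outcomes) / len(outcomes)

# On a novel ramp color, the feature model has no prediction,
# while the causal model generalizes from cube color alone.
print(predict_feature("red", "purple"))  # None
print(predict_causal("red", "purple"))   # 1.0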
This time, we also asked participants to predict what would happen in novel situations. For example, we showed them two familiar cubes on a novel ramp. These generalization trials also featured ramps facing the opposite direction from those they had seen before.
October 24, 2025 at 7:15 PM
In Exp 3, participants viewed either forward-facing or backward-facing ramps; the cubes always ended up on the right side. Again, after learning to predict which cubes cross the finish line, we surprised participants by asking where exactly the cubes would end up. They made errors similar to those in Exp 2.
October 24, 2025 at 7:15 PM
Exp 2: Participants predict whether a cube on a ramp will cross a finish line. Either cube color or ramp color is diagnostic. A surprise question about exactly where the cube will end up reveals systematic errors: participants knew on which side of the line the cube would land, but not its exact location.
October 24, 2025 at 7:15 PM
Exp 1: Participants learn whether color or shape matters for turning on a machine. In a surprise test, we ask them what they saw last. They frequently misremember (e.g., choosing a differently colored object when only the shape mattered). This only happens once they have seen enough evidence to learn the rule!
October 24, 2025 at 7:15 PM
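[A toy simulation of the surprise-memory prediction in Exp 1, under the assumption that learners store only the diagnostic feature: when probed about the last object, the abstracted trace forces a guess on the irrelevant dimension, reproducing the misremembering pattern. The names, color/shape sets, and random-guess assumption are mine, not the paper's.]

import random

random.seed(0)

COLORS = ["red", "blue", "green"]

last_object = ("red", "cube")  # (color, shape) actually shown last

# Assumed abstraction: after learning that only shape matters,
# the stored trace keeps the shape and drops the color.
stored_trace = {"shape": last_object[1]}

def memory_probe(trace, n=1000):
    # Shape is recalled from the stored trace; color was never
    # encoded, so it must be guessed at the probe.
    errors = 0
    for _ in range(n):
        recalled = (random.choice(COLORS), trace["shape"])
        if recalled[0] != last_object[0]:
            errors += 1
    return errors / n

# Roughly 2/3 color errors: shape is retained, color is misremembered.
print(memory_probe(stored_trace))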
When people are asked to predict what happens next, do they learn simple feature-outcome mappings, or do they learn causal models that capture the underlying generative process? If they do learn causal models, how can we tell? We ran 3 experiments (N=1080) using two paradigms to find out.
October 24, 2025 at 7:15 PM