Casper Kerrén
@ckerren.bsky.social
Postdoc Max Planck Institute for Human Cognitive and Brain Sciences.
Critically, inference stretches neural distances along relevant dimensions and compresses them along irrelevant ones right before a decision. This re-shaping predicts faster RTs, and it precedes feedback-related frontal theta that tracks the model-derived prediction error (PE).
January 8, 2026 at 7:46 AM
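One common way this kind of geometry change is quantified (not necessarily the paper's exact pipeline) is to project trial-wise neural patterns onto feature-coding axes and compare how well trial classes separate along relevant vs irrelevant axes. A minimal sketch with synthetic data; all names and numbers below are hypothetical:

```python
import numpy as np

def axis_separation(patterns, labels, axis):
    """Distance between the two label groups after projecting patterns onto a feature axis."""
    proj = patterns @ (axis / np.linalg.norm(axis))
    return abs(proj[labels == 1].mean() - proj[labels == 0].mean())

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 64
relevant_axis = rng.standard_normal(n_channels)
irrelevant_axis = rng.standard_normal(n_channels)
labels = rng.integers(0, 2, n_trials)

# Synthetic patterns: only the relevant feature separates the two trial groups.
patterns = rng.standard_normal((n_trials, n_channels)) + np.outer(labels, relevant_axis)

print(f"separation along relevant axis:   {axis_separation(patterns, labels, relevant_axis):.2f}")
print(f"separation along irrelevant axis: {axis_separation(patterns, labels, irrelevant_axis):.2f}")
```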
The brain’s representational space flexes with inferred complexity.

Neural effective dimensionality scales up in 2D vs 1D contexts and is higher on correct than on incorrect trials. In 2D, the two attended features show up as near-orthogonal axes within a shared planar manifold.
6/8
January 8, 2026 at 7:46 AM
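For readers wanting the math behind "effective dimensionality" and "near-orthogonal axes": a common definition of effective dimensionality is the participation ratio of the covariance eigenvalues, and orthogonality can be read off from the angle between feature-coding directions. A generic sketch with synthetic data, not the study's exact estimator:

```python
import numpy as np

def effective_dimensionality(X):
    """Participation ratio: (sum of eigenvalues)^2 / sum of squared eigenvalues of the covariance."""
    eig = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0, None)
    return eig.sum() ** 2 / (eig ** 2).sum()

def axis_angle_deg(a, b):
    """Angle between two feature-coding directions (0 deg = collinear, 90 deg = orthogonal)."""
    cos = abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, 0, 1)))

rng = np.random.default_rng(1)
X_1d = rng.standard_normal((300, 1)) @ rng.standard_normal((1, 64))  # activity spans 1 latent dimension
X_2d = rng.standard_normal((300, 2)) @ rng.standard_normal((2, 64))  # activity spans 2 latent dimensions
print(effective_dimensionality(X_1d), effective_dimensionality(X_2d))  # ~1 vs close to 2

axis_colour, axis_shape = rng.standard_normal(64), rng.standard_normal(64)
print(f"angle between feature axes: {axis_angle_deg(axis_colour, axis_shape):.0f} deg")  # ~90 for independent axes
```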
Eyes tell the same story
Gaze selectively shifts toward task-relevant features while irrelevant features drop out. Gaze entropy decreases as beliefs stabilise, and negative prediction errors from the HSI model trigger broader sampling (exploration), while positive PEs tighten focus (exploitation).
5/8
January 8, 2026 at 7:46 AM
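Gaze entropy here is the Shannon entropy of how looking time is distributed across the stimulus features: broad sampling gives high entropy, focusing on one relevant feature gives low entropy. A toy sketch (the dwell proportions are made up):

```python
import numpy as np

def gaze_entropy(dwell_times):
    """Shannon entropy (bits) of the distribution of dwell time across features."""
    p = np.asarray(dwell_times, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                        # ignore features that were never fixated
    return -(p * np.log2(p)).sum()

print(gaze_entropy([0.3, 0.4, 0.3]))    # broad sampling over 3 features -> ~1.57 bits
print(gaze_entropy([0.9, 0.05, 0.05]))  # focus on one relevant feature  -> ~0.57 bits
```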
A Hidden State Inference (HSI) model best explained choices and inferred contexts, beating Q-learning variants (standard, forgetting, counterfactual).

HSI captures something structurally different from incremental RL.
4/8
January 8, 2026 at 7:46 AM
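The structural difference, roughly: Q-learning nudges a value estimate toward each outcome, whereas hidden-state inference keeps a posterior over discrete contexts and can flip it abruptly when the evidence turns. A toy sketch of the two update rules, not the fitted models from the paper; all parameters are placeholders:

```python
import numpy as np

def q_update(q, reward, alpha=0.3):
    """Incremental RL: move the value estimate toward the outcome by a fixed learning rate."""
    return q + alpha * (reward - q)

def hsi_update(posterior, reward, p_reward, hazard=0.1):
    """Hidden-state inference: Bayes rule over two contexts with a small switch (hazard) prior."""
    prior = (1 - hazard) * posterior + hazard * posterior[::-1]   # contexts either persist or swap
    like = p_reward if reward == 1 else 1 - p_reward
    post = prior * like
    return post / post.sum()

# Context 0: the chosen feature pays off 80% of the time; context 1: only 20%.
p_reward = np.array([0.8, 0.2])
posterior, q = np.array([0.5, 0.5]), 0.5
for reward in [1, 1, 0, 0, 0]:            # the rule silently flips mid-sequence
    q = q_update(q, reward)
    posterior = hsi_update(posterior, reward, p_reward)
    print(f"Q = {q:.2f}   P(context) = {np.round(posterior, 2)}")
```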
Participants adapted fast: the first trial after a switch was at chance, followed by rapid recovery. RTs dropped and accuracy rose within context blocks - they used the structure to make decisions.
3/8
January 8, 2026 at 7:46 AM
Serial reversal learning task with the same cars and the same feature space (3 dimensions), but the rule silently flips. Different dimensions matter on different trials: sometimes one dimension matters, sometimes two. You only find out via feedback, so participants had to infer the latent state.
2/8
January 8, 2026 at 7:46 AM
New preprint: Inference over hidden contexts shapes the geometry of conceptual knowledge for flexible behaviour.

In this pre-registered study, our core claim is that we don't just learn stimulus–reward associations. We infer the hidden context, and that inference re-wires attention and the neural state space on the fly.
1/8
January 8, 2026 at 7:46 AM
📄 In our new paper, we argue:
The best retrieval cue matches the memory as it is now,
not just how it was encoded.

Always a pleasure working with @lindedomingo.bsky.social.

Amusing summary below courtesy of ChatGPT:
May 7, 2025 at 8:35 PM
🔗 A synchrony bridge?
We found theta–gamma phase–amplitude coupling (TG-PAC) between hippocampus and cortex right after ripples.

TG-PAC peaks before cortical expansion, suggesting it may help coordinate the shift from compressed hippocampal codes to expanded cortical states.
April 29, 2025 at 6:00 AM
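For context, TG-PAC is commonly quantified with something like Tort et al.'s modulation index: bin the gamma-band amplitude by theta phase and measure how far the resulting distribution deviates from uniform. A self-contained sketch on a synthetic signal, not the paper's exact pipeline:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(40, 80), n_bins=18):
    """Tort-style MI: normalised KL divergence of phase-binned gamma amplitude from uniform."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    bins = np.digitize(phase, np.linspace(-np.pi, np.pi, n_bins + 1)) - 1
    mean_amp = np.array([amp[bins == b].mean() for b in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    return (np.log(n_bins) + (p * np.log(p)).sum()) / np.log(n_bins)

# Synthetic signal: gamma amplitude waxes at the theta peak, so MI should be clearly above zero.
fs = 1000
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
gamma = 0.3 * (1 + theta) * np.sin(2 * np.pi * 60 * t)
sig = theta + gamma + 0.1 * np.random.default_rng(2).standard_normal(t.size)
print(f"modulation index: {modulation_index(sig, fs):.4f}")
```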
🌐 After ripples, the brain’s state space unfolds.

Cortical dimensionality expands — neural patterns spread apart, making memories easier to decode.
More expansion → Faster retrieval and more reinstatement.

This raised a question:
🧠 What mechanism is driving this cortical transformation?
April 29, 2025 at 6:00 AM
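One way such expansion can be made time-resolved (a hedged sketch, not the study's pipeline) is to compute the same participation-ratio measure of dimensionality in short windows aligned to ripple onsets and compare pre vs post; the array names and sizes below are hypothetical:

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of an activity window (samples x channels)."""
    eig = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0, None)
    return eig.sum() ** 2 / (eig ** 2).sum()

def ripple_locked_dimensionality(data, ripple_samples, fs, win_s=0.2):
    """Mean participation ratio in the window before vs after each ripple onset."""
    half = int(win_s * fs)
    pre = [participation_ratio(data[i - half:i]) for i in ripple_samples]
    post = [participation_ratio(data[i:i + half]) for i in ripple_samples]
    return float(np.mean(pre)), float(np.mean(post))

# Synthetic example: cortical activity becomes higher-dimensional after the "ripple" at sample 2000.
rng = np.random.default_rng(3)
fs, n_ch = 1000, 32
low_d = rng.standard_normal((2000, 2)) @ rng.standard_normal((2, n_ch))
high_d = rng.standard_normal((2000, 8)) @ rng.standard_normal((8, n_ch))
data = np.concatenate([low_d, high_d])
pre, post = ripple_locked_dimensionality(data, ripple_samples=[2000], fs=fs)
print(f"pre-ripple dimensionality: {pre:.1f}, post-ripple: {post:.1f}")
```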
🧠 First, ripple characteristics:

More ripples on correct vs. incorrect trials 📈

Ripples cluster before memory responses ⏳

Timing suggests ripples help initiate retrieval, not just reflect it.
April 29, 2025 at 6:00 AM
During an associative memory task, we tracked how ripple events in the hippocampus related to cortical dynamics.

🌟 Hypothesis: Ripples trigger a shift from compressed to expanded neural representations in cortex, making memories readable again.
April 29, 2025 at 6:00 AM
🧠✨How do we rebuild our memories? In our new study, we show that hippocampal ripples kickstart a coordinated expansion of cortical activity that helps reconstruct past experiences.

We recorded iEEG from patients during memory retrieval... and found something really cool 👇(thread)
April 29, 2025 at 6:00 AM
By examining cross-species evidence, we highlight neural mechanisms that may support episodic memory and identify crucial questions for future research.
February 14, 2025 at 9:16 AM
We put forward a hypothesis for how dimensionality reduction and expansion enable the brain to encode, store and retrieve a vast amount of episodic memory information. Reduction compresses sensory input into simplified, storable codes, while expansion reconstructs vivid details.
February 14, 2025 at 9:16 AM
What began as a slightly intoxicated walk on Huntington Beach in California with my buddy @benjamingriffiths.bsky.social in the summer of 2023 ended in a published Opinion paper today in @cp-trendscognsci.bsky.social, together with Daniel Reznik and Christian Doeller at @mpicbs.bsky.social.
February 14, 2025 at 9:16 AM