Tim Behrens
behrenstimb.bsky.social
Slowly becoming a neuroscientist.
EiC @elife.bsky.social
Pinned
OK, if we are moving to Bluesky I am rescuing my favourite ever Twitter thread (Jan 2019).

The renamed:

Bluesky-sized history of neuroscience (biased by my interests)
Reposted by Tim Behrens
Our work with @georgkeller.bsky.social on testing predictive processing (PP) models in cortex is out on bioRxiv now! www.biorxiv.org/content/10.6... A short thread on our findings, and thoughts on where we should move on from PP, below.
A functional influence based circuit motif that constrains the set of plausible algorithms of cortical function
There are several plausible algorithms for cortical function that are specific enough to make testable predictions about the interactions between functionally identified cell types. Many of these algorithms are based on some variant of predictive processing. Here we set out to experimentally distinguish between two such predictive processing variants. A central point of variability between them lies in the proposed vertical communication between layer 2/3 and layer 5, which stems from diverging assumptions about the computational role of layer 5. One assumes a hierarchically organized architecture and proposes that, within a given node of the network, layer 5 conveys unexplained bottom-up input to prediction error neurons of layer 2/3. The other proposes a non-hierarchical architecture in which internal representation neurons of layer 5 provide predictions for the local prediction error neurons of layer 2/3. We show that the functional influence of layer 2/3 cell types on layer 5 is incompatible with the hierarchical variant, while the functional influence of layer 5 cell types on prediction error neurons of layer 2/3 is incompatible with the non-hierarchical variant. Given these data, we can constrain the space of plausible algorithms of cortical function. We propose a model for cortical function based on a combination of a joint embedding predictive architecture (JEPA) and predictive processing that makes experimentally testable predictions.

Competing Interest Statement: The authors have declared no competing interest.

Funding: Swiss National Science Foundation (https://ror.org/00yjd3n13); Novartis Foundation (https://ror.org/04f9t1x17); European Research Council (https://ror.org/0472cxd90), grant 865617.
www.biorxiv.org
January 30, 2026 at 2:37 PM
This is super cool!
We think cortex might function like a JEPA. It looks like prediction errors in layer 2/3 are not computed against input (as is the idea in predictive processing), but against a representation in latent space (i.e. like in a JEPA arxiv.org/abs/2301.08243 or RPL doi.org/10.1101/2025...).
January 30, 2026 at 6:26 PM
Reposted by Tim Behrens
The supplementary videos for this preprint are fantastic. Some wild examples of decoding the animal's attentional focus and/or intent
www.biorxiv.org/content/10.6...
January 28, 2026 at 2:04 PM
This is totally wild. Remember the object they are attending to is presented egocentrically, but the allocentric theta sweeps follow it. The whole system is wired up to provide something like an "integrated attention reflex".
The hippocampal map has its own attentional control signal!
Our new study reveals that theta #sweeps can be instantly biased towards behaviourally relevant locations. See 📹 in post 4/6 and preprint here 👉
www.biorxiv.org/content/10.6...
🧵(1/6)
Attention-like regulation of theta sweeps in the brain's spatial navigation circuit
Spatial attention supports navigation by prioritizing information from selected locations. A candidate neural mechanism is provided by theta-paced sweeps in grid- and place-cell population activity, which sample nearby space in a left-right-alternating pattern coordinated by parasubicular direction signals. During exploration, this alternation promotes uniform spatial coverage, but whether sweeps can be flexibly tuned to locations of particular interest remains unclear. Using large-scale Neuropixels recordings in freely-behaving rats, we show that sweeps and direction signals are rapidly and dynamically modulated: they track moving targets during pursuit, precede orienting responses during immobility, and reverse during backward locomotion — without prior spatial learning. Similar modulation occurs during REM sleep. Canonical head-direction signals remain head-aligned. These findings identify sweeps as a flexible, attention-like mechanism for selectively sampling allocentric cognitive maps.

Competing Interest Statement: The authors have declared no competing interest.

Funding: European Research Council, Synergy Grant 951319 (EIM); The Research Council of Norway, Centre of Neural Computation 223262 (EIM, MBM), Centre for Algorithms in the Cortex 332640 (EIM, MBM), National Infrastructure grants (NORBRAIN, 295721 and 350201); The Kavli Foundation (https://ror.org/00kztt736); Ministry of Science and Education, Norway (EIM, MBM); Faculty of Medicine and Health Sciences, NTNU, Norway (AZV).
www.biorxiv.org
January 28, 2026 at 10:28 AM
Reposted by Tim Behrens
A geometric shape regularity effect in the human brain.

🔗 buff.ly/4UsILev
January 27, 2026 at 4:11 PM
Reposted by Tim Behrens
Authors can now include video explainers in their papers with eLife!

Take a look at our first example from @mathiassablemeyer.bsky.social in ‘A geometric shape regularity effect in the human brain’
buff.ly/LzvvVb9
January 26, 2026 at 11:28 PM
www.bbc.co.uk/news/videos/...

(BBC Verify is known for clear, fact-based analysis without any political angle.)
Unpicking the second Minneapolis shooting frame by frame
BBC Verify has analysed footage of the shooting from multiple angles, piecing together a detailed picture of what happened.
www.bbc.co.uk
January 26, 2026 at 6:22 PM
Reposted by Tim Behrens
*Multi-region computations in the brain*
When two regions are better than one...
doi.org/10.1016/j.ne...
#neuroskyence
January 23, 2026 at 6:15 PM
Reposted by Tim Behrens
At @elife.bsky.social you can now include explainer videos with every figure. Like going to a seminar while you engage with the paper. First example here elifesciences.org/articles/106...

Click the arrows next to each figure to get a video of @mathiassablemeyer.bsky.social explaining it for you!
January 22, 2026 at 6:16 PM
Reposted by Tim Behrens
Should you go to academia or industry for research in AI or cognitive science? It's the most common question I get asked by PhD students, and I've written up some of my thoughts on the answer, as an epilogue to my research-focused series on these fields: infinitefaculty.substack.com/p/on-researc...
On research careers in academia and industry
The epilogue to a series on Cognitive Science and AI
infinitefaculty.substack.com
January 23, 2026 at 3:13 PM
I like journal clubs where you are only allowed to say positive things about a paper. They are so much more satisfying.
My suspicion, from dealing with Reviewer 2 over a 30-year career, is that PIs can contribute by supporting balance in, e.g., journal clubs. I think new generations of GLPs (grumpy lab persons) are raised at the PhD/postdoc level, when trainees are encouraged to be destructively critical of other labs' work.
Let's talk about "grumpy lab person". Many labs have them. With an eye to keeping science at its most rigorous, they cross the line into criticism that's too harsh. They are the ones who risk killing your scientific spirit. They are reviewer 2. /1
January 23, 2026 at 8:51 AM
Reposted by Tim Behrens
very useful and a step towards the publication of the future.

If you think about it, it's kind of ridiculous that we're still stuck with what is basically a digitized version of a printed paper rather than using the vast possibilities of the web
January 23, 2026 at 8:39 AM
Reposted by Tim Behrens
Such a great idea!
January 22, 2026 at 10:35 PM
Reposted by Tim Behrens
This is amazing
January 22, 2026 at 9:36 PM
Reposted by Tim Behrens
As a student, this is very cool!!
January 22, 2026 at 7:54 PM
This is a cool paper!
Excited to see this Version Of Record of my work out in @elife.bsky.social!
elifesciences.org/articles/106...
We investigate the mental representation of geometric shapes in adults and children using fMRI and MEG. Each figure has a video of me explaining the figure: go and read it, or read below.
January 22, 2026 at 7:18 PM
Reposted by Tim Behrens
I am happy to share that I have received a Postdoc.Mobility fellowship from @snsf.ch 🎉🥳

This fellowship will support my research at
@sainsburywellcome.bsky.social in the groups of Tom Mrsic-Flogel and @behrenstimb.bsky.social to uncover how the frontal cortex keeps track of goals.
January 18, 2026 at 4:46 PM
Reposted by Tim Behrens
Who did this? 🤣
January 16, 2026 at 11:19 PM
Reposted by Tim Behrens
If you happen to be in Oxford on Monday, I'll be speaking at @oxexppsy.bsky.social about my recent research on cognition and emotion in navigating the Marshall Islands:

www.psy.ox.ac.uk/events
Events
www.psy.ox.ac.uk
January 16, 2026 at 2:31 PM
Put your high-level hypotheses in a stand-alone coloured box that no reviewers can miss
January 16, 2026 at 10:47 AM