Tom George
@tomnotgeorge.bsky.social
220 followers 330 following 31 posts
Neuroscience/ML PhD @UCL • NeuroAI, navigation, hippocampus, ... • Open-source software tools for science (https://github.com/RatInABox-Lab/RatInABox) • Co-organiser of TReND CaMinA summer school 🔎👀 for a postdoc position…
tomnotgeorge.bsky.social
woah, are my retinas working, or is that one character away from #RatInABox 👀...

@dlevenstein.bsky.social is right, could be time for a collab
tomnotgeorge.bsky.social
Deadline extended until 31st January!!!
marcusghosh.bsky.social
Join us in Zambia for the third TReND-CaMinA course: computational neuroscience & machine learning in Africa.

📆Applications open until 15.01.

🧠🧪

trendinafrica.org/trend-camina/
A poster advertising the TReND-CaMinA summer school.
tomnotgeorge.bsky.social
Very nice work by @sjshipley.bsky.social on place cells and Alzheimer’s!
shipleysj.bsky.social
Excited to share that our preprint is out!
doi.org/10.1101/2024...

The structure of place cell reactivations was disordered in AD mice compared to WT! This was predictive of reduced place cell stability and memory performance on a radial-arm maze task.

Here are a few details:

1/5🧵
Disordered Hippocampal Reactivations Predict Spatial Memory Deficits in a Mouse Model of Alzheimer's Disease
Alzheimer's disease (AD) is characterised by progressive memory decline associated with hippocampal degeneration. However, the specific physiological mechanisms underlying hippocampal dysfunction in A...
doi.org
tomnotgeorge.bsky.social
That’s great to hear, reach out if you run into any problems!
tomnotgeorge.bsky.social
Great question. Local optima will always be hard to identify. Ofc if you have a reason to believe behaviour really _isn’t_ a good initialisation then you shouldn’t use it.

You can always track (and we already do) the log-likelihood of held-out spikes. If this increases, things are looking good.
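For concreteness, here’s a minimal sketch of that held-out check, assuming Poisson spiking and a binned 1D latent. The function name and arguments are hypothetical, not SIMPL’s actual API.

```python
import numpy as np

def heldout_log_likelihood(spikes, latent, curves, edges, dt):
    """Poisson log-likelihood of held-out spike counts (up to a constant).

    spikes : (T, N) held-out spike counts
    latent : (T,)   current latent estimate (1D)
    curves : (N, B) tuning curves in Hz over B latent bins
    edges  : (B+1,) latent bin edges
    dt     : time-bin width in seconds
    """
    idx = np.clip(np.digitize(latent, edges) - 1, 0, curves.shape[1] - 1)
    lam = np.maximum(curves[:, idx].T * dt, 1e-12)  # (T, N) expected counts
    return np.sum(spikes * np.log(lam) - lam)       # drops the constant log(k!) term
```

If this number keeps climbing across iterations, the refined latent is explaining data it was never fitted on.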
tomnotgeorge.bsky.social
you were right though....the grass is greener over here ;)
tomnotgeorge.bsky.social
This isn’t cheating: behaviour has always been there for the taking and we should exploit it (many techniques specialise in joint behavioural-neural analysis). If we ignore behaviour, SIMPL still works, but the latent space isn’t smooth or “identifiable”...certainly something to consider.

20/21
tomnotgeorge.bsky.social
Initialising at behaviour is a powerful trick here. In many regions (e.g., but not limited to, hippocampus 👀), a behavioural correlate (position 👀) exists which is VERY CLOSE to the true latent. Starting right next to the global maximum makes optimisation straightforward.
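Roughly, the refinement loop might look like the toy sketch below: 1D, binned rather than smoothed, and each time bin decoded independently. A real implementation would smooth its estimates and decode with temporal structure; every name here is invented for illustration.

```python
import numpy as np

def refine(spikes, behaviour, edges, dt, n_iter=10):
    """Toy EM-style loop: initialise the latent at behaviour, then alternate
    between fitting tuning curves and re-decoding the latent from spikes.

    spikes    : (T, N) spike counts
    behaviour : (T,)   measured 1D behavioural variable (e.g. position)
    edges     : (B+1,) bin edges covering the latent space
    """
    latent = behaviour.copy()  # the powerful initialisation
    centres = 0.5 * (edges[:-1] + edges[1:])
    for _ in range(n_iter):
        # "M-step": tuning curves = binned spike counts / occupancy (Hz)
        idx = np.clip(np.digitize(latent, edges) - 1, 0, len(centres) - 1)
        occ = np.bincount(idx, minlength=len(centres)) * dt
        curves = np.stack(
            [np.bincount(idx, weights=spikes[:, n], minlength=len(centres))
             for n in range(spikes.shape[1])]
        ) / np.maximum(occ, 1e-12)
        # "E-step": maximum-likelihood decode of each time bin independently
        logl = spikes @ np.log(np.maximum(curves * dt, 1e-12)) - (curves * dt).sum(0)
        latent = centres[np.argmax(logl, axis=1)]
    return latent, curves
```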
tomnotgeorge.bsky.social
These non-local dynamics aren’t a new discovery by any means but this is, in our opinion, the correct and quickest way to find them.

18/21
tomnotgeorge.bsky.social
And there’s cool stuff in the optimised latent too. It mostly tracks behaviour (hippocampus is still mostly a cognitive map) but makes occasional big jumps, as though the animal is contemplating another location in the environment.

17/21
tomnotgeorge.bsky.social
Dubious analogy: using behaviour alone to study neural representations (the status quo for hippocampus) is like wearing mittens and trying to figure out the shape of a delicate statue in the dark. Everything is blurred.

16/21
tomnotgeorge.bsky.social
The old paradigm of “just smooth spikes against position” is wrong! Those aren’t tuning curves in a causal sense…they’re just smoothed spikes. These “real” tuning curves (the output of an algorithm like SIMPL) are the ones we should be analysing/theorising about.

15/21
tomnotgeorge.bsky.social
It’s quite a sizeable effect. The median place cell has 23% more place fields...the median place field is 34% smaller and has a firing rate 45% higher. It’s hard to overstate this result…

14/21
tomnotgeorge.bsky.social
When applied to a similarly large (but now real) hippocampal dataset, SIMPL optimises the tuning curves. “Real” place fields, it turns out, are much smaller, sharper, more numerous and more uniformly distributed than previously thought.

13/21
tomnotgeorge.bsky.social
SIMPL outperforms CEBRA (a contemporary, more general-purpose, neural-net-based technique) in both performance and compute time: it’s over 30x faster. It also beats pi-VAE and GPLVM.

12/21
tomnotgeorge.bsky.social
Let’s test SIMPL: we make artificial grid cell data and add noise to the position (latent) variable. This noise blurs the grid fields out of recognition. Apply SIMPL and you recover a perfect estimate of the true trajectory and grid fields in a handful of compute-seconds.

11/21
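A 1D toy version of that test (the paper’s simulation is richer; all names and numbers here are invented): generate a periodic “grid-like” cell from a true latent, corrupt the latent, and watch the naive tuning curve blur.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt = 20_000, 0.02
z = np.abs(np.cumsum(rng.normal(0, 0.01, T))) % 1.0    # true latent on [0, 1)
rate = 2 + 30 * np.exp(np.cos(2 * np.pi * 5 * z) - 1)  # periodic "grid" cell (Hz)
spikes = rng.poisson(rate * dt)                        # spikes come from z
noisy = (z + rng.normal(0, 0.05, T)) % 1.0             # corrupted "behaviour"

edges = np.linspace(0, 1, 51)
def naive_curve(x):  # occupancy-normalised histogram of spikes against x
    idx = np.clip(np.digitize(x, edges) - 1, 0, 49)
    occ = np.bincount(idx, minlength=50) * dt
    return np.bincount(idx, weights=spikes, minlength=50) / np.maximum(occ, 1e-12)

sharp, blurred = naive_curve(z), naive_curve(noisy)    # blurred has washed-out peaks
```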
tomnotgeorge.bsky.social
I think this gif explains it well. The animal is “thinking” of the green location but located at the yellow one. Spikes plotted against the green location give sharp grid fields; plotted against the yellow, they’re blurred.

In the brain this discrepancy will be caused by replay, planning, uncertainty and more.
tomnotgeorge.bsky.social
behaviour ≠ latent.

This is obvious in non-navigational regions. But for HPC/MEC/etc. it’s definitely often overlooked…behaviour alone explains the spikes SO well (read: grid cells look pretty) that it’s common to just stop there. But that leaves some error.

9/21
tomnotgeorge.bsky.social
In order to know the “true” tuning curves we need to know the “true” latent which passed through those curves to generate the spikes, i.e. what the animal was thinking of…not what it was doing. This latent, of course, is often close to a behavioural readout such as position.
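As a hypothetical sketch of that generative picture (toy numbers, assuming Poisson spiking):

```python
import numpy as np

rng = np.random.default_rng(1)
T, dt = 1000, 0.02
z = rng.uniform(0, 1, T)                # the "true" latent: what's in mind
x = (z + rng.normal(0, 0.03, T)) % 1.0  # behaviour: close to z, but not equal
f = lambda u: 1 + 25 * np.exp(-(u - 0.5) ** 2 / (2 * 0.05 ** 2))  # toy place field (Hz)
spikes = rng.poisson(f(z) * dt)         # spikes are generated from z, not x
```

Plot `spikes` against `x` and the field looks wider than `f` really is; plot against `z` and you recover it.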
tomnotgeorge.bsky.social
So what’s the idea inspiring this? Basically, tuning curves (defined by plotting spikes against behaviour) aren’t the brain’s “real” tuning curves in any causal sense. But often we analyse and theorise about them as though they are. That’s a problem.

7/21
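For reference, that status-quo “tuning curve” is essentially an occupancy-normalised histogram, as in this minimal sketch (real pipelines usually add Gaussian smoothing; all names are illustrative):

```python
import numpy as np

def behavioural_tuning_curve(spike_counts, behaviour, edges, dt):
    """Firing rate per behaviour bin: spike count / time spent in the bin."""
    idx = np.clip(np.digitize(behaviour, edges) - 1, 0, len(edges) - 2)
    occupancy = np.bincount(idx, minlength=len(edges) - 1) * dt  # seconds per bin
    counts = np.bincount(idx, weights=spike_counts, minlength=len(edges) - 1)
    return counts / np.maximum(occupancy, 1e-12)                 # Hz
```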
tomnotgeorge.bsky.social
SIMPL is also “identifiable”, returning not just any tuning curves but specifically THE tuning curves which generated the data (there are some caveats/subtleties here).

6/21