Ellie Holton
@eleanor-holton.bsky.social
160 followers 210 following 12 posts
Cog neuro of learning and decision-making at University of Oxford with @summerfieldlab.bsky.social https://eleanorholton.github.io/
Pinned
eleanor-holton.bsky.social
New preprint out with @summerfieldlab.bsky.social! When does new learning interfere with existing knowledge? We compare continual learning in humans and artificial neural networks, revealing similar patterns of transfer & catastrophic interference (1/8) osf.io/preprints/ps...
Reposted by Ellie Holton
danmirea.bsky.social
🚨Out now in @cp-trendscognsci.bsky.social 🚨

We explore the use of cognitive theories/models with real-world data for understanding mental health.

We review emerging studies and discuss challenges and opportunities of this approach.

With @yaelniv.bsky.social and @eriknook.bsky.social

Thread ⬇️
Reposted by Ellie Holton
Check out @tifenpan.bsky.social's just-published paper! We demonstrate how to use RNNs to infer latent variables from cognitive models, even when standard methods don't work easily.
Reposted by Ellie Holton
bonan.bsky.social
My Lab at the University of Edinburgh🇬🇧 has funded PhD positions for this cycle!

We study the computational principles of how people learn, reason, and communicate.

It's a new lab, and you will be playing a big role in shaping its culture and foundations.

Spread the word!
Reposted by Ellie Holton
tsonj.bsky.social
New preprint by William D'Alessandro and myself:

The promise and peril of AI surrogacy in psychological research

osf.io/preprints/ps...
Reposted by Ellie Holton
denislan.bsky.social
My first PhD paper - with @lhuntneuro.bsky.social and @summerfieldlab.bsky.social - is now out in @plosbiology.org! We ask: how do humans (and deep neural networks) navigate flexibly even in unfamiliar environments, such as a new city? Link: plos.io/45uSwNm 🧵 (1/6)
[Image: cartoon of the author looking at a map, with a stadium behind them and a hotel and ferris wheel across the river, thinking about going to the ferris wheel]
Reposted by Ellie Holton
qlu.bsky.social
I’m thrilled to announce that I will start as a presidential assistant professor in Neuroscience at the City U of Hong Kong in Jan 2026!
I have RA, PhD, and postdoc positions available! Come work with me on neural network models + experiments on human memory!
RT appreciated!
(1/5)
Reposted by Ellie Holton
elliottwimmer.bsky.social
We are excited to post a new preprint with Shiyi Liang @shiyiliang.bsky.social:

'Reinforcement learning is positively associated with anhedonia symptoms' osf.io/preprints/ps...
(a bit late here – a version was online back in December)

@mpc-comppsych.bsky.social
Reposted by Ellie Holton
katenuss.bsky.social
New preprint 📝 - another fun collaboration with @arikahn.bsky.social, @licezhang.bsky.social, @nathanieldaw.bsky.social, @hartleylabnyu.bsky.social

We ask: Why do children and adults often derive different representations of their environments from the same experiences? 🧠👶🔎

osf.io/preprints/ps...
Reposted by Ellie Holton
levikumle.bsky.social
Excited to see this latest paper from my PhD out! Huge thanks to everyone who contributed!
eleanor-holton.bsky.social
Ah yes true, they would probably relearn Rule A with feedback but I expect they would retain good generalisation (to untrained test examples from the same task) unlike the splitter group
eleanor-holton.bsky.social
Thanks! Great question, our task isn’t long enough to tell but I’d love to know this too - my prediction would be that more splitters would become lumpers (ie learn to generalise with time and/or sleep like “grokking”) but very interested if you’d predict otherwise!
eleanor-holton.bsky.social
Our work bridges psychology & AI, showing that continual learning is shaped by how knowledge is organised, with shared trade-offs across both systems. (8/8)
eleanor-holton.bsky.social
We could capture this mixture of behaviour by tweaking the training regime of ANNs (‘rich’ vs. ‘lazy’), shifting them towards shared representations (enabling transfer/generalisation, but at the cost of interference) versus separated representations. (7/8)
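For intuition, here is a minimal sketch of how the ‘rich’ vs. ‘lazy’ distinction is often probed: by scaling the initial weights (small inits push a network towards feature learning, large inits towards the lazy regime). The toy task, architecture, and metric below are my own illustrative assumptions, not the preprint's setup.

```python
# Toy probe of 'rich' vs. 'lazy' training regimes via initial weight scale.
# Illustrative only: task, architecture, and metric are assumptions here.
import torch
import torch.nn as nn

def make_mlp(init_scale: float) -> nn.Sequential:
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
    with torch.no_grad():
        for p in model.parameters():
            p.mul_(init_scale)  # small scale -> rich; large scale -> lazy
    return model

def relative_weight_change(model, x, y, epochs=200, lr=1e-2) -> float:
    # How far weights travel from initialisation, relative to their norm.
    # Lazy networks barely move (they linearise around init); rich ones do.
    w0 = torch.cat([p.detach().flatten().clone() for p in model.parameters()])
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    w1 = torch.cat([p.detach().flatten() for p in model.parameters()])
    return (torch.norm(w1 - w0) / torch.norm(w0)).item()

g = torch.Generator().manual_seed(1)
x = torch.randn(512, 16, generator=g)
y = (x @ torch.randn(16, generator=g) > 0).long()  # random linear labels
for scale in (0.1, 1.0, 10.0):
    print(f"init scale {scale}: relative weight change "
          f"{relative_weight_change(make_mlp(scale), x, y):.3f}")
```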
eleanor-holton.bsky.social
While humans learning similar tasks showed more interference than those learning dissimilar tasks, this wasn’t the case for everyone. Some people avoided interference, but they were also worse at transfer to new tasks & at generalisation within a task! (6/8)
eleanor-holton.bsky.social
In ANNs, this can be explained by whether tasks share solutions. Similar tasks were learned by adapting existing representations, which were corrupted in the process. Dissimilar tasks were learned as orthogonal representations, reducing interference. (5/8)
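One way to cash out “shared vs. orthogonal” in code (an illustrative metric under my own assumptions, not necessarily the paper's analysis) is to compare the hidden-layer subspaces each task's inputs occupy:

```python
# Illustrative overlap metric for hidden representations of two tasks.
# An assumption of this sketch, not necessarily the paper's analysis.
import torch

def subspace_overlap(h_a: torch.Tensor, h_b: torch.Tensor, k: int = 5) -> float:
    # h_a, h_b: (n_samples, n_hidden) activations for each task's inputs.
    # Take the top-k principal axes of each task's activity (rows of Vh),
    # then measure alignment of the two k-dimensional subspaces:
    # 1.0 = identical subspaces (shared solution), 0.0 = orthogonal ones.
    va = torch.linalg.svd(h_a - h_a.mean(0), full_matrices=False).Vh[:k]
    vb = torch.linalg.svd(h_b - h_b.mean(0), full_matrices=False).Vh[:k]
    return (torch.linalg.svdvals(va @ vb.T) ** 2).mean().item()

# Sanity check: identical activity overlaps fully; random activity far less.
h = torch.randn(200, 64)
print(subspace_overlap(h, h))                     # ~1.0
print(subspace_overlap(h, torch.randn(200, 64)))  # well below 1.0
```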
eleanor-holton.bsky.social
When Task A & B were similar (‘Near’), both humans & ANNs learned faster, but at a cost: greater transfer across tasks led to higher interference compared to learning dissimilar tasks (‘Far’). (4/8)
eleanor-holton.bsky.social
We taught humans and ANNs two sequential rule-learning tasks (Task A, then Task B), and then re-tested their knowledge of the first (Task A). We studied how patterns of transfer to Task B, and interference on return to Task A, differed as a function of task similarity. (3/8)
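A minimal sketch of that A → B → re-test-A design with a small network; the random linear ‘rules’ and architecture here are placeholders for illustration, not the actual rule-learning paradigm from the preprint:

```python
# Minimal A -> B -> re-test-A continual-learning loop.
# Toy stand-in tasks and model; not the preprint's actual paradigm.
import torch
import torch.nn as nn

def make_task(seed: int, n: int = 512, d: int = 16):
    # A random linear 'rule': label inputs by a random hyperplane.
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, d, generator=g)
    y = (x @ torch.randn(d, generator=g) > 0).long()
    return x, y

def train(model, x, y, epochs=300, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(model, x, y) -> float:
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
xa, ya = make_task(seed=0)  # Task A
xb, yb = make_task(seed=1)  # Task B

train(model, xa, ya)
acc_before = accuracy(model, xa, ya)  # knowledge of A after learning A
train(model, xb, yb)                  # new learning on B
acc_after = accuracy(model, xa, ya)   # re-test A: any drop = interference
print(f"Task A accuracy: {acc_before:.2f} -> {acc_after:.2f}")
```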
eleanor-holton.bsky.social
Artificial neural networks often struggle to learn new tasks without overwriting previous ones, while humans seamlessly integrate new knowledge throughout life. But could the principles governing continual learning in both systems be more alike than we think? (2/8)