Erin Grant
@eringrant.me
5.3K followers 1.3K following 30 posts
Senior Research Fellow @ ucl.ac.uk/gatsby & sainsburywellcome.org {learning, representations, structure} in 🧠💭🤖 my work 🤓: eringrant.github.io not active: sigmoid.social/@eringrant @[email protected], twitter.com/ermgrant @ermgrant
eringrant.me
Hoping you find out and share! 🤗
eringrant.me
Congrats Richard!!
Reposted by Erin Grant
alonaf.bsky.social
I am hiring a post doc at UAlberta, affiliated with Amii! We study language processing in the brain using LLMs and neuroimaging. Looking for someone with experience with ideally both neuroimaging and LLMs, or a willingness to learn. Email me with Qs
apps.ualberta.ca/careers/post...
Postdoctoral Fellow - Language Models and Neuroscience - [email protected]
University of Alberta: [email protected]
apps.ualberta.ca
Reposted by Erin Grant
dataonbrainmind.bsky.social
📢 10 days left to submit to the Data on the Brain & Mind Workshop at #NeurIPS2025!

📝 Call for:
• Findings (4 or 8 pages)
• Tutorials

If you’re submitting to ICLR or NeurIPS, consider submitting here too—and highlight how to use a cog neuro dataset in our tutorial track!
🔗 data-brain-mind.github.io
Data on the Brain & Mind
data-brain-mind.github.io
eringrant.me
I’m recruiting committee members for the Technical Program Committee at #CCN2026.

Please apply if you want to help make submission, review & selection of contributed work (Extended Abstracts & Proceedings) more useful for everyone! 🌐

Helps to have: programming/communications/editorial experience.
Reposted by Erin Grant
rdgao.bsky.social
arguably the most important component of AI for neuroscience:

data, and its usability
dataonbrainmind.bsky.social
🚨 Excited to announce our #NeurIPS2025 Workshop: Data on the Brain & Mind

📣 Call for: Findings (4- or 8-page) + Tutorials tracks

🎙️ Speakers include @dyamins.bsky.social @lauragwilliams.bsky.social @cpehlevan.bsky.social

🌐 Learn more: data-brain-mind.github.io
Reposted by Erin Grant
neurograce.bsky.social
The rumors are true! #CCN2026 will be held at NYU. @toddgureckis.bsky.social and I will be executive-chairing. Get in touch if you want to be involved!
eringrant.me
many thanks to my collaborators, @saxelab.bsky.social and especially Lukas :)
eringrant.me
I like how Rosa Cao (sites.google.com/site/luosha) & @dyamins.bsky.social speculated about task constraints here (doi.org/10.1016/j.co...). I think the Platonic Representation hypothesis is a version of their argument, for multi-modal learning.
eringrant.me
Definitely! Task constraints certainly play a role in determining representational structure, which might interact with what we consider here (efficiency of implementation). We don't explicitly study it. Someone should!
eringrant.me
Main takeaway: Valid representational comparison relies on implicit assumptions (task-optimization *plus* efficient implementation). ⚠️ More work to do on making these assumptions explicit!

🧠 CCN poster (today): 2025.ccneuro.org/poster/?id=w...

📄 ICML paper (July): icml.cc/virtual/2025/poster/44890
ICML Poster: Not all solutions are created equal: An analytical dissociation of functional and representational similarity in deep linear neural networks (ICML 2025)
icml.cc
eringrant.me
Our theory predicts that representational alignment is consistent with *efficient* implementation of similar function. Comparing representations is ill-posed in general, but becomes well-posed under minimum-norm constraints, which we link to computational advantages (noise robustness).
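A minimal numpy sketch of the noise-robustness intuition (my toy construction, not the paper's setup): two factorisations W2 @ W1 of the same linear map implement identical functions, but the balanced, minimum-norm one degrades far less when Gaussian noise is added to the parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target linear map F implemented by a two-layer linear network: y = W2 @ W1 @ x.
F = rng.normal(size=(5, 8))

# Balanced (minimum-norm) factorisation via the SVD: W1 = sqrt(S) V^T, W2 = U sqrt(S).
U, S, Vt = np.linalg.svd(F, full_matrices=False)
W1_bal, W2_bal = np.sqrt(np.diag(S)) @ Vt, U @ np.sqrt(np.diag(S))

# Unbalanced factorisation of the *same* function: rescale the two layers inversely.
c = 50.0
W1_unbal, W2_unbal = c * W1_bal, W2_bal / c
assert np.allclose(W2_bal @ W1_bal, W2_unbal @ W1_unbal)  # identical function

def output_error(W1, W2, sigma=0.01, trials=1000):
    """Mean output error after adding i.i.d. Gaussian noise to all parameters."""
    errs = []
    for _ in range(trials):
        N1 = sigma * rng.normal(size=W1.shape)
        N2 = sigma * rng.normal(size=W2.shape)
        errs.append(np.linalg.norm((W2 + N2) @ (W1 + N1) - F))
    return np.mean(errs)

print("balanced  :", output_error(W1_bal, W2_bal))
print("unbalanced:", output_error(W1_unbal, W2_unbal))  # much larger error
```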
eringrant.me
Function-representation dissociations and the representation-computation link persist in deep nonlinear networks! Using function-invariant reparametrisations (@bsimsek.bsky.social), we break representational identifiability but degrade generalization (a computational consequence).
Function-representation dissociation in ReLU networks. (A-B) MNIST representations before/after prediction-preserving reparametrisation. (C) RSM after function-preserving reparametrisation. (D-E) Performance under input/parameter noise for different solution types.
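One simple instance of a function-invariant reparametrisation in a ReLU network is per-unit positive rescaling (a toy example of mine; the paper's reparametrisations follow Şimşek and colleagues and are richer): the input-output map is untouched while the hidden representation changes.

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)

# Two-layer ReLU network: y = W2 @ relu(W1 @ x).
W1, W2 = rng.normal(size=(16, 10)), rng.normal(size=(3, 16))
x = rng.normal(size=(10, 100))  # a batch of inputs (columns)

# Function-preserving reparametrisation: rescale each hidden unit by d > 0
# on the way in and by 1/d on the way out (relu is positively homogeneous).
d = rng.uniform(0.1, 10.0, size=16)
W1_new, W2_new = np.diag(d) @ W1, W2 @ np.diag(1.0 / d)

h, h_new = relu(W1 @ x), relu(W1_new @ x)
y, y_new = W2 @ h, W2_new @ h_new

print(np.allclose(y, y_new))   # True: identical input-output function
print(np.allclose(h, h_new))   # False: the hidden representation has changed
```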
eringrant.me
We demonstrate that representation analysis and comparison are ill-posed, giving both false negatives and false positives, unless we work with *task-specific representations*. These are interpretable *and* robust to noise (i.e., representational identifiability comes with computational advantages).
Hidden-layer representations for a semantic hierarchy task. (A) Task structure. (B) Input/target encoding. (C-E) Hidden representations and representational similarity matrices for task-agnostic (C: LSS) vs. task-specific (D: MRNS, E: MWNS) solutions.
eringrant.me
We parametrised this solution hierarchy to find differences in handling of task-irrelevant dimensions: Some solutions compress them away (creating task-specific, interpretable representations), while others preserve arbitrary structure in null spaces (creating arbitrary, uninterpretable representations).
The solution manifold. (A) Solution manifold for a 3-parameter linear network, showing GLS and constrained LSS, MRNS, and MWNS solutions. (B-E) Input/output weight relationships and parametrisation structure for each solution type.
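A toy numpy illustration of the null-space point (my construction; it does not reproduce the poster's LSS/MRNS/MWNS parametrisation): when task inputs span only a subspace, a solution can carry arbitrary extra structure in the orthogonal, task-irrelevant directions without changing any task output.

```python
import numpy as np

rng = np.random.default_rng(2)

# Task inputs occupy only a 3-dimensional subspace of a 10-dimensional input space.
B = np.linalg.qr(rng.normal(size=(10, 3)))[0]   # orthonormal basis of the task subspace
X = B @ rng.normal(size=(3, 50))                # task inputs (columns)

# Linear network y = W2 @ W1 @ x.
W1, W2 = rng.normal(size=(6, 10)), rng.normal(size=(4, 6))

# Perturb the first layer only along task-irrelevant input directions:
# rows of Delta lie in the orthogonal complement of the task subspace.
P_null = np.eye(10) - B @ B.T                   # projector onto task-irrelevant directions
Delta = rng.normal(size=(6, 10)) @ P_null
W1_messy = W1 + 5.0 * Delta

print(np.allclose(W2 @ W1 @ X, W2 @ W1_messy @ X))    # True: identical behaviour on the task
x_probe = rng.normal(size=(10, 1))                    # a probe input outside the task subspace
print(np.allclose(W1 @ x_probe, W1_messy @ x_probe))  # False: arbitrary structure off-task
```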
eringrant.me
To analyse this dissociation in a tractable model of representation learning, we characterize *all* task solutions for two-layer linear networks. Within this solution manifold, we identify a solution hierarchy in terms of what implicit objectives are minimized (in addition to the task objective).
Task solution hierarchy defined by implicit regularisation objectives.
eringrant.me
Deep networks have parameter symmetries, so we can walk through solution space, changing all weights and representations, while keeping output fixed. In the worst case, function and representation are *dissociated*.

(Networks can have the same function with the same or different representation.)
Example of a failure case. (A) A random walk on the solution manifold of a two-layer linear network reveals that weights can change continuously, inducing changes in the (B) network parametrisation and thus the (C) hidden-layer representations, while preserving the (D) network output.
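A minimal numpy sketch of the symmetry described above, assuming a two-layer linear network: inserting any invertible matrix A between the layers leaves the input-output function fixed while remapping the hidden representation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-layer linear network: y = W2 @ W1 @ x, hidden representation h = W1 @ x.
W1, W2 = rng.normal(size=(6, 10)), rng.normal(size=(4, 6))
X = rng.normal(size=(10, 100))

# Any invertible A applied between the layers is a symmetry of the function:
# (W2 @ A^-1) @ (A @ W1) = W2 @ W1, so we can "walk" through solution space.
A = np.eye(6) + 0.1 * rng.normal(size=(6, 6))   # small but arbitrary change of hidden basis
W1_new, W2_new = A @ W1, W2 @ np.linalg.inv(A)

h, h_new = W1 @ X, W1_new @ X
y, y_new = W2 @ h, W2_new @ h_new

print(np.allclose(y, y_new))   # True: the network's function is unchanged
print(np.allclose(h, h_new))   # False: the hidden representation is different
```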
eringrant.me
Are similar representations in neural nets evidence of shared computation? In new theory work w/ Lukas Braun (lukasbraun.com) & @saxelab.bsky.social, we prove that representational comparisons are ill-posed in general, unless networks are efficient.

@icmlconf.bsky.social @cogcompneuro.bsky.social
eringrant.me
Want to contribute to this debate at #CCN2025? Please come to our session today, fill out the anonymous survey (forms.gle/yDBBcBZybGjogksC8), and comment on the GAC page (sites.google.com/ccneuro.org/gac2020/gacs-by-year/2025-gacs/2025-1)! Your perspectives will shape our eventual GAC paper. 👥
eringrant.me
This GAC focuses on three debates/questions around benchmarks in cognitive science (the what, why, and how): (1) Should data or theory come first? (2) Should we focus on replication or exploration? (3) What incentives should we build up, if we choose to invest effort as a community?
The three questions of the GAC:
1. What should benchmarks measure?
2. What should the goals of a benchmark be?
3. How should benchmarks be structured?
eringrant.me
Cognitive science aims for more than mere prediction: We aim to build theories. Yet, evaluations in cognitive science tend to be narrow tests of a specific theory. How can we create benchmarks to make empirical validation more systematic, while preserving our goal of theory-driven cognitive science?
eringrant.me
Cognitive science met computational methods sooner than many scientific domains, but hasn’t yet fully embraced *benchmarks*: Shared evaluation challenges that focus on open data and reproducible methods (doi.org/10.1162/99608f92.b91339ef). How could we get benchmarking right for cognitive science? 🤔
Data Science at the Singularity
doi.org
eringrant.me
Our #CCN2025 GAC debate w/ @gretatuckute.bsky.social, Gemma Roig (www.cvai.cs.uni-frankfurt.de), Jacqueline Gottlieb (gottlieblab.com), Klaus Oberauer, @mschrimpf.bsky.social & @brittawestner.bsky.social asks:

📊 What benchmarks are useful for cognitive science? 💭
2025.ccneuro.org/gac
Speakers and organizers of the GAC debate. Time and location of the GAC debate: 5 PM in Room C1.03.