Blake Richards
@tyrellturing.bsky.social
11K followers 3.2K following 2.6K posts
Researcher at Google and CIFAR Fellow, working on the intersection of machine learning and neuroscience in Montréal (academic affiliations: @mcgill.ca and @mila-quebec.bsky.social).
Reposted by Blake Richards
shahabbakht.bsky.social
Regardless of what explainability/mech interp in AI is actually after, and whether or not they know what they’re searching for, we can confidently say they’re pursuing what systems neuroscience has pursued for decades, with very similar puzzles and confusions.
bayesianboy.bsky.social
What problem is explainability/interpretability research trying to solve in ML, and do you have a favorite paper articulating what that problem is?
Reposted by Blake Richards
wutsaiyale.bsky.social
WTI's Inspiring Speaker Series continues this week with Timothy Lillicrap, “Model-based reinforcement learning for reasoning in games and beyond”

10.9.25 | 10 – 11:15a
100 College St, Workshop 1116
🔗 wti.yale.edu/event/2025-10/inspiring-speaker-tim-lillicrap

All Yale community members are welcome.
Reposted by Blake Richards
mschrimpf.bsky.social
A glimpse at what #NeuroAI brain models might enable: a topographic vision model predicts stimulation patterns that steer complex object recognition behavior in primates. This could be a key 'software' component for visual prosthetic hardware 🧠🤖🧪
Reposted by Blake Richards
drlaschowski.bsky.social
Imagine a brain decoding algorithm that could generalize across different subjects and tasks. Today, we’re one step closer to achieving that vision.

Introducing the flagship paper of our brain decoding program: www.biorxiv.org/content/10.1...
#neuroAI #compneuro @utoronto.ca @uhn.ca
Reposted by Blake Richards
aidanhorner.bsky.social
If you're interested in the cognitive neuroscience of memory feel free to email me!

I do experimental psychology, brain imaging (fMRI and MEG) and a bit of modelling. Lab is doing stuff on forgetting, aging, schemas, and event boundaries, but we're not limited to that.

#psychscisky #neuroskyence
aidanhorner.bsky.social
It's that time of year when many start thinking about applying for PhDs. If you're applying for a UK PhD position, here is a blog post I wrote a while back that might be helpful

#cognition #psychscisky #neuroskyence #psychjobs
How to get PhD funding in the UK
It is that time of year again. The leaves are turning golden, red, and orange (or just brown), the nights are drawing in, and there is a chi...
aidanhorner.blogspot.com
Reposted by Blake Richards
shahabbakht.bsky.social
Interesting paper suggesting a mechanism for why in-context learning happens in LLMs.

They show that LLMs implicitly apply an internal low-rank weight update shaped by the context. It's cheap (thanks to the low rank) but effective at adapting the model's behavior.

#MLSky

arxiv.org/abs/2507.16003
Learning without training: The implicit dynamics of in-context learning
One of the most striking features of Large Language Models (LLM) is their ability to learn in context. Namely at inference time an LLM is able to learn new patterns without any additional weight updat...
arxiv.org
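As a rough illustration of the paper's core identity, here is a minimal NumPy sketch (my own toy, not the authors' code; all variable names are assumptions for illustration): shifting a linear layer's input by a context-dependent vector is exactly equivalent to leaving the input alone and applying a rank-1 update to the layer's frozen weights.

# Minimal sketch (assumed setup, not the paper's code): adding a
# context-dependent shift `delta` to a linear layer's input is identical
# to applying a rank-1 update to the layer's frozen weights.
import numpy as np

rng = np.random.default_rng(0)
d = 8
W = rng.standard_normal((d, d))    # frozen layer weights
x = rng.standard_normal(d)         # query token representation
delta = rng.standard_normal(d)     # shift contributed by the context (e.g. via attention)

dW = np.outer(W @ delta, x) / (x @ x)   # rank-1 "implicit" weight update

out_with_context = W @ (x + delta)      # context added to the activations
out_with_update = (W + dW) @ x          # context folded into the weights
assert np.allclose(out_with_context, out_with_update)

The update dW is cheap to represent because it is an outer product of two vectors, which matches the post's point about low-rank adaptation being inexpensive.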
Reposted by Blake Richards
sarahdorner.bsky.social
The Conseil d'Outremont (with new puppeteers) gives its support to the group « Outremont 0 km », which denounces the 0 km of active-transportation infrastructure built in Outremont during Ensemble Montréal's mandate.
The puppets of the Conseil d'Outremont give their support to the group « Outremont 0 km ».
Reposted by Blake Richards
shahabbakht.bsky.social
So excited to see this preprint released from the lab into the wild.

Charlotte has developed a theory of how the learning curriculum influences generalization.
Our theory makes straightforward neural predictions that can be tested in future experiments. (1/4)

🧠🤖 🧠📈 #MLSky
charlottevolk.bsky.social
🚨 New preprint alert!

🧠🤖
We propose a theory of how learning curriculum affects generalization through neural population dimensionality. Learning curriculum is a determining factor of neural dimensionality - where you start from determines where you end up.
🧠📈

A 🧵:

tinyurl.com/yr8tawj3
The curriculum effect in visual learning: the role of readout dimensionality
Generalization of visual perceptual learning (VPL) to unseen conditions varies across tasks. Previous work suggests that training curriculum may be integral to generalization, yet a theoretical explan...
tinyurl.com
Reposted by Blake Richards
karl-jacoby.bsky.social
Only an administration intent on committing war crimes in the present and future would stoop to calling Wounded Knee a "battle" rather than what it truly was: a massacre of over 250 Lakotas, mainly women, children, and the elderly. 1/
tyrellturing.bsky.social
Who's to say filling in requires activity in V1?
Reposted by Blake Richards
mjaggi.bsky.social
We're hiring again for AI research engineering roles: Join the team behind the Apertus LLM, if you share our passion to work on impactful AI that's truly open.

careers.epfl.ch/job/Lausanne...
AI Research Engineers - Swiss AI Initiative
careers.epfl.ch
Reposted by Blake Richards
kristorpjensen.bsky.social
I’m super excited to finally put my recent work with @behrenstimb.bsky.social on bioRxiv, where we develop a new mechanistic theory of how PFC structures adaptive behaviour using attractor dynamics in space and time!

www.biorxiv.org/content/10.1...
tyrellturing.bsky.social
Strong recommend!!!

Really fascinating exploration on the links between life, intelligence, prediction, and computation.

(Disclosure: @blaiseaguera.bsky.social is now my boss.)
tyrellturing.bsky.social
A bit, depends on your definition... Our team at Google is doing neuro-inspired research, but the goal is not neuro insights, per se. (Though we will do a bit of that.)
Reposted by Blake Richards
dorialexander.bsky.social
Might not be the consensus opinion on here, but totally true. More exciting developments right now than at any time in the past.
tyrellturing.bsky.social
Probably less... you're welcome. 😝
tyrellturing.bsky.social
4/4) Keep your eyes out for what our Paradigms of Intelligence team will be producing in the coming months and years. I’m pumped about the work and I’m confident that this group will produce some major breakthroughs in the near future to make AI more efficient and robust. 🙂 🧠 🤖
tyrellturing.bsky.social
3/4) I’m going to maintain a reduced position at @mcgill.ca and @mila-quebec.bsky.social, so don’t consider me as having completely abandoned academia. (I'm lucky to be where I am...) But, I’m keen to get more time to work on some bigger frontier problems I couldn’t tackle in my own lab.
tyrellturing.bsky.social
2/4) This is a big step for me, having spent my adult life in academia. But, there was no way I could pass up an opportunity to work with some of the smartest iconoclasts in the business, including @blaiseaguera.bsky.social himself, @dileeplearning.bsky.social, and many others.
tyrellturing.bsky.social
1/4) I’m excited to announce that I have joined the Paradigms of Intelligence team at Google (github.com/paradigms-of...)! Our team, led by @blaiseaguera.bsky.social, is bringing forward the next stage of AI by pushing on some of the assumptions that underpin current ML.

#MLSky #AI #neuroscience
Paradigms of Intelligence Team
Advance our understanding of how intelligence evolves to develop new technologies for the benefit of humanity and other sentient life - Paradigms of Intelligence Team
github.com
Reposted by Blake Richards
qqzhang.bsky.social
Does predictive coding work in SPACE or in TIME? Most neuroscientists assume TIME, i.e. neurons predict their future sensory inputs. We show that in visual cortex predictive coding actually works across SPACE, just like the original Rao+Ballard theory #neuroscience
www.biorxiv.org/cgi/content/...
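For context on the Rao+Ballard scheme the post invokes, here is a minimal sketch of spatial predictive coding (an illustrative linear toy of my own, not the preprint's model; all names are assumptions): a higher-level population predicts lower-level spatial activity via top-down weights, and inference updates the representation to reduce the prediction error.

# Illustrative sketch of Rao & Ballard-style spatial predictive coding;
# a toy linear model, not the preprint's actual implementation.
import numpy as np

rng = np.random.default_rng(1)
n_input, n_latent = 16, 4
U = rng.standard_normal((n_input, n_latent)) * 0.1   # top-down generative weights
patch = rng.standard_normal(n_input)                 # lower-level (e.g. V1) activity

r = np.zeros(n_latent)   # higher-level representation
lr = 0.1
for _ in range(200):
    prediction = U @ r            # top-down prediction of the spatial input
    error = patch - prediction    # activity of prediction-error units
    r += lr * (U.T @ error)       # inference step: reduce the spatial error

print(np.linalg.norm(patch - U @ r))   # residual error after inference

The key point for the space-vs-time debate is that the error here is computed between simultaneously present spatial signals, not between a neuron's prediction and its own future input.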