Blake Richards
@tyrellturing.bsky.social
Researcher at Google and CIFAR Fellow, working on the intersection of machine learning and neuroscience in Montréal (academic affiliations: @mcgill.ca and @mila-quebec.bsky.social).
A tale of the need for flexible cognition... 😂
In Tesco’s this afternoon: an elderly Polish man at the self-checkout, trying to buy a pint of milk and a white radish. There is no picture for the radish on the machine. The assistant doesn’t know what it is. She asks a colleague: “It’s a white radish.” There is no entry for it on the machine.
November 22, 2025 at 8:41 PM
Reposted by Blake Richards
AGI is just astrology for smart computer boys
November 21, 2025 at 3:12 AM
Reposted by Blake Richards
PSA to academics posting threads about your paper here: you can (and should) post the link to the paper in the first post. Your X/Twitter brain rot may have you thinking otherwise, but please free yourself of that. (Also you can call them 'blue-prints' if you want).
November 21, 2025 at 2:56 AM
Reposted by Blake Richards
I think almost all scientific projects should be planned carefully. And I think an app can dramatically improve that. So I wrote an app for that (free for now; if you can fund this, let me know). I tested it quite a bit (>8000 users in beta so far). Try it: planyourscience.com
November 20, 2025 at 3:33 PM
Reposted by Blake Richards
my little take on whole-brain neurophysiology and what it tells us about global coordination of neural activity on behavioural timescales

(I steered clear of tasteless analogies for this one...)

authors.elsevier.com/a/1m7H5_LsQS...
authors.elsevier.com
November 19, 2025 at 2:05 PM
Reposted by Blake Richards
We went back to the drawing board to think about what information is available to the visual system upon which it could build scene representations.

The outcome: a self-supervised training objective based on active vision that beats the SOTA on NSD representational alignment. 👇
November 18, 2025 at 2:14 PM
Reposted by Blake Richards
Putting the figures at the end of your preprint is one thing, but separating the CAPTIONS from the figures (with both at the end of the paper) is just plain cruel
November 18, 2025 at 3:07 PM
Reposted by Blake Richards
Coal isn't cost-effective anymore, and countries recognize it. Huge, since South Korea has been relying on coal for ~30% of their electricity.

www.theguardian.com/environment/...
South Korean decision to close all coal-fired power plants by 2040 sounds alarm for Australian exports
Decision announced at Cop30 climate conference signposts risks for Australia’s reliance on fossil fuel exports, analysts say
www.theguardian.com
November 17, 2025 at 2:51 PM
Reposted by Blake Richards
This is an excellent blueprint on a very fascinating use of an AI scientist! And the results are super cool and interesting! 🤩
I have been asked this when talking about our work on using power laws to study representation quality in deep neural networks; glad to have a more concrete answer now! 😃
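(For anyone curious what that kind of power-law analysis looks like in practice, here is a minimal, generic sketch, not the authors' exact method: estimate the decay exponent of a layer's covariance eigenspectrum. The `features` array and the helper name are assumptions for illustration; a flatter spectrum, i.e. smaller alpha, corresponds to higher effective dimensionality.)

```python
# Hypothetical sketch: fit lambda_i ~ i^(-alpha) to the eigenspectrum of a
# representation matrix, a common proxy for representation quality in this literature.
import numpy as np

def eigenspectrum_exponent(features: np.ndarray, n_ranks: int = 100) -> float:
    """Return the power-law exponent alpha of the covariance eigenspectrum."""
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / centered.shape[0]
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1][:n_ranks]
    ranks = np.arange(1, len(eigvals) + 1)
    # slope of log(eigenvalue) vs. log(rank) is -alpha
    slope, _ = np.polyfit(np.log(ranks), np.log(eigvals), 1)
    return -slope

# Toy usage: random features stand in for a layer's activations (n_samples, n_units)
print(eigenspectrum_exponent(np.random.randn(1000, 256)))
```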
November 16, 2025 at 10:29 PM
Reposted by Blake Richards
GDM WeatherNext 2

8x faster than v1, it can model extreme situations and game out scenarios in one minute flat on a single TPU (as opposed to hours of supercomputer time for traditional algorithms)

will be available in all of Google’s weather apps

blog.google/technology/g...
WeatherNext 2: Our most advanced weather forecasting model
The new AI model delivers more efficient, more accurate and higher-resolution global weather predictions.
blog.google
November 17, 2025 at 6:39 PM
I largely agree with this, with one caveat:

Algorithms/AI *are* a key barrier to progress in neuro-engineering and neuro-tech.

But for actually *understanding* the brain, our major barrier is indeed the inability to measure the things we could use to test our computational theories.
Same for neuroscience. The lack of ability to measure many neurons’ activity, perturb them, and measure intracellular processes and connections is what limits understanding the brain.

The key barriers are not algorithms or AI.

🧪#neuroscience 🧠🤖 #MLSky
November 17, 2025 at 6:14 PM
Reposted by Blake Richards
It is actually an incredibly frustrating time to be a theoretical neuroscientist right now imo, for this reason
Same for neuroscience. The lack of ability to measure many neurons’ activity, perturb them, and measure intracellular processes and connections is what limits understanding the brain.

The key barriers are not algorithms or AI.

🧪#neuroscience 🧠🤖 #MLSky
November 17, 2025 at 1:23 AM
Reposted by Blake Richards
I’m genuinely curious about this. The numbers in the blog are quite impressive.

Has anyone tried it and would like to share their $200 experience?
Today, we're announcing Kosmos, our newest AI Scientist, available today. Kosmos makes fully autonomous scientific discoveries at scale by analyzing datasets and literature, and is the most powerful agent for science so far. Beta users estimate that Kosmos does 6 months of work in a single day.
November 17, 2025 at 4:11 PM
Reposted by Blake Richards
@skrub-data.bsky.social: better data-science primitives for clean code on dataframes

Watch my dotAI talk, it's fun (live coding)!
www.youtube.com/watch?v=bQS4...
skrub really makes it easy to do machine learning with dataframes
Clean code in Data Science - Gael Varoquaux - Skrub DataOps, Probabl:
YouTube video by dotconferences
www.youtube.com
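(A minimal sketch of the kind of workflow skrub streamlines, assuming a small made-up dataframe; this is not code from the talk, just skrub's general-purpose TableVectorizer turning a messy dataframe into numeric features for scikit-learn.)

```python
# Hedged example: encode a messy dataframe with skrub before fitting any
# scikit-learn estimator. The toy data below is invented for illustration.
import pandas as pd
from skrub import TableVectorizer

df = pd.DataFrame({
    "city": ["Montréal", "Paris", "montreal "],  # inconsistent categorical text
    "visits": [12, 7, 3],                        # plain numeric column
})

vectorizer = TableVectorizer()   # picks an encoder per column automatically
X = vectorizer.fit_transform(df)
print(X.shape)
```

The appeal is that the per-column preprocessing lives in one readable object instead of a pile of bespoke pandas code.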
November 17, 2025 at 5:07 PM
Reposted by Blake Richards
Large language models are bureaucracy flashlights.
This raises what I like to call the "AI test for tasks".

If many people use AI to do task X, then that tells you that task X is actually just a brainless administrative exercise.

Any such task should probably be eliminated, and if that's not an option, modified to make automation even easier.
There was never any point to having reference letters. That's why we've all started using AI to do this nonsense task.

References should only be used for short-listed candidates for important positions/awards, and ideally, be done via a call to get the most honest opinion possible.
November 15, 2025 at 6:08 PM
Reposted by Blake Richards
PSA to anyone on any kind of search or admissions committee: try to limit the number of rec letters you require and don't make rec letter writers do anything other than upload a PDF. Thank you!!
November 14, 2025 at 9:15 PM
Reposted by Blake Richards
New (sort of) preprint on arXiv today: a generalized bias-variance decomposition for Bregman divergences! arxiv.org/abs/2511.08789
A Generalized Bias-Variance Decomposition for Bregman Divergences
The bias-variance decomposition is a central result in statistics and machine learning, but is typically presented only for the squared error. We present a generalization of the bias-variance decompos...
arxiv.org
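(For context, the textbook squared-error case that this preprint generalizes; the notation below is mine, not necessarily the paper's. For an estimator \hat{f}_D trained on a random dataset D and a fixed noiseless target y(x):)

```latex
\mathbb{E}_{D}\!\left[\big(\hat{f}_D(x) - y(x)\big)^2\right]
  = \underbrace{\big(\mathbb{E}_{D}[\hat{f}_D(x)] - y(x)\big)^2}_{\text{bias}^2}
  \;+\; \underbrace{\mathbb{E}_{D}\!\left[\big(\hat{f}_D(x) - \mathbb{E}_{D}[\hat{f}_D(x)]\big)^2\right]}_{\text{variance}}
```

The contribution, per the abstract above, is extending this additive bias-plus-variance structure from squared error to general Bregman divergences.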
November 13, 2025 at 8:44 PM
I'm actually impressed/surprised that there's any relationship here at all...
On the subject of data, here are some results from an analysis that I did when I was at NIGMS.

nigms.nih.gov/loop/2011/06...

This showed that there were some quite productive grants with percentile scores at or worse than the 30th percentile.

5/13
November 14, 2025 at 7:27 PM
Reposted by Blake Richards
The killing of letters of reference might be one positive of generative AI.
From my discussions with other faculty, the use of generative AI I hear about the most is writing reference letters.

What's the point of having reference letters anymore if everyone is just having them written by machine?
November 14, 2025 at 7:15 PM
This raises what I like to call the "AI test for tasks".

If many people use AI to do task X, then that tells you that task X is actually just a brainless administrative exercise.

Any such task should probably be eliminated, and if that's not an option, modified to make automation even easier.
There was never any point to having reference letters. That's why we've all started using AI to do this nonsense task.

References should only be used for short-listed candidates for important positions/awards, and ideally, be done via a call to get the most honest opinion possible.
From my discussions with other faculty, the use of generative AI I hear about the most is writing reference letters.

What's the point of having reference letters anymore if everyone is just having them written by machine?
November 14, 2025 at 7:14 PM
There was never any point to having reference letters. That's why we've all started using AI to do this nonsense task.

References should only be used for short-listed candidates for important positions/awards, and ideally, be done via a call to get the most honest opinion possible.
From my discussions with other faculty, the use of generative AI I hear about the most is writing reference letters.

What's the point of having reference letters anymore if everyone is just having them written by machine?
November 14, 2025 at 7:10 PM
Reposted by Blake Richards
A bad thing is unfolding at NIH this week: It looks like the Trump administration is trying to replace key civil servant scientific leaders, the Institute Directors, with political hires. These directors control the NIH budget, tens of billions.

A bit of a video explainer here: 1/ 🧪
November 13, 2025 at 10:31 PM
Reposted by Blake Richards
🧪Preprint!
How foragers depart from optimal models can tell us a lot about how they compute their decisions.

A strong but underexplored departure is that foragers vary widely in when they leave identical patches.

A 🧵
doi.org/10.1101/2025...

With
@emmavscholey.bsky.social @brainapps.bsky.social
November 12, 2025 at 4:32 PM
Reposted by Blake Richards
I am super happy to share that our project on training biophysical models with Jaxley is now published in Nature Methods: www.nature.com/articles/s41...
Jaxley: differentiable simulation enables large-scale training of detailed biophysical models of neural dynamics - Nature Methods
Jaxley is a versatile platform for biophysical modeling in neuroscience. It allows efficiently simulating large-scale biophysical models on CPUs, GPUs and TPUs. Model parameters can be optimized with ...
www.nature.com
November 13, 2025 at 12:38 PM