Olivier Codol
@oliviercodol.bsky.social
Neuroscience, RL for motor learning, neural control of movement, NeuroAI.
Opinions stated here are my own, not those of my employer.
I’m trying really hard to narrow down who is behind this every-Montreal-cycling-lanes masterpiece of a suit
December 15, 2025 at 11:32 PM
Wow, and in winter, which is even more beautiful!
November 25, 2025 at 1:53 AM
As always, thank you to my kind friends and mentors along the way, who make my journey not only possible but also fun and fulfilling.
November 12, 2025 at 5:12 PM
In my free time, I am wrapping up (a lot of) work and projects with former colleagues and friends. I will share these as they come out, so stay tuned!
November 12, 2025 at 5:12 PM
While I'm sad to step away from my full-time academic work, the first few months have been fantastic—I'm enjoying doing exciting research at a scale only possible in such an ambitious team and company. There's a lot to learn, and I'm grateful to my welcoming colleagues for enabling this experience.
November 12, 2025 at 5:12 PM
Yes! The advantages are much clearer with respect to neural computation (memory, expressivity, and gradient propagation) than for exploration per se.
November 7, 2025 at 4:09 AM
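To unpack the neural-computation point above, here's a toy sketch (my own illustration with made-up numbers, not from our paper). In a linearized RNN, the gradient dh_T/dh_0 is the T-fold product of the recurrent weights, so its norm vanishes or explodes unless the recurrent gain sits near 1, i.e., the edge-of-chaos regime:

```python
# Toy illustration (my own sketch, not from our paper): in a linear(ized) RNN
# h_t = W h_{t-1}, the gradient dh_T/dh_0 is W applied T times, so its norm is
# governed by the spectral radius of W. Away from ~1 it vanishes or explodes;
# near 1 ("edge of chaos") it is preserved, helping memory and credit assignment.
import numpy as np

rng = np.random.default_rng(0)
n, T = 100, 50  # network size, number of time steps

for gain in [0.5, 1.0, 1.5]:
    W = gain * rng.standard_normal((n, n)) / np.sqrt(n)  # spectral radius ~ gain
    J = np.linalg.matrix_power(W, T)                     # dh_T / dh_0
    print(f"gain={gain}: ||dh_T/dh_0|| after {T} steps = {np.linalg.norm(J, 2):.2e}")
```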
Learning through motor noise (exploration) is well documented in humans (lots of cool work from Shadmehr and @olveczky.bsky.social), but the scale is rather small. Here, if the dynamical regime helps exploration, I'd say it should be within these scales as well.
November 7, 2025 at 3:27 AM
That being said, this is not how we move (execute movements), and in that sense this is a model of learning rather than control.
November 7, 2025 at 3:17 AM
I would say yes, it's possible. Particularly because a deviation is carried over instead of collapsing back, so the filtering effect of nonlinear muscle activation will not impact it as much as it does white noise.
November 7, 2025 at 3:16 AM
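To make the filtering point concrete, a quick toy check (my own sketch, with arbitrary numbers): a first-order low-pass filter, standing in for muscle activation dynamics, strongly attenuates white noise but passes a carried-over (random-walk) deviation nearly untouched.

```python
# Quick toy check of the point above (my own sketch, numbers are made up):
# muscle activation behaves roughly like a low-pass filter, which strongly
# attenuates white noise but passes a slowly drifting, carried-over deviation.
import numpy as np

rng = np.random.default_rng(1)
T, alpha = 10_000, 0.2  # time steps; filter coefficient (dt / tau)

white = rng.standard_normal(T)                      # collapses back every step
carried = 0.05 * np.cumsum(rng.standard_normal(T))  # deviation carried over
carried -= carried.mean()

def low_pass(u, alpha):
    """First-order filter, a crude stand-in for muscle activation dynamics."""
    a = np.zeros_like(u)
    for t in range(1, len(u)):
        a[t] = a[t - 1] + alpha * (u[t] - a[t - 1])
    return a

for name, u in [("white noise", white), ("carried-over", carried)]:
    retained = low_pass(u, alpha).std() / u.std()
    print(f"{name}: {retained:.0%} of the perturbation survives the filter")
```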
As in, whether the edge-of-chaos regime is a consequence of RL's need for exploration, or a cause of it?
November 7, 2025 at 3:02 AM
As always a huge thank you to my colleagues and supervisors @glajoie.bsky.social @mattperich.bsky.social and @nandahkrishna.bsky.social for helping make this work what it is—and making the journey so fun and interesting
November 6, 2025 at 2:14 AM
We’re pleased to see that RL's role in neural plasticity is increasingly in focus in the motor control community (check out @adrianhaith.bsky.social's latest piece!)
I strongly believe motor learning sits at the interface of many plasticity mechanisms, and RL is an important piece of this puzzle.
New Pre-Print:
www.biorxiv.org/cgi/content/...

We’re all familiar with having to practice a new skill to get better at it, but what really happens during practice? The answer, I propose, is reinforcement learning - specifically policy-gradient reinforcement learning.

Overview 🧵 below...
Policy-Gradient Reinforcement Learning as a General Theory of Practice-Based Motor Skill Learning
Mastering any new skill requires extensive practice, but the computational principles underlying this learning are not clearly understood. Existing theories of motor learning can explain short-term ad...
www.biorxiv.org
November 6, 2025 at 2:10 AM
Alongside the above, we add discussion points that I hope will clarify our stance on RL in neuroscience, and we acknowledge important past work that we believe our study complements. We also add several important controls (particularly Figs. S8 and S14). Feel free to check it all out!
November 6, 2025 at 2:10 AM
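For anyone wanting the core idea of policy-gradient learning in one screen, here's a deliberately minimal REINFORCE-style sketch (my own toy, not the model from either paper): a Gaussian policy over a 2D reach endpoint, where motor noise provides the exploration and reward-weighted deviations shift the policy mean toward the target.

```python
# Minimal REINFORCE-style sketch (my own toy example, not the model from the
# pre-print): practice as policy-gradient RL. Motor noise supplies exploration,
# and reward-weighted deviations shift the policy mean toward the target.
import numpy as np

rng = np.random.default_rng(2)
target = np.array([0.3, 0.5])   # hypothetical 2D reach target
mu = np.zeros(2)                # policy mean: the "skill" being practiced
sigma, lr = 0.1, 0.05           # motor noise scale, learning rate
baseline = 0.0                  # running reward average (variance reduction)

for trial in range(5000):
    action = mu + sigma * rng.standard_normal(2)  # noisy execution = exploration
    reward = -np.sum((action - target) ** 2)      # closer to target is better
    # grad of log N(action; mu, sigma^2 I) w.r.t. mu is (action - mu) / sigma^2
    mu += lr * (reward - baseline) * (action - mu) / sigma**2
    baseline += 0.05 * (reward - baseline)        # track average reward

print(f"policy mean after practice: {mu.round(3)} (target: {target})")
```

The structural point: no task error gradient is required, only a scalar reward, which is part of what makes this family of learning rules biologically appealing.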