Sam Gershman
gershbrain.bsky.social
Professor, Department of Psychology and Center for Brain Science, Harvard University
https://gershmanlab.com/
If you work at the intersection of computational neuroscience and machine learning, consider applying for this postdoc position (January 2027 start date):
academicpositions.harvard.edu/postings/15868
An opportunity to work with a great group of people across Harvard, MIT, and UC Berkeley.
February 10, 2026 at 7:36 PM
"data available upon reasonable request"
February 7, 2026 at 12:16 PM
The lesson for all the students out there is that science is a community project. Most of us make individually small contributions to this project. Success is measured at the collective level. Many of our professional (and personal) dysfunctions could be fixed by more fully embracing this view.
February 7, 2026 at 11:20 AM
In fairness to Grossberg, he made valuable contributions to computational neuroscience. The problem is that his grievances alienated all the people who would otherwise have appreciated his contributions. I see this as a cautionary tale about what happens when scientists are obsessed with credit.
February 7, 2026 at 11:20 AM
When I started grad school, my advisor counseled me that I'd know I'd made it in neuroscience when Grossberg accused me of stealing his ideas. Probably his advisor told him the same thing!
February 6, 2026 at 5:27 PM
Connectionists mailing list.
February 6, 2026 at 5:24 PM
If I were him, what I would be worried about is other people getting called "the Stephen Grossberg of X," which would probably mean someone whose genuine scientific contributions are overshadowed by their narcissism and shameless self-aggrandizement.
February 6, 2026 at 5:16 PM
Stephen Grossberg explaining why he is both the Newton *and* the Einstein of the mind. Why not also the Lavoisier, Curie, and Darwin of the mind? 🙄
February 6, 2026 at 5:16 PM
Reposted by Sam Gershman
We are hiring next year in semantics. One-year positions are of course less exciting and less good for the field than straight-up tenure-track postings, but on the other hand, hiring in something that isn't computational feels like a win in linguistics these days. Come bring some innovative takes on meaning!
Lecturer in Linguistics (Semantics)
The Department of Linguistics seeks applications for a lecturer in linguistics, with a focus on teaching and advising in formal semantics. The ability to teach and advise students in another area in a...
academicpositions.harvard.edu
February 6, 2026 at 1:48 PM
Reposted by Sam Gershman
🚨 Hiring in Munich 🇩🇪: 2 open-topic PhD positions in human & machine learning (TVöD E13 80%).
Start ~June 2026 (flexible). Deadline: March 2, 2026.
Apply/info: hcai-munich.com/PhD_Job_Ad.pdf
Reposts appreciated 🙏
February 5, 2026 at 1:03 PM
🤣 totally!
ALT: a close up of a person's face with the words in the computer below
February 6, 2026 at 11:54 AM
Given how secretive tech companies are these days (supposedly only publishing the stuff that doesn't work well), can we infer the most cutting-edge algorithms from what is *not* published?
February 6, 2026 at 11:06 AM
My lab is looking to recruit 1-2 paid summer interns to do wet lab work on learning in a unicellular organism (Stentor coeruleus). You can apply here:
forms.gle/b47WpSobjFjo...
February 5, 2026 at 11:21 AM
Reposted by Sam Gershman
We are hiring a research specialist, to start this summer! This position would be a great fit for individuals looking to get more experience in computational and cognitive neuroscience research before applying to graduate school. #neurojobs Apply here: research-princeton.icims.com/jobs/21503/r...
Careers | Human Resources
research-princeton.icims.com
February 4, 2026 at 1:12 PM
This plot is a nice illustration of what the system (blue diamond) can accomplish relative to throwing LLMs directly at the problem: a high success rate at much lower computational cost. A reasoning LLM (red diamond) achieves a decent (though still lower) success rate, but requires twice as much computation.
February 3, 2026 at 10:26 AM
A new and improved version of TheoryCoder, which learns to play video games in a human-like way by synthesizing both high-level abstractions and a low-level model of game mechanics:
arxiv.org/abs/2602.00929
Learning Abstractions for Hierarchical Planning in Program-Synthesis Agents
Humans learn abstractions and use them to plan efficiently to quickly generalize across tasks -- an ability that remains challenging for state-of-the-art large language model (LLM) agents and deep rei...
arxiv.org
February 3, 2026 at 10:26 AM
A real-world, high-stakes case study of policy compression:
osf.io/preprints/ps...
Clinicians in the emergency department exhibit near-optimal adaptation to varying cognitive demands.
February 2, 2026 at 9:52 PM
Reposted by Sam Gershman
#JNeurosci: Results from Hall-McMaster, @nicoschuck.bsky.social, @gershbrain.bsky.social et al suggest the entorhinal cortex might highlight aspects of past experiences that can be generalized, allowing us to make effective decisions in new environments https://doi.org/10.1523/JNEUROSCI.1492-25.2025
January 30, 2026 at 2:25 PM
Reposted by Sam Gershman
RA job alert. We are looking for a fearless experimentalist to join our experiments on cortical neuromodulators during visually guided learning, led by @michael-lynn.bsky.social. Opportunity to learn 2p imaging, design and conduct experiments. RT plz. Apply here: my.corehr.com/pls/uoxrecru...
Job Details
my.corehr.com
January 30, 2026 at 1:43 PM
Reposted by Sam Gershman
now accepted at ICLR! 🐺🥳🐺

arxiv.org/abs/2506.20666
January 27, 2026 at 2:55 PM
Thanks, Nancy!! I'm teaching it this semester, so your students can take my class if they're interested.
January 25, 2026 at 12:16 PM
I'm gradually populating this page with additional resources: lecture slides, projects, code. As always, please let me know if you have any feedback!
With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.

My hope is that this will be a living document, continuously improved as I get feedback.
January 24, 2026 at 11:27 AM
And of course please correct me if I've misunderstood anything!
January 23, 2026 at 12:18 PM
I'm really glad Jeong et al did this work. It's a valuable empirical contribution and will stimulate new theoretical work, even if I'm not yet completely persuaded by their interpretation.
January 23, 2026 at 12:18 PM