Yevgeni Berzak
@whylikethis.bsky.social
740 followers 550 following 20 posts
Assistant Prof. at the Technion. Computational Psycholinguistics, NLP, Cognitive Science. https://lacclab.github.io/
Reposted by Yevgeni Berzak
tomerullman.bsky.social
It's officially been 75 years since the proposal of the Turing Test, a good time to bring up 'The Minimal Turing Test':

www.sciencedirect.com/science/arti...
Reposted by Yevgeni Berzak
tomerullman.bsky.social
My friend/colleague Frank Jäkel wrote a book on AI. I sadly don't know German, but I happily know Frank, and I've heard him talk about this for a while now; just on that basis, I'd recommend that the German speakers in the audience check it out.
Reposted by Yevgeni Berzak
tomerullman.bsky.social
Out now in TiCS, something I've been thinking about a lot:

"Physics vs. graphics as an organizing dichotomy in cognition"

(by Balaban & me)

relevant for many people, related to imagination, intuitive physics, mental simulation, aphantasia, and more

authors.elsevier.com/a/1lBaC4sIRv...
Reposted by Yevgeni Berzak
mcxfrank.bsky.social
If you haven't been looking recently at the Open Encyclopedia of Cognitive Science (oecs.mit.edu), here's your reminder that we are a free, open access resource for learning about the science of mind.

Today we are launching our new Thematic Collections to organize our growing set of articles!
OECS thematic collections.
whylikethis.bsky.social
👁️‍🗨️ 4 sub-corpora: 📖 reading for comprehension, 🔎📖 information seeking, 📖📖 repeated reading, 🔎📖📖 information seeking in repeated reading.

🏋🏽 Text difficulty level manipulation: reading original and simplified texts.

👌 High quality recordings with an EyeLink 1000 Plus eye tracker.
whylikethis.bsky.social
👥 360 participants (English L1) & 152 hours of eye movement recordings - more data than all the publicly available English L1 eye tracking corpora combined!

🗞️ 30 newswire articles in English (162 paragraphs) with reading comprehension questions and auxiliary text annotations.
whylikethis.bsky.social
👀 📖 Big news! 📖 👀
Happy to announce the release of the OneStop Eye Movements dataset! 🎉 🎉
OneStop is the product of over 6 years of experimental design, data collection, and data curation.
github.com/lacclab/OneS...
Reposted by Yevgeni Berzak
shravanvasishth.bsky.social
In person (no streaming/zoom) sentence processing workshop at Potsdam with Tal Linzen, Brian Dillon, Titus von der Malsburg, Oezge Bakay, William Timkey, Pia Schoknecht, Michael Vrazitulis, and Johan Hennert:

vasishth.github.io/sentproc-wor...
Sentence processing workshop, May 27, 2025
vasishth.github.io
Reposted by Yevgeni Berzak
rtommccoy.bsky.social
🤖🧠 Paper out in Nature Communications! 🧠🤖

Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths?

Our answer: Use meta-learning to distill Bayesian priors into a neural network!

www.nature.com/articles/s41...

1/n
A schematic of our method. On the left are shown Bayesian inference (visualized using Bayes’ rule and a portrait of the Reverend Bayes) and neural networks (visualized as a weight matrix). Then, an arrow labeled “meta-learning” combines Bayesian inference and neural networks into a “prior-trained neural network”, described as a neural network that has the priors of a Bayesian model – visualized as the same portrait of Reverend Bayes but made out of numbers. Finally, an arrow labeled “learning” goes from the prior-trained neural network to two examples of what it can learn: formal languages (visualized with a finite-state automaton) and aspects of English syntax (visualized with a parse tree for the sentence “colorless green ideas sleep furiously”).
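A minimal sketch of the idea, under toy assumptions of my own (a Beta prior over a coin's bias, a fixed-length context, and a tiny feed-forward net; none of this is the paper's actual model or code): each meta-training episode samples a task from the prior, and across many episodes the network's next-flip predictions converge toward the Bayesian posterior predictive.

```python
# Illustrative sketch only: meta-learning distills a Beta prior over coin
# bias into a small neural network. Each episode samples a bias from the
# prior, generates flips from that coin, and trains the net to predict the
# next flip; over episodes the net internalizes the prior.
import torch
import torch.nn as nn

torch.manual_seed(0)
A, B, SEQ_LEN = 2.0, 2.0, 10          # hypothetical Beta(2, 2) prior, 10 flips

net = nn.Sequential(nn.Linear(SEQ_LEN, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    # Sample a batch of tasks from the prior: a bias, then flips of that coin.
    theta = torch.distributions.Beta(A, B).sample((64,))
    flips = torch.bernoulli(theta.unsqueeze(1).expand(-1, SEQ_LEN + 1))
    context, target = flips[:, :SEQ_LEN], flips[:, SEQ_LEN]
    # Train the net to predict the next flip from the observed context.
    logits = net(context).squeeze(1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, target)
    opt.zero_grad(); loss.backward(); opt.step()

# Compare against the Bayesian posterior predictive: for k heads in n flips
# under a Beta(a, b) prior, it is (a + k) / (a + b + n).
ctx = torch.tensor([[1., 1., 1., 0., 1., 1., 0., 1., 1., 1.]])  # 8 heads / 10
print("net:", torch.sigmoid(net(ctx)).item(),
      "bayes:", (A + 8) / (A + B + SEQ_LEN))
```

The same recipe extends to richer priors by swapping the task sampler and replacing the fixed-length input with a sequence model.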
Reposted by Yevgeni Berzak
rtommccoy.bsky.social
Made a new assignment for a class on Computational Psycholinguistics:
- I trained a Transformer language model on sentences sampled from a PCFG
- The students' task: Given the Transformer, try to infer the PCFG (w/ a leaderboard for who got closest)

Would recommend!

1/n
On the left is a probabilistic context free grammar (PCFG). On the right is an image of the Transformer architecture. There are arrows going back and forth between the PCFG and the Transformer, showing how the assignment goes back and forth between them.
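A hedged reconstruction of the setup, not the actual class materials: the grammar, its rule probabilities, and the vocabulary below are all invented for illustration. The sketch samples a training corpus from a toy PCFG; the students' task runs in the other direction, recovering the rule probabilities from a Transformer trained on such a corpus.

```python
# Illustrative sketch only: sample sentences from a toy PCFG to build a
# training corpus for a small Transformer LM. The grammar below is made up.
import random

random.seed(0)
# Each nonterminal maps to a list of (expansion, probability) pairs.
PCFG = {
    "S":  [(["NP", "VP"], 1.0)],
    "NP": [(["the", "N"], 0.7), (["NP", "PP"], 0.3)],
    "VP": [(["V", "NP"], 0.6), (["V"], 0.4)],
    "PP": [(["P", "NP"], 1.0)],
    "N":  [(["dog"], 0.5), (["cat"], 0.5)],
    "V":  [(["saw"], 0.5), (["slept"], 0.5)],
    "P":  [(["near"], 1.0)],
}

def sample(symbol="S"):
    """Recursively expand a nonterminal into a list of terminal words."""
    if symbol not in PCFG:                      # terminal: emit the word
        return [symbol]
    expansions, probs = zip(*PCFG[symbol])
    rhs = random.choices(expansions, weights=probs)[0]
    return [w for sym in rhs for w in sample(sym)]

corpus = [" ".join(sample()) for _ in range(10000)]
# Train any small Transformer LM on `corpus`; the assignment is then to
# estimate its next-word distributions from prefixes and infer the rule
# probabilities above from those distributions alone.
print(corpus[:3])
```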
Reposted by Yevgeni Berzak
jennhu.bsky.social
Check out our new work on introspection in LLMs! 🔍

TL;DR we find no evidence that LLMs have privileged access to their own knowledge.

Beyond the study of LLM introspection, our findings inform an ongoing debate in linguistics research: prompting (e.g., grammaticality judgments) ≠ probability measurement!
siyuansong.bsky.social
New preprint w/ @jennhu.bsky.social @kmahowald.bsky.social : Can LLMs introspect about their knowledge of language?
Across models and domains, we did not find evidence that LLMs have privileged access to their own predictions. 🧵(1/8)
Reposted by Yevgeni Berzak
tomerullman.bsky.social
it's only Consciousness if it comes from the Consciousness region of the brain, otherwise it's just sparkling attention
Reposted by Yevgeni Berzak
tomerullman.bsky.social
new preprint on Theory of Mind in LLMs, a topic I know a lot of people care about (I care. I'm part of people):

"Re-evaluating Theory of Mind evaluation in large language models"

(by Hu* @jennhu.bsky.social , Sosa, and me)

link: arxiv.org/pdf/2502.21098
Reposted by Yevgeni Berzak
guydav.bsky.social
Out today in Nature Machine Intelligence!

From childhood on, people can create novel, playful, and creative goals. Models have yet to capture this ability. We propose a new way to represent goals and report a model that can generate human-like goals in a playful setting... 1/N
Reposted by Yevgeni Berzak
tomerullman.bsky.social
Hello! I'm looking to hire a post-doc, to start this Summer or Fall.

It'd be great if you could share this widely with people you think might be interested.

More details on the position & how to apply: bit.ly/cocodev_post...

Official posting here: academicpositions.harvard.edu/postings/14723
Reposted by Yevgeni Berzak
evfedorenko.bsky.social
Our language neuroscience lab (evlab.mit.edu) is looking for a new lab manager/FT RA to start in the summer. Apply here: tinyurl.com/3r346k66 We'll start reviewing apps in early Mar. (Unfortunately, MIT does not sponsor visas for these positions, but OPT works.)
EvLab
Our research aims to understand how the language system works and how it fits into the broader landscape of the human mind and brain.
evlab.mit.edu
whylikethis.bsky.social
The 3rd Workshop on Eye Movements and the Assessment of Reading Comprehension will take place on June 5–7, 2025 at the University of Stuttgart!
Submit an abstract by March 1st and join us!
tmalsburg.github.io/Comprehensio...
The 3rd Workshop on Eye Movements and the Assessment of Reading Comprehension, June 5–7, 2025, University of Stuttgart
tmalsburg.github.io
Reposted by Yevgeni Berzak
evfedorenko.bsky.social
So excited to receive the Troland Award!! Huge congrats to the other winner—Nick Turk-Browne! And TY, as always, to my mentors & nominators, to my amazing labbies past & present, and to all the wonderful and supportive colleagues in our broader scientific community. <3 www.nasonline.org/award/trolan...
Troland Research Award – NAS
Two Troland Research Awards of $75,000 are given annually to recognize unusual achievement by early-career researchers (preferably 45 years of age or younger) and to further empirical research within ...
www.nasonline.org
whylikethis.bsky.social
Fantastic resource!
linguistbrian.bsky.social
Happy to share that Liz Schotter and I have just published a beginner-level tutorial introduction to eye-tracking-while-reading studies in Behavior Research Methods:

link.springer.com/article/10.3...
link.springer.com