James Michaelov
@jamichaelov.bsky.social
4.2K followers 520 following 35 posts
Postdoc at MIT. Research: language, the brain, NLP. jmichaelov.com
Reposted by James Michaelov
catherinearnett.bsky.social
I’m in Vienna all week for @aclmeeting.bsky.social and I’ll be presenting this paper on Wednesday at 11am (Poster Session 4 in HALL X4 X5)! Reach out if you want to chat about multilingual NLP, tokenizers, and open models!
catherinearnett.bsky.social
✨New pre-print✨ Crosslingual transfer allows models to leverage their representations for one language to improve performance on another language. We characterize the acquisition of shared representations in order to better understand how and when crosslingual transfer happens.
jamichaelov.bsky.social
In the most extreme case, LMs assign sentences such as ‘the car was given a parking ticket by the explorer’ (unlikely but possible event) a lower probability than ‘the car was given a parking ticket by the brake’ (animacy-violating event, semantically related final word) over half of the time. 2/3
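For anyone who wants to try this kind of comparison themselves, here is a minimal sketch using Hugging Face transformers. It is not the paper’s code; the model choice (“gpt2”) and the scoring details are illustrative assumptions.

```python
# Illustrative sketch, not the paper's code: comparing the total log
# probability a causal LM assigns to two sentences. "gpt2" is a stand-in
# model choice.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_logprob(sentence: str) -> float:
    """Sum of token log probabilities (in nats) under the model."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=ids makes the model return the mean cross-entropy
        # over the predicted tokens
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

possible = "The car was given a parking ticket by the explorer."
violating = "The car was given a parking ticket by the brake."
# True would mean the model prefers the possible event, as one would hope
print(sentence_logprob(possible) > sentence_logprob(violating))
```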
jamichaelov.bsky.social
New paper accepted at ACL Findings! TL;DR: While language models generally assign higher probabilities to sentences describing possible events than to impossible (animacy-violating) ones, this pattern is not robust for generally unlikely events and is affected by semantic relatedness. 1/3
Reposted by James Michaelov
catherinearnett.bsky.social
My paper with @tylerachang.bsky.social and @jamichaelov.bsky.social will appear at #ACL2025NLP! The updated preprint is available on arxiv. I look forward to chatting about bilingual models in Vienna!
catherinearnett.bsky.social
✨New pre-print✨ Crosslingual transfer allows models to leverage their representations for one language to improve performance on another language. We characterize the acquisition of shared representations in order to better understand how and when crosslingual transfer happens.
Reposted by James Michaelov
catherinearnett.bsky.social
✨New pre-print✨ Crosslingual transfer allows models to leverage their representations for one language to improve performance on another language. We characterize the acquisition of shared representations in order to better understand how and when crosslingual transfer happens.
jamichaelov.bsky.social
I’ve had success using the infini-gram API for this (though it can get overloaded with user requests at times): infini-gram.io
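For reference, here is a minimal sketch of a count query against the API, based on my reading of the public docs; the endpoint, index name, and field names are assumptions and may have changed.

```python
# Minimal sketch of querying the infini-gram API for an n-gram count.
# Endpoint, index name, and payload fields follow the public docs as I
# understand them; treat all of them as assumptions.
import requests

payload = {
    "index": "v4_piletrain_llama",  # one of the hosted corpus indexes
    "query_type": "count",          # how often the n-gram occurs
    "query": "parking ticket",
}
response = requests.post("https://api.infini-gram.io/", json=payload)
print(response.json())  # e.g. {"count": ..., ...}
```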
jamichaelov.bsky.social
I don’t think this is quite what you’re looking for, but @camrobjones.bsky.social recently ran some Turing-test-style studies and found that some people believed ELIZA to be a human (and participants were asked to give reasons for their responses)
jamichaelov.bsky.social
Seems like a great initiative to have some of these location-based ones! I’d love to be added if possible!
jamichaelov.bsky.social
Excited to be at #EMNLP #EMNLP2024 this year! Especially interested in chatting about the intersection of cognitive science/psycholinguistics and AI/NLP, training dynamics, robustness/reliability, meaning, and evaluation
jamichaelov.bsky.social
If there’s still space (and you accept postdocs), could I be added?
jamichaelov.bsky.social
Thanks for creating this list - looks great! I’d love to be added if there’s still room
jamichaelov.bsky.social
If there’s still room, is there any chance you could add me to this list?
jamichaelov.bsky.social
Also, I’m going to be attending EMNLP next week - reach out if you want to meet/chat
jamichaelov.bsky.social
Anyway, excited to learn and chat about research along these lines and beyond here on Bluesky!
jamichaelov.bsky.social
Of course, none of this work would have been possible without my amazing PhD advisor Ben Bergen, and my other great collaborators: Seana Coulson, @catherinearnett.bsky.social, Tyler Chang, Cyma Van Petten, and Megan Bardolph!
jamichaelov.bsky.social
5: Recurrent models like RWKV and Mamba have recently emerged as viable alternatives to transformers. They are intuitively more cognitively plausible, but how do they compare to transformers when used to model human language processing? We find that they perform about the same overall:
Revenge of the Fallen? Recurrent Models Match Transformers at...
Transformers have generally supplanted recurrent neural networks as the dominant architecture both for natural language processing tasks and for modelling the effect of predictability on online...
openreview.net
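One convenient property of this comparison is that recent versions of Hugging Face transformers expose recurrent and transformer LMs through the same interface, so surprisal estimates can be extracted identically. A sketch, with illustrative model IDs that are assumptions rather than the paper’s exact models:

```python
# Sketch: mean per-token surprisal (in bits) from transformer and
# recurrent LMs via one shared interface. Model IDs are illustrative
# assumptions; RWKV and Mamba need a recent transformers version.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_surprisal_bits(model_name: str, text: str) -> float:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL in nats
    return loss.item() / math.log(2)        # convert nats to bits

text = "The children went outside to play in the snow."
for name in ["gpt2", "RWKV/rwkv-4-169m-pile", "state-spaces/mamba-130m-hf"]:
    print(name, round(mean_surprisal_bits(name, text), 2))
```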
jamichaelov.bsky.social
4: Is the N400 sensitive only to the predicted probability of the stimuli encountered, or also the predicted probability of alternatives? We revisit this question with state-of-the-art NLP methods, with the results supporting the former hypothesis:
Ignoring the alternatives: The N400 is sensitive to stimulus preactivation alone
The N400 component of the event-related brain potential is a neural signal of processing difficulty. In the language domain, it is widely believed to …
www.sciencedirect.com
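To make the contrast concrete, here is a sketch (not the paper’s code) of the two quantities at issue: the surprisal of the word actually encountered, which depends only on that word’s predicted probability, and the entropy of the next-word distribution, which depends on all the alternatives. The model and example sentence are placeholders.

```python
# Illustrative sketch of stimulus-specific vs. alternatives-sensitive
# predictive quantities. "gpt2" and the sentence are stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

context = "He took a sip of his hot"
target = " coffee"  # leading space so GPT-2 treats it as one word token
ctx_ids = tokenizer(context, return_tensors="pt").input_ids
target_id = tokenizer(target).input_ids[0]

with torch.no_grad():
    logits = model(ctx_ids).logits[0, -1]  # next-token logits
probs = torch.softmax(logits, dim=-1)

surprisal = -torch.log2(probs[target_id])     # stimulus alone
entropy = -(probs * torch.log2(probs)).sum()  # over all alternatives
print(f"surprisal={surprisal:.2f} bits, entropy={entropy:.2f} bits")
```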
jamichaelov.bsky.social
3: The N400, a neural index of language processing, is highly sensitive to the contextual probability of words. But to what extent can lexical prediction explain other N400 phenomena? Using GPT-3, we show that it can implicitly account for both semantic similarity and plausibility effects:
Strong Prediction: Language Model Surprisal Explains Multiple N400 Effects
Abstract. Theoretical accounts of the N400 are divided as to whether the amplitude of the N400 response to a stimulus reflects the extent to which the stimulus was predicted, the extent to which the s...
doi.org
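For readers outside the field: the language model quantity at issue here is surprisal, standardly defined as the negative log probability of a word given its context.

```latex
% Standard definition of surprisal: how unexpected word w_t is
% given the preceding context, measured in bits.
S(w_t) = -\log_2 P(w_t \mid w_1, \ldots, w_{t-1})
```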
jamichaelov.bsky.social
If you’re interested in hearing more of my thoughts on this topic, check out this article in Communications of the ACM by Sandrine Ceurstemont, which includes quotes from an interview with me and my co-author Ben Bergen:
Bigger, Not Necessarily Better
The inverse scaling issue means larger LLMs sometimes handle things less well.
cacmb4.acm.org
jamichaelov.bsky.social
1: Training language models on more data generally improves their performance, but is this always the case? We show that inverse scaling can occur not just across models of different sizes, but also in individual models over the course of training:
Emergent Inabilities? Inverse Scaling Over the Course of Pretraining
James Michaelov, Ben Bergen. Findings of the Association for Computational Linguistics: EMNLP 2023. 2023.
aclanthology.org
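Here is a sketch of how one might track such a trajectory with a model suite that publishes intermediate checkpoints. The Pythia models and their “stepN” revision naming are assumptions about a convenient public suite, not necessarily the paper’s setup.

```python
# Sketch: evaluating the same model at several pretraining checkpoints.
# The Pythia suite (an assumption, not necessarily the paper's models)
# publishes intermediate checkpoints as Hugging Face revisions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-160m"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
text = "The trophy didn't fit in the suitcase because it was too big."
ids = tokenizer(text, return_tensors="pt").input_ids

for step in [1000, 8000, 64000, 143000]:
    model = AutoModelForCausalLM.from_pretrained(
        model_name, revision=f"step{step}"
    ).eval()
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    # If the metric gets worse as step grows, that is inverse scaling
    # over the course of pretraining.
    print(f"step {step}: mean NLL {loss.item():.3f}")
```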
jamichaelov.bsky.social
With all the new people here on Bluesky, I think it’s a good time to (re-)introduce myself. I’m a postdoc at MIT carrying out research at the intersection of the cognitive science of language and AI. Here are some of the things I’ve worked on in the last year 🧵: