Adam Morgan
@adumbmoron.bsky.social
840 followers 440 following 44 posts
Postdoc at NYU using ECoG to study how the brain translates from thought to language. On the job market! 🏳️‍🌈🏳️‍⚧️🗳️ he/him https://adam-milton-morgan.github.io/
Reposted by Adam Morgan
liinapy.bsky.social
Spectacular talk by SNL Early Career Award winner Esti Blanco Elorrieta! Much NeLLab pride, congratulations Esti! 🎉🎉 #SNL2025 @snlmtg.bsky.social
adumbmoron.bsky.social
There’s lots more work to be done here, including tinkering with prompts and model parameters, and extending the pipeline to freely available LLMs. In the meantime, we hope this is useful to folks and complements existing tools with something new: fast, scalable, and customizable VFF estimation.
adumbmoron.bsky.social
📌 against Gahl et al.'s (2004) manually annotated (i.e. gold-standard) VFFs
📌 against preferences for competing frames (the dative alternation and NP/SC ambiguity) 🧵6/8
Evaluating the LLM's, benepar's, and the Stanford Parser's VFF estimates by comparison to Gahl et al.'s (2004) database. The LLM produced the best fit, across 7 different verb frames.
adumbmoron.bsky.social
We benchmarked it thoroughly. The LLM consistently outperformed benepar & the Stanford Parser:
📌 300 human-annotated sentences (LLM accuracy = 79%, vs. 69% for benepar and 59% for Stanford) 🧵5/8
Accuracy for the GPT-4o (LLM) parser, Berkeley Neural Parser (benepar), and Stanford Parser on three manually-parsed verbs. The LLM consistently showed higher agreement with manual parses.
adumbmoron.bsky.social
That’s particularly exciting because existing datasets don’t scale well: they’re hard to adapt to new verbs, contexts, or languages as experimental needs change. Our pipeline is simple, scalable, and adaptable. We release the full code + VFF norms for 476 English verbs. 🧵4/8
adumbmoron.bsky.social
So we got creative and tried asking an LLM to parse a bunch of sentences. As it turns out, not only did this work, but the LLM outperformed both the Stanford Parser and the Berkeley Neural Parser (benepar), a state-of-the-art deep-learning parser trained on treebanks. 🧵3/8
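As a rough illustration of that approach (a minimal sketch, not the paper's actual prompt, parameters, or pipeline), a single frame-labeling query to GPT-4o might look something like this, assuming the OpenAI Python client and a hypothetical set of frame labels:

```python
# Minimal sketch (not the paper's actual prompt or pipeline): ask GPT-4o to
# label the syntactic frame of a target verb in one sentence at a time.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FRAMES = ["intransitive", "transitive", "prepositional object",
          "double object", "sentential complement"]  # hypothetical label set

def classify_frame(sentence: str, verb: str) -> str:
    """Return the LLM's one-word frame label for `verb` in `sentence`."""
    prompt = (
        f"Sentence: {sentence}\n"
        f"Target verb: {verb}\n"
        f"Which syntactic frame does the verb appear in? "
        f"Answer with exactly one of: {', '.join(FRAMES)}."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the labels as deterministic as possible
    )
    return response.choices[0].message.content.strip().lower()

print(classify_frame("The vase broke.", "broke"))  # expected: intransitive
```

In a pipeline like the one described in the thread, calls of this kind would be run over many corpus sentences per verb and the returned labels tallied into frame frequencies.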
adumbmoron.bsky.social
We needed syntactic norms for an experiment -- specifically Verb Frame Frequencies (VFFs), or how often particular verbs appear in different syntactic frames (e.g., intransitive, prepositional object, etc.). Nothing in the literature quite fit. 🧵2/8
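To make the quantity concrete, here is a toy illustration (with invented counts, not the released norms) of how frame labels for a verb turn into VFFs, i.e. the proportion of a verb's occurrences that fall in each frame:

```python
# Toy illustration of Verb Frame Frequencies: the share of a verb's
# occurrences in each syntactic frame (the parses below are made up).
from collections import Counter

parses = [("give", "double object"), ("give", "prepositional object"),
          ("give", "prepositional object"), ("break", "intransitive"),
          ("break", "transitive")]

counts = Counter(parses)                      # (verb, frame) -> count
totals = Counter(verb for verb, _ in parses)  # verb -> total count

vff = {(verb, frame): n / totals[verb] for (verb, frame), n in counts.items()}
print(vff[("give", "prepositional object")])  # 2/3 ≈ 0.67
```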
adumbmoron.bsky.social
Thank you, Florence!!
adumbmoron.bsky.social
P.S. Yes, we know, Frankenstein wasn't the monster's name. 🤣
adumbmoron.bsky.social
More broadly, the field has largely assumed that the representations we study with single word production tasks are the same as those involved in sentences. By successfully using models trained on picture naming to decode words in sentences, we verify this 🔑 point. 🧵8/9
adumbmoron.bsky.social
These findings show that word processing doesn't always look like it does in picture naming: it depends on task demands. This complexity may even help explain why languages globally prefer placing subjects before objects! 🧵7/9
adumbmoron.bsky.social
We took a closer look at what was going on in prefrontal cortex. This revealed that the sustained representations traced back to different regions depending on a word's sentence position: subjects were encoded in IFG, while objects were encoded in MFG. 🧵6/9
Density plots for the number of detections of subjects (left) and objects (right) during the production of subjects and objects in passive sentences, split by two prefrontal regions: IFG (top) and MFG (bottom). IFG sustained representations of subjects throughout both words while MFG sustained representations of objects.
adumbmoron.bsky.social
In passive sentences like "Frankenstein was hit by Dracula", we observed sustained neural activity encoding BOTH nouns simultaneously throughout the entire utterance. This was particularly true in prefrontal cortex. 🧵5/9
Decoding results from middle frontal gyrus during passive sentences showed sustained encoding of the object noun.
adumbmoron.bsky.social
For straightforward active sentences ("Dracula hit Frankenstein"), the brain activated words sequentially, matching their spoken order. But things changed dramatically for more complex sentences... 🧵4/9
Decoding results from sensorimotor cortex for active sentences: the subject noun is predicted above chance while it is being said, and the object noun while it is being said.
adumbmoron.bsky.social
We trained machine learning classifiers to identify each word's specific neural pattern. 🔑 We ONLY used data from picture naming (single word production) to train the models. We then used the models to predict what word patients were saying in real time as they said sentences. 🧵3/9
Word-specific patterns of neural activity: electrodes that selectively responded to each of the six words.
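A minimal sketch of the cross-task decoding logic described above, with made-up arrays standing in for the ECoG features and scikit-learn standing in for whatever classifier was actually used: fit a word classifier on picture-naming trials, then apply it to activity recorded during sentence production.

```python
# Sketch of the cross-task decoding logic only (not the paper's actual model
# or features): train on picture-naming trials, test on sentence production.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: trial-by-electrode feature matrices and word labels.
X_naming = rng.normal(size=(120, 64))    # picture-naming trials
y_naming = rng.integers(0, 6, size=120)  # 6 words (e.g., Dracula, Frankenstein, ...)
X_sentence = rng.normal(size=(40, 64))   # time windows from sentence production

clf = LogisticRegression(max_iter=1000).fit(X_naming, y_naming)

# Predicted word identity at each sentence time window; with real data these
# probabilities would be compared against chance to test for word encoding.
word_probs = clf.predict_proba(X_sentence)
print(word_probs.shape)  # (40, number of word classes)
```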
adumbmoron.bsky.social
We recorded brain activity directly from cortex in neurosurgical patients (ECoG) while they used 6 words in two tasks: picture naming ("Dracula") and scene description ("Dracula hit Frankenstein"). 🧵2/9
Task screenshots (picture naming: a cartoon picture of Frankenstein; scene description: cartoon image of Dracula hitting Frankenstein) and mean neural activity per word for one electrode in middle temporal gyrus.
adumbmoron.bsky.social
Wow, thanks Laurel! Honestly one of the best compliments I’ve ever gotten given the quality of the other talks!