Ev Fedorenko
@evfedorenko.bsky.social
6.2K followers 570 following 80 posts
I study language using tools from cognitive science and neuroscience. I also like snuggles.
Reposted by Ev Fedorenko
rodbraga.bsky.social
📣 New preprint from the Braga Lab! 📣

The ventral visual stream for reading converges on the transmodal language network

Congrats to Dr. Joe Salvo for this epic set of results

Big Q: What brain systems support the translation of writing to concepts and meaning?

Thread 🧵 ⬇️
Reposted by Ev Fedorenko
kmahowald.bsky.social
UT Austin Linguistics is hiring in computational linguistics!

Asst or Assoc.

We have a thriving group sites.utexas.edu/compling/ and a long proud history in the space. (For instance, fun fact, Jeff Elman was a UT Austin Linguistics Ph.D.)

faculty.utexas.edu/career/170793

🤘
UT Austin Computational Linguistics Research Group – Humans processing computers processing humans processing language
sites.utexas.edu
Reposted by Ev Fedorenko
jessyjli.bsky.social
All of us (@kanishka.bsky.social @kmahowald.bsky.social and me) are looking for PhD students this cycle! If computational linguistics/NLP is your passion, join us at UT Austin!

For my areas see jessyli.com
Reposted by Ev Fedorenko
neuranna.bsky.social
As our lab started to build encoding 🧠 models, we were trying to figure out best practices in the field. So @neurotaha.bsky.social built a library to easily compare design choices & model features across datasets!

We hope it will be useful to the community & plan to keep expanding it!
1/
neurotaha.bsky.social
🚨 Paper alert:
To appear in the DBM NeurIPS Workshop

LITcoder: A General-Purpose Library for Building and Comparing Encoding Models

📄 arxiv: arxiv.org/abs/2509.091...
🔗 project: litcoder-brain.github.io
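(Not the LITcoder API — just a generic, hypothetical sketch of the kind of analysis the post describes: fit a cross-validated voxelwise encoding model for each candidate feature space and compare held-out prediction accuracy. All names, shapes, and the simulated data below are assumptions for illustration, not the library's interface.)

```python
# Minimal sketch of comparing feature spaces with voxelwise encoding models.
# NOT LITcoder's API; data are simulated and names are illustrative.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Simulated data: 1000 fMRI time points (TRs), 500 voxels,
# and two candidate stimulus feature spaces of different dimensionality.
n_trs, n_voxels = 1000, 500
bold = rng.standard_normal((n_trs, n_voxels))  # brain responses
feature_sets = {
    "model_A_embeddings": rng.standard_normal((n_trs, 300)),
    "model_B_embeddings": rng.standard_normal((n_trs, 768)),
}

def encoding_score(features, bold, n_splits=5):
    """Cross-validated voxelwise encoding: ridge-regress BOLD onto features,
    return the mean correlation between predicted and held-out responses."""
    scores = []
    for train, test in KFold(n_splits=n_splits).split(features):
        model = RidgeCV(alphas=np.logspace(-2, 4, 7))
        model.fit(features[train], bold[train])
        pred = model.predict(features[test])
        # Per-voxel Pearson correlation of predicted vs. actual held-out time courses
        pred_c = pred - pred.mean(0)
        true_c = bold[test] - bold[test].mean(0)
        r = (pred_c * true_c).sum(0) / (
            np.linalg.norm(pred_c, axis=0) * np.linalg.norm(true_c, axis=0) + 1e-8
        )
        scores.append(r.mean())
    return float(np.mean(scores))

for name, X in feature_sets.items():
    print(f"{name}: mean held-out r = {encoding_score(X, bold):.3f}")
```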
Reposted by Ev Fedorenko
kanishka.bsky.social
The compling group at UT Austin (sites.utexas.edu/compling/) is looking for PhD students!

Come join me, @kmahowald.bsky.social, and @jessyjli.bsky.social as we tackle interesting research questions at the intersection of ling, cogsci, and ai!

Some topics I am particularly interested in:
Picture of the UT Tower with "UT Austin Computational Linguistics" written in larger font, and "Humans processing computers processing humans processing language" in smaller font
Reposted by Ev Fedorenko
hsmall.bsky.social
Excited to share new work with @hleemasson.bsky.social, Ericka Wodka, Stewart Mostofsky, and @lisik.bsky.social! We investigated how simultaneous vision and language signals are combined in the brain using naturalistic + controlled fMRI. Read the paper here: osf.io/b5p4n
1/n
Reposted by Ev Fedorenko
thomasserre.bsky.social
Brown’s Department of Cognitive & Psychological Sciences is hiring a tenure-track Assistant Professor, working in the area of AI and the Mind (start July 1, 2026). Apply by Nov 8, 2025 👉 apply.interfolio.com/173939

#AI #CognitiveScience #AcademicJobs #BrownUniversity
Apply - Interfolio
apply.interfolio.com
evfedorenko.bsky.social
A fantastic piece by two of my favorite linguists!
kmahowald.bsky.social
📣@futrell.bsky.social and I have a BBS target article with an optimistic take on LLMs + linguistics. Commentary proposals (just need a few hundred words) are OPEN until Oct 8. If we are too optimistic for you (or not optimistic enough!) or you have anything to say: www.cambridge.org/core/journal...
How Linguistics Learned to Stop Worrying and Love the Language Models
www.cambridge.org
Reposted by Ev Fedorenko
kmahowald.bsky.social
📣@futrell.bsky.social and I have a BBS target article with an optimistic take on LLMs + linguistics. Commentary proposals (just need a few hundred words) are OPEN until Oct 8. If we are too optimistic for you (or not optimistic enough!) or you have anything to say: www.cambridge.org/core/journal...
How Linguistics Learned to Stop Worrying and Love the Language Models
www.cambridge.org
Reposted by Ev Fedorenko
@benhayden.bsky.social
@tyrellturing.bsky.social
@jmgrohneuro.bsky.social
@pessoabrain.bsky.social
I see a lot of talk on here about how we should avoid "x does y" talk because the brain is "a dynamic, reverberant, reciprocally interconnected system".
But this does not follow.
A thread...
Reposted by Ev Fedorenko
carlzimmer.com
Harvard University won a crucial legal victory in its clash with the Trump administration on Wednesday, when a federal judge said that the government had broken the law by freezing billions of dollars in research funds. Gift link: nyti.ms/41CEOWy
nyti.ms
Reposted by Ev Fedorenko
nicolecrust.bsky.social
Let’s talk @storiesofwin.bsky.social. I’m flattered to be among their profiles (coming soon) & I want to elevate the team behind this terrific effort. /1

www.storiesofwin.org
Stories of WiN
www.storiesofwin.org
Reposted by Ev Fedorenko
adelegoldberg.bsky.social
📌 👉 The 14th International Construction Grammar conference will be held at Princeton, June 4-7, 2026

Usage-based analyses and empirical methods

Stay tuned for updates!
Reposted by Ev Fedorenko
neurograce.bsky.social
The rumors are true! #CCN2026 will be held at NYU. @toddgureckis.bsky.social and I will be executive-chairing. Get in touch if you want to be involved!
Reposted by Ev Fedorenko
rtommccoy.bsky.social
🤖 🧠 NEW PAPER ON COGSCI & AI 🧠 🤖

Recent neural networks capture properties long thought to require symbols: compositionality, productivity, rapid learning

So what role should symbols play in theories of the mind? For our answer...read on!

Paper: arxiv.org/abs/2508.05776

1/n
The top shows the title and authors of the paper: "Whither symbols in the era of advanced neural networks?" by Tom Griffiths, Brenden Lake, Tom McCoy, Ellie Pavlick, and Taylor Webb.

At the bottom is text saying "Modern neural networks display capacities traditionally believed to require symbolic systems. This motivates a re-assessment of the role of symbols in cognitive theories."

In the middle is a graphic illustrating this text by showing three capacities: compositionality, productivity, and inductive biases. For each one, there is an illustration of a neural network displaying it. For compositionality, the illustration is DALL-E 3 creating an image of a teddy bear skateboarding in Times Square. For productivity, the illustration is novel words produced by GPT-2: "IKEA-ness", "nonneotropical", "Brazilianisms", "quackdom", "Smurfverse". For inductive biases, the illustration is a graph showing that a meta-learned neural network can learn formal languages from a small number of examples.
evfedorenko.bsky.social
I am just fooling around, you goof 😊
evfedorenko.bsky.social
Jamie, we are good for it! 😉
Reposted by Ev Fedorenko
rodbraga.bsky.social
🚨 New Preprint 🚨

Targeting intracranial electrical stimulation (ES) to network regions defined within individuals causes network-level effects

By Cyr et al.

***
Q: Can we use individualized network maps from precision fMRI to modulate a targeted network via intracranial ES?

A: Yes!

🧵:
Reposted by Ev Fedorenko
sussillodavid.bsky.social
Coming March 17, 2026!
Just got my advance copy of Emergence — a memoir about growing up in group homes and somehow ending up in neuroscience and AI. It’s personal, it’s scientific, and it’s been a wild thing to write. Grateful and excited to share it soon.