Kyle Mahowald (COLM 2025)
@kmahowald.bsky.social
2.9K followers 520 following 81 posts
UT Austin linguist http://mahowak.github.io/. computational linguistics, cognition, psycholinguistics, NLP, crosswords. occasionally hockey?
Pinned
kmahowald.bsky.social
LMs need linguistics! New paper, with @futrell.bsky.social, on LMs and linguistics that conveys our excitement about what the present moment means for linguistics and what linguistics can do for LMs. Paper: arxiv.org/abs/2501.17047. 🧵below.
Reposted by Kyle Mahowald (COLM 2025)
kanishka.bsky.social
Come join us in the city of ACL!

Very happy to chat about my experience as a new faculty member at UT Ling — come find me at #COLM2025 if you're interested!!
kmahowald.bsky.social
UT Austin Linguistics is hiring in computational linguistics!

Asst or Assoc.

We have a thriving group sites.utexas.edu/compling/ and a long proud history in the space. (For instance, fun fact, Jeff Elman was a UT Austin Linguistics Ph.D.)

faculty.utexas.edu/career/170793

🤘
UT Austin Computational Linguistics Research Group – Humans processing computers processing humans processing language
sites.utexas.edu
Reposted by Kyle Mahowald (COLM 2025)
jessyjli.bsky.social
We’re hiring faculty as well! Happy to talk about it at COLM!
kmahowald.bsky.social
Thanks, didn't know the history of his later life. Deleted and re-posted to omit.
kmahowald.bsky.social
Austin is a lovely city, and the department is wonderful and supportive. I've had a great experience here.

As you can see in the ad, the scope of what we are looking for is broad.

Happy to discuss this position or Ph.D. positions at #COLM2025 or offline!
kmahowald.bsky.social
Austin is a lovely city, and the department is wonderful and supportive. I've had a great experience.

As you can see in the ad, the scope of what we're looking for, and what we construe as computational linguistics, is broad.

Happy to chat at #COLM2025 or offline about this faculty position and/or Ph.D. positions!
Reposted by Kyle Mahowald (COLM 2025)
sashaboguraev.bsky.social
I will be giving a short talk on this work at the COLM Interplay workshop on Friday (also to appear at EMNLP)!

Will be in Montreal all week and excited to chat about LM interpretability + its interaction with human cognition and ling theory.
sashaboguraev.bsky.social
A key hypothesis in the history of linguistics is that different constructions share underlying structure. We take advantage of recent advances in mechanistic interpretability to test this hypothesis in Language Models.

New work with @kmahowald.bsky.social and @cgpotts.bsky.social!

🧵👇!
Reposted by Kyle Mahowald (COLM 2025)
siyuansong.bsky.social
Heading to #COLM2025 to present my first paper w/ @jennhu.bsky.social @kmahowald.bsky.social !

When: Tuesday, 11 AM – 1 PM
Where: Poster #75

Happy to chat about my work and topics in computational linguistics & cogsci!

Also, I'm on the PhD application journey this cycle!

Paper info 👇:
siyuansong.bsky.social
New preprint w/ @jennhu.bsky.social @kmahowald.bsky.social : Can LLMs introspect about their knowledge of language?
Across models and domains, we did not find evidence that LLMs have privileged access to their own predictions. 🧵(1/8)
kmahowald.bsky.social
Do you want to use AI models to understand human language?

Are you fascinated by whether linguistic representations are lurking in LLMs?

Are you in need of a richer model of spatial words across languages?

Consider UT Austin for all your Computational Linguistics Ph.D. needs!

mahowak.github.io
Reposted by Kyle Mahowald (COLM 2025)
mcxfrank.bsky.social
Ever wonder how habituation works? Here's our attempt to understand:

A stimulus-computable rational model of visual habituation in infants and adults doi.org/10.7554/eLif...

This is the thesis of two wonderful students: @anjiecao.bsky.social @galraz.bsky.social, w/ @rebeccasaxe.bsky.social
Attached images: infant data from experiment 1; conceptual schema for different habituation models; title page; results from experiment 2 with adults.
kmahowald.bsky.social
At UT we just got to hear about this in a Zoom talk from @sfeucht.bsky.social. I echo the endorsement: cool ideas about representations in LLMs with linguistic relevance!
Who is going to be at #COLM2025?

I want to draw your attention to a COLM paper by my student @sfeucht.bsky.social that has totally changed the way I think and teach about LLM representations. The work is worth knowing.

And you can meet Sheridan at COLM, Oct 7!
bsky.app/profile/sfe...
Reposted by Kyle Mahowald (COLM 2025)
jessyjli.bsky.social
Can AI aid scientists within their own workflows, when they don't have a step-by-step workflow laid out and may not know, in advance, what scientific utility a visualization would bring?

Check out @sebajoe.bsky.social’s feature on ✨AstroVisBench:
nsfsimonscosmicai.bsky.social
Exciting news! Introducing AstroVisBench: A Code Benchmark for Scientific Computing and Visualization in Astronomy!

A new benchmark developed by researchers at the NSF-Simons AI Institute for Cosmic Origins is testing how well LLMs implement scientific workflows in astronomy and visualize results.
Reposted by Kyle Mahowald (COLM 2025)
kmahowald.bsky.social
📣@futrell.bsky.social and I have a BBS target article with an optimistic take on LLMs + linguistics. Commentary proposals (just need a few hundred words) are OPEN until Oct 8. If we are too optimistic for you (or not optimistic enough!) or you have anything to say: www.cambridge.org/core/journal...
How Linguistics Learned to Stop Worrying and Love the Language Models
www.cambridge.org
Reposted by Kyle Mahowald (COLM 2025)
wmatchin.bsky.social
Provocative piece and more interesting than most that have been written about this topic. I greatly encourage people to weigh in!

My own perspective is that while there is utility to LMs, the scientific insights are greatly overstated.
kmahowald.bsky.social
Yes, after some discussion, we decided to stick with the past tense like in the movie. Richard says it's an example of the prophetic perfect tense en.wikipedia.org/wiki/Prophet....
Prophetic perfect tense - Wikipedia
en.wikipedia.org
kmahowald.bsky.social
BBS also values publishing commentaries not just from the most relevant subarea of the article but from a wide variety of areas. So also consider submitting if you're further afield in some way!
kmahowald.bsky.social
The accepted manuscript is here: www.cambridge.org/core/service...

Have already heard plenty of spirited and useful disagreement on the piece. If that's you, especially consider submitting something! (Or if you want to say how much you agree with us, that's of course welcome too.)
www.cambridge.org
kmahowald.bsky.social
Congrats to Leonie on the new gig! Surely though she will miss our Texas summers.
weissweiler.bsky.social
📢Life update📢

🥳I'm excited to share that I've started as a postdoc at Uppsala University NLP @uppsalanlp.bsky.social, working with Joakim Nivre on topics related to constructions and multilinguality!

🙏Many thanks to the Walter Benjamin Programme of the DFG for making this possible.
kmahowald.bsky.social
Can AI introspect? Surprisingly tricky to define what that means! And also interesting to test. New work from @siyuansong.bsky.social, @harveylederman.bsky.social, @jennhu.bsky.social and me on introspection in LLMs. See paper and thread for a definition and some experiments!
siyuansong.bsky.social
How reliable is what an AI says about itself? The answer depends on whether models can introspect. But, if an LLM says its temperature parameter is high (and it is!)….does that mean it’s introspecting? Surprisingly tricky to pin down. Our paper: arxiv.org/abs/2508.14802 (1/n)