Kanishka Misra 🌊
@kanishka.bsky.social
2.5K followers 260 following 200 posts
Assistant Professor of Linguistics, and Harrington Fellow at UT Austin. Works on computational understanding of language, concepts, and generalization. 🕸️👁️: https://kanishka.website
Pinned
kanishka.bsky.social
News🗞️

I will return to UT Austin as an Assistant Professor of Linguistics this fall, and join its vibrant community of Computational Linguists, NLPers, and Cognitive Scientists!🤘

Excited to develop ideas about linguistic and conceptual generalization (recruitment details soon!)
Picture of the UT Tower taken by me on my first day at UT as a postdoc in 2023!
Reposted by Kanishka Misra 🌊
cocchino.bsky.social
We're hiring! UT Linguistics invites applications for a position in computational linguistics to begin next academic year 2026-27 (rank of tenure-track Assistant Professor or Associate Professor with tenure). apply.interfolio.com/175156
Apply - Interfolio
apply.interfolio.com
kanishka.bsky.social
Catch Qing’s poster in the morning poster session today!!

I’ll also be there, talk to me about UT Ling’s new comp ling job/methods to study linguistic generalization/and how LMs *might* inform language science!
qyao.bsky.social
LMs learn argument-based preferences for dative constructions (preferring recipient first when it’s shorter), consistent with humans. Is this from memorizing preferences in training? New paper w/ @kanishka.bsky.social, @weissweiler.bsky.social, @kmahowald.bsky.social

arxiv.org/abs/2503.20850
Examples from direct and prepositional object datives with short-first and long-first word orders:
DO (long first): She gave the boy who signed up for class and was excited it.
PO (short first): She gave it to the boy who signed up for class and was excited.
DO (short first): She gave him the book that everyone was excited to read.
PO (long first): She gave the book that everyone was excited to read to him.
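A minimal sketch of how such a preference can be probed with the minicons scoring package, comparing summed log-probabilities of a DO/PO pair. The model choice and reduction here are illustrative, not the paper's exact evaluation pipeline:

```python
# Minimal sketch: compare summed LM log-probabilities for a DO/PO dative pair
# using minicons. "gpt2" and this example pair are illustrative; this is not
# the paper's exact setup.
from minicons import scorer

lm = scorer.IncrementalLMScorer("gpt2", "cpu")

stimuli = [
    "She gave the boy who signed up for class and was excited it.",     # DO, long first
    "She gave it to the boy who signed up for class and was excited.",  # PO, short first
]

# Summed log-probability of each full sentence under the LM.
do_score, po_score = lm.sequence_score(stimuli, reduction=lambda x: x.sum(0).item())
print(f"DO (long first):  {do_score:.2f}")
print(f"PO (short first): {po_score:.2f}")
# A higher PO score would mirror the human-like short-before-long preference
# described in the post above.
```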
Reposted by Kanishka Misra 🌊
kmahowald.bsky.social
UT Austin Linguistics is hiring in computational linguistics!

Asst or Assoc.

We have a thriving group sites.utexas.edu/compling/ and a long proud history in the space. (For instance, fun fact, Jeff Elman was a UT Austin Linguistics Ph.D.)

faculty.utexas.edu/career/170793

🤘
UT Austin Computational Linguistics Research Group – Humans processing computers processing humans processing language
sites.utexas.edu
kanishka.bsky.social
Come join us in the city of ACL!

Very happy to chat about my experience as a new faculty member at UT Ling, come find me at #COLM2025 if you’re interested!!
Reposted by Kanishka Misra 🌊
sashaboguraev.bsky.social
I will be giving a short talk on this work at the COLM Interplay workshop on Friday (also to appear at EMNLP)!

Will be in Montreal all week and excited to chat about LM interpretability + its interaction with human cognition and ling theory.
sashaboguraev.bsky.social
A key hypothesis in the history of linguistics is that different constructions share underlying structure. We take advantage of recent advances in mechanistic interpretability to test this hypothesis in Language Models.

New work with @kmahowald.bsky.social and @cgpotts.bsky.social!

🧵👇!
Reposted by Kanishka Misra 🌊
jessyjli.bsky.social
On my way to #COLM2025 🍁

Check out jessyli.com/colm2025

QUDsim: Discourse templates in LLM stories arxiv.org/abs/2504.09373

EvalAgent: retrieval-based eval targeting implicit criteria arxiv.org/abs/2504.15219

RoboInstruct: code generation for robotics with simulators arxiv.org/abs/2405.20179
Reposted by Kanishka Misra 🌊
siyuansong.bsky.social
Heading to #COLM2025 to present my first paper w/ @jennhu.bsky.social @kmahowald.bsky.social !

When: Tuesday, 11 AM – 1 PM
Where: Poster #75

Happy to chat about my work and topics in computational linguistics & cogsci!

Also, I'm on the PhD application journey this cycle!

Paper info 👇:
siyuansong.bsky.social
New preprint w/ @jennhu.bsky.social @kmahowald.bsky.social : Can LLMs introspect about their knowledge of language?
Across models and domains, we did not find evidence that LLMs have privileged access to their own predictions. 🧵(1/8)
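A rough sketch of the kind of comparison at stake: measure a preference directly from the LM's probabilities, then elicit a prompted "self-report" and check whether the two agree. The model, sentence pair, and prompt wording below are illustrative placeholders, not the paper's materials:

```python
# Sketch: direct probability measurement vs. a prompted "self-report".
# Model, sentences, and prompt are illustrative, not the paper's materials.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_logprob(text: str) -> float:
    """Summed token log-probability of `text` under the LM (direct measure)."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logps = torch.log_softmax(logits[:, :-1], dim=-1)
    return logps.gather(2, ids[:, 1:].unsqueeze(-1)).sum().item()

a = "The keys to the cabinet are on the table."
b = "The keys to the cabinet is on the table."
direct_pref = sentence_logprob(a) - sentence_logprob(b)  # >0: model "prefers" a

# Metalinguistic report: ask the model which sentence is better, then compare
# its next-token logits for " A" vs. " B".
prompt = (f"Sentence A: {a}\nSentence B: {b}\n"
          "Which sentence is more acceptable? Answer: Sentence")
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    last = model(ids).logits[0, -1]
reported_pref = (last[tok(" A").input_ids[0]] - last[tok(" B").input_ids[0]]).item()

print(direct_pref, reported_pref)
# Privileged access would predict systematic agreement between these two
# quantities; the thread reports no evidence for this across models and domains.
```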
kanishka.bsky.social
Also, I’m on the lookout for my first PhD student! If you’d like to be the one, please reach out to me (dms/email open) and we can chat!!

@jessyjli.bsky.social and @kmahowald.bsky.social are also hiring students, and we’re all eager to co-advise!
kanishka.bsky.social
I’ll also be moderating a roundtable at the INTERPLAY workshop on Oct 10 — excited to discuss behavior, representations, and a third secret thing with folks!
Reposted by Kanishka Misra 🌊
jessyjli.bsky.social
All of us (@kanishka.bsky.social @kmahowald.bsky.social and me) are looking for PhD students this cycle! If computational linguistics/NLP is your passion, join us at UT Austin!

For my areas see jessyli.com
kanishka.bsky.social
We'll all be attending #COLM2025 -- come say hi if you are interested in working with us!!

Separate tweet incoming for COLM papers!
kanishka.bsky.social
The compling group at UT Austin (sites.utexas.edu/compling/) is looking for PhD students!

Come join me, @kmahowald.bsky.social, and @jessyjli.bsky.social as we tackle interesting research questions at the intersection of ling, cogsci, and ai!

Some topics I am particularly interested in:
Picture of the UT Tower with "UT Austin Computational Linguistics" written in bigger font, and "Humans processing computers processing humans processing language" in smaller font
Reposted by Kanishka Misra 🌊
gboleda.bsky.social
New paper! 🚨 I argue that LLMs represent a synthesis between distributed and symbolic approaches to language, because, when exposed to language, they develop highly symbolic representations and processing mechanisms in addition to distributed ones.
arxiv.org/abs/2502.11856
Sigmoid function. Non-linearities in neural networks allow them to behave in distributed and near-symbolic fashions.
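To make the caption's point concrete (the input values below are just illustrative): a sigmoid is graded near zero but saturates to near-binary outputs for large-magnitude inputs, so the same unit can act "distributed" or "near-symbolic" depending on where it operates:

```python
# Illustration of the caption's point: sigmoid outputs are graded near 0
# ("distributed") but saturate toward 0/1 for large inputs ("near-symbolic").
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

for x in (-8.0, -1.0, -0.1, 0.1, 1.0, 8.0):
    print(f"sigmoid({x:+.1f}) = {sigmoid(x):.4f}")
# sigmoid(±8.0) lands within 0.0004 of 0 or 1 (effectively binary), while
# sigmoid(±0.1) ≈ 0.475/0.525 stays graded.
```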
kanishka.bsky.social
Hehe but really — unifying all of them + easy access = yes plssss
kanishka.bsky.social
Friendship ended with minicons, glazing is my new fav package!
aaronstevenwhite.io
I've found it kind of a pain to work with resources like VerbNet, FrameNet, PropBank (frame files), and WordNet using existing tools. Maybe you have too. Here's a little package that handles data management, loading, and cross-referencing via either a CLI or a python API.
GitHub - aaronstevenwhite/glazing: Unified data models and interfaces for syntactic and semantic frame ontologies.
Unified data models and interfaces for syntactic and semantic frame ontologies. - aaronstevenwhite/glazing
github.com