Aryaman Arora
@aryaman.io
2.8K followers · 480 following · 24 posts
member of technical staff @stanfordnlp.bsky.social
Reposted by Aryaman Arora
juliekallini.bsky.social
"Mission: Impossible" was featured in Quanta Magazine! Big thank you to @benbenbrubaker.bsky.social for the wonderful article covering our work on impossible languages. Ben was so thoughtful and thorough in all our conversations, and it really shows in his writing!
Reposted by Aryaman Arora
cgpotts.bsky.social
I've posted the practice run of my LSA keynote. My core claim is that LLMs can be useful tools for doing close linguistic analysis. I illustrate with a detailed case study, drawing on corpus evidence, targeted syntactic evaluations, and causal intervention-based analyses: youtu.be/DBorepHuKDM
Linked video: "Finding linguistic structure in large language models" (YouTube, Chris Potts)
aryaman.io
What are the broad open problems in your view?
aryaman.io
i am now going to write a massive reply that will have no effect on this score you have given me
aryaman.io
hmm bluesky feed is 80% reviewing complaints. twitter is slightly better in this regard
aryaman.io
I was reading Tim Bodt's new book on Proto-Western Kho-Bwa while waiting for code to run

www.ling.sinica.edu.tw/item/en?act=...

very nice work
Linked: LANGUAGE AND LINGUISTICS (www.ling.sinica.edu.tw)
aryaman.io
e.g. Proto-Western Kho-Bwa *n̥a-jʷa-kʰa "chin" (> Khoina Sartang nyjukʰu) seems to be composed of

- PWKB *n̥a- "lower face" < Proto-Sino-Tibetan *s-na ~ s-naːr
- ?PST *g(j/w)ar "cheek, chin, jaw"
- PST *m/s-k(w)a-j "mouth, opening"

very interesting derivational morphology strat
aryaman.io
it's pretty interesting how (some?) Sino-Tibetan languages, when they historically underwent too much degradation through sound change, just decided to make compounds with synonymous elements or stick semantic prefixes on everything
aryaman.io
The cat is the mech interp researcher right
aryaman.io
made this thing, reply to be added
go.bsky.app/AKGJ82V
Reposted by Aryaman Arora
juand-r.bsky.social
How do language models organize concepts and their properties? Do they use taxonomies to infer new properties, or infer based on concept similarities? Apparently, both!

🌟 New paper with my fantastic collaborators @amuuueller.bsky.social and @kanishka.bsky.social
Title: "Characterizing the Role of Similarity in the Property Inferences of Language Models"
Authors: Juan Diego Rodriguez, Aaron Mueller, Kanishka Misra

Left figure: "Given that dogs are daxable, is it true that corgis are daxable?" A language model could answer this either using taxonomic relations, illustrated by a taxonomy dog-corgi, dog-mutt, canine-wolf, etc., or by similarity relations (dogs are more similar to corgis than to cats, wolves, or Shar Peis).

Right figure: illustration of the causal model (and an example intervention) for distributed alignment search (DAS), which we used to find a subspace in the network responsible for property-inheritance behavior. The bottom nodes are "property", "premise concept (A)", and "conclusion concept (B)"; the middle nodes are "A has property P" and "B is a kind of A"; the top node is "B has property P".
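For readers unfamiliar with DAS, here is a rough sketch of the interchange-intervention idea in PyTorch. The dimensions (hidden_dim, k), names (rotate, interchange), and setup are illustrative assumptions, not the paper's actual code; the authors' implementation may differ.

```python
import torch

hidden_dim, k = 768, 16  # assumed model width and candidate subspace size

# Learn an orthogonal change of basis; DAS searches for a low-dimensional
# subspace in this rotated space that carries the target information
# (here, something like "B is a kind of A").
rotate = torch.nn.utils.parametrizations.orthogonal(
    torch.nn.Linear(hidden_dim, hidden_dim, bias=False)
)

def interchange(base_h: torch.Tensor, source_h: torch.Tensor) -> torch.Tensor:
    """Swap the first k rotated coordinates of the base hidden state
    with those from a counterfactual source run, then rotate back."""
    base_r, source_r = rotate(base_h), rotate(source_h)
    patched = torch.cat([source_r[..., :k], base_r[..., k:]], dim=-1)
    # rotate(x) = x @ W.T with W orthogonal, so y @ W inverts it.
    return patched @ rotate.weight
```

Roughly, the rotation is trained so that the patched model reproduces the counterfactual behavior; if a small k suffices, that subspace localizes the information driving the inference.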
aryaman.io
the fact that you are posting here (and not there) has significantly increased my desire to use this platform
aryaman.io
i do like my username here
aryaman.io
wow i have so many followers here somehow, is it time to start posting here too
Reposted by Aryaman Arora
nsaphra.bsky.social
I decided what we need to make blueskAI happen is a feed. Reply here to get added to the whitelist! Whitelisted users can post to the feed by adding the following keywords to a post:

🤖
bskAI
blueskAI
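The inclusion rule described above is simple enough to sketch; this toy Python version (the names and the whitelist contents are hypothetical, not the feed's actual implementation) just checks author and trigger keywords:

```python
# Hypothetical sketch of the feed's inclusion rule: a post appears if
# its author is whitelisted and the text contains a trigger keyword.
TRIGGERS = ("🤖", "bskAI", "blueskAI")
WHITELIST = {"nsaphra.bsky.social"}  # assumed; grown from replies

def include_in_feed(author: str, text: str) -> bool:
    return author in WHITELIST and any(t in text for t in TRIGGERS)
```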
aryaman.io
Hey @noviscl.bsky.social should we start shitposting here too
aryaman.io
lol I think we talked about it the first time I reviewed (SIGMORPHON)
aryaman.io
There will never be a random Burushaski speaker from Pakistan in my mentions here