Jacob Eisenstein is at CoLM
@jacobeisenstein.bsky.social
5.5K followers 2.3K following 200 posts
natural language processing and computational linguistics at google deepmind.
Reposted by Jacob Eisenstein is at CoLM
mariaa.bsky.social
Here’s a #COLM2025 feed!

Pin it 📌 to follow along with the conference this week!
Reposted by Jacob Eisenstein is at CoLM
myra.bsky.social
AI always calling your ideas “fantastic” can feel inauthentic, but what are sycophancy’s deeper harms? We find that in the common use case of seeking AI advice on interpersonal situations—specifically conflicts—sycophancy makes people feel more right & less willing to apologize.
Screenshot of paper title: Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
Reposted by Jacob Eisenstein is at CoLM
ptshaw.bsky.social
Excited to share a new paper that aims to narrow the conceptual gap between the idealized notion of Kolmogorov complexity and practical complexity measures for neural networks.
Bridging Kolmogorov Complexity and Deep Learning: Asymptotically Optimal Description Length Objectives for Transformers
jacobeisenstein.bsky.social
thanks! i was more confused about the “kugel” part but TIL that this is apparently inspired by an airy globe?
Reposted by Jacob Eisenstein is at CoLM
joshuaraclaw.com
Cannot stress enough how good it is that you can come across a post about a gorgeous little Yiddish book sitting in someone’s family collection, and within a few seconds you can find the full scanned version of the book available for free through the Yiddish Book Center’s website
jacobeisenstein.bsky.social
i think my great grandmother was the last owner of these books that knew how to read them
jacobeisenstein.bsky.social
found some books at my parents’ house
yiddish book cover automatic translation: autonomy by dr. b hoffman
jacobeisenstein.bsky.social
On the positive side, this vario grinder, which i bought second hand, is the best technological upgrade of the summer in my house.

(Its grind settings are 1-10, a-z, so the chatgpt output is clearly wrong and the claude output is nonsensical)
jacobeisenstein.bsky.social
Baristas still safe from robotic automation, and not just because robots don’t know what coffee tastes like.

prompt: “I’m trying to dial in this v60 of huatusco with my vario. temp / grind recommendations?”
jacobeisenstein.bsky.social
right but you’ll notice it’s pretty hard to validate a proposed answer to those why questions, so it was not unreasonable to hypothesize that a better formal model of language might yield better features for an NLP system
jacobeisenstein.bsky.social
The project of putting statistical meat on grammarian bones is, imo, a beautiful one (this is basically what my textbook is about), even if it didn’t work out as a way to build NLP. It was helpful to have the participation of people like Bender, who understood the latest ideas in theoretical syntax.
jacobeisenstein.bsky.social
Bender is/was fairly distinct among syntax people for caring about statistical NLP and for believing that it can or even must incorporate sophisticated ideas about grammar. Lots of NLP people thought this for a long time, but I don’t think many linguists did.
jacobeisenstein.bsky.social
I’d guess that the majority position of syntacticians about LLMs (and other NLP beforehand) is roughly what Chomsky says: language tech can’t possibly teach us anything about the human language capability, so whether the LLM writes well doesn’t matter at all.
theophite.bsky.social
the thing about Emily Bender is that the reason she hates LLMs is that the fact that they exist -- not that they are "AGI," but the fact that they exist -- falsifies every paper she has ever written. you more or less can't be a formal grammar person in a world where statistical learning works.
jacobeisenstein.bsky.social
boston champaign pittsburgh atlanta, and, uh, let’s count seattle

glad i did it, hope i don’t have to do it again
golikehellmachine.com
under golikehellism everyone will be required to live in at least three new cities where they know less than five people from the age of 20-40
Reposted by Jacob Eisenstein is at CoLM
mmitchell.bsky.social
🤖 ICYMI: Yesterday, @hf.co and OpenAI partnered to bring open source GPT to the public. This is a Big Deal in "AI world". Allow me to explain why. 🧵
huggingface.co/openai/gpt-o...
Yellow background with orange border. OpenAI logo top center, "OpenAI's GPT OSS" bottom center. Left side in smaller writing says "From the makers of ChatGPT...A new model is released". On the right and left sides are spiky orange announcement banners, reading "Now Open Source!" "Available on Hugging Face! <logo>"
url given in bottom right corner is hf.co/openai/gpt-oss-120b
Reposted by Jacob Eisenstein is at CoLM
ibnbassal.bsky.social
roman burrito thread
ed3d.net
Ed @ed3d.net · Jul 26
you also have the laganum, which is (at some points) a pancake-ish kinda dough snack. slather on a "salsa" of garlic, onions, olives, and vinegar. pile on some roasted lamb or pork and caramelized onions, with some parsley or cilantro. chickpeas sub for beans. artichoke hearts, too
Reposted by Jacob Eisenstein is at CoLM
grvkamath.bsky.social
Our new paper in #PNAS (bit.ly/4fcWfma) presents a surprising finding—when words change meaning, older speakers rapidly adopt the new usage; inter-generational differences are often minor.

w/ Michelle Yang, @sivareddyg.bsky.social, @msonderegger.bsky.social and @dallascard.bsky.social 👇 (1/12)
jacobeisenstein.bsky.social
this is very cool and i’m looking forward to reading the paper, but a basic question about this data: isn’t it likely that a congressional rep’s speeches are written by a shifting cast of speechwriters over the course of their career? wouldn’t that explain adoption of new usages?
jacobeisenstein.bsky.social
Readers interested in bias and local knowledge may be disappointed by the thin discussion in section 5: the PPI work is great, but if Jordan means "bias" in anything like the colloquial sense then it's impossible to talk about without reference to power differentials
arxiv.org/abs/2005.14050
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
We survey 146 papers analyzing "bias" in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing "bias" is an inh...
arxiv.org
jacobeisenstein.bsky.social
Indeed, while social situations can *create* new kinds of uncertainty -- e.g., uncertainty about others' intentions -- they can also create powerful incentives for agents to reason and communicate about uncertainty. (Shameless self-promotion here: arxiv.org/abs/2503.14481)
Don't lie to your friends: Learning what you know from collaborative self-play
To be helpful assistants, AI agents must be aware of their own capabilities and limitations. This includes knowing when to answer from parametric knowledge versus using tools, when to trust tool outpu...
arxiv.org