Greg Durrett
@gregdnlp.bsky.social
2.6K followers · 410 following · 19 posts
CS professor at UT Austin. Large language models and NLP. he/him
Reposted by Greg Durrett
manyawadhwa.bsky.social
Unfortunately I won't be at #COLM2025 this week, but please check out our work being presented by my collaborators/advisors!

If you are interested in evals of open-ended tasks/creativity, please reach out and we can schedule a chat! :)
gregdnlp.bsky.social
Find my students and collaborators at COLM this week!

Tuesday morning: @juand-r.bsky.social and @ramyanamuduri.bsky.social 's papers (find them if you missed the session!)

Wednesday pm: @manyawadhwa.bsky.social 's EvalAgent

Thursday am: @anirudhkhatry.bsky.social 's CRUST-Bench oral spotlight + poster
Reposted by Greg Durrett
juand-r.bsky.social
Excited to present this at #COLM2025 tomorrow! (Tuesday, 11:00 AM poster session)
juand-r.bsky.social
One of the ways that LLMs can be inconsistent is the "generator-validator gap," where LLMs deem their own answers incorrect.

🎯 We demonstrate that ranking-based discriminator training can significantly reduce this gap, and improvements on one task often generalize to others!

🧵👇
A visualization of the generator-validator gap, where the LM likelihoods for the generator and discriminator forms of questions are poorly correlated. Aligning the validator and generator rankings can fix it!
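For concreteness, here is a minimal sketch of one way to measure such a gap, assuming an off-the-shelf Hugging Face causal LM; the model, prompts, and Yes/No validator template are illustrative choices, not the paper's setup:

```python
# Minimal sketch of measuring a generator-validator gap with a causal LM.
# Model name and prompt templates are illustrative, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def seq_logprob(prompt: str, continuation: str) -> float:
    """Log-probability the model assigns to `continuation` given `prompt`."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    cont_len = full_ids.shape[1] - prompt_ids.shape[1]
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    # Score only the continuation tokens.
    return logprobs[-cont_len:].gather(1, targets[-cont_len:, None]).sum().item()

question = "Q: What is the capital of France?\nA:"
answer = " Paris"

# Generator form: likelihood of producing the answer.
gen_score = seq_logprob(question, answer)
# Validator form: likelihood of judging that same answer correct.
val_score = seq_logprob(
    f"{question}{answer}\nIs the answer above correct? Answer Yes or No:", " Yes"
)
print(gen_score, val_score)  # poorly correlated across items = a G-V gap
```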
gregdnlp.bsky.social
Check out this feature about AstroVisBench, our upcoming NeurIPS D&B paper about code workflows and visualization in the astronomy domain! Great testbed for the interaction of code + VLM reasoning models.
nsfsimonscosmicai.bsky.social
Exciting news! Introducing AstroVisBench: A Code Benchmark for Scientific Computing and Visualization in Astronomy!

A new benchmark developed by researchers at the NSF-Simons AI Institute for Cosmic Origins is testing how well LLMs implement scientific workflows in astronomy and visualize results.
Reposted by Greg Durrett
kanishka.bsky.social
News🗞️

I will return to UT Austin as an Assistant Professor of Linguistics this fall, and join its vibrant community of Computational Linguists, NLPers, and Cognitive Scientists!🤘

Excited to develop ideas about linguistic and conceptual generalization (recruitment details soon!)
Picture of the UT Tower taken by me on my first day at UT as a postdoc in 2023!
gregdnlp.bsky.social
Great to work on this benchmark with astronomers in our NSF-Simons CosmicAI institute! What I like about it:
(1) focus on data processing & visualization, a "bite-sized" AI4Sci task (not automating all of research)
(2) eval with VLM-as-a-judge (possible with strong, modern VLMs)
sebajoe.bsky.social
How good are LLMs at 🔭 scientific computing and visualization 🔭?

AstroVisBench tests how well LLMs implement scientific workflows in astronomy and visualize results.

SOTA models like Gemini 2.5 Pro & Claude 4 Opus only match ground truth scientific utility 16% of the time. 🧵
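As a rough illustration of the VLM-as-a-judge setup described above, here is a hedged sketch; `call_vlm` is a placeholder for whatever multimodal API you use, and the rubric is invented, not AstroVisBench's actual judging prompt:

```python
# Hedged sketch of a VLM-as-a-judge comparison between a generated plot and a
# reference plot. The prompt and client below are illustrative placeholders.
import base64
import json

JUDGE_PROMPT = """You are judging a scientific visualization.
Image 1 is the reference produced by an expert astronomer.
Image 2 was produced by a model from the same task description.
Does Image 2 convey the same scientific information (same quantities,
axes, scales, and trends)? Reply as JSON: {"match": true/false, "reason": "..."}"""

def call_vlm(prompt: str, images: list[str]) -> str:
    # Placeholder: wire this to your multimodal model of choice.
    raise NotImplementedError

def encode(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

def judge(reference_png: str, generated_png: str) -> dict:
    raw = call_vlm(JUDGE_PROMPT, images=[encode(reference_png), encode(generated_png)])
    return json.loads(raw)

# verdict = judge("reference.png", "model_output.png")
# print(verdict["match"], verdict["reason"])
```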
Reposted by Greg Durrett
kgandersen.bsky.social
The end of US leadership in science, technology, and innovation.

All in one little table.

A tremendous gift to China, courtesy of the GOP.

nsf-gov-resources.nsf.gov/files/00-NSF...
gregdnlp.bsky.social
Check out Anirudh's work on a new benchmark for C-to-Rust transpilation! 100 realistic-scale C projects, plus target Rust interfaces + Rust tests that let us validate the transpiled code beyond what prior benchmarks allow.
anirudhkhatry.bsky.social
🚀Meet CRUST-Bench, a dataset for C-to-Rust transpilation for full codebases 🛠️
A dataset of 100 real-world C repositories across various domains, each paired with:
🦀 Handwritten safe Rust interfaces.
🧪 Rust test cases to validate correctness.
🧵[1/6]
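As an illustration of what those test suites enable, here is a small harness sketch: drop a model's transpiled Rust into a crate that already contains the handwritten interface and tests, then run `cargo test`. Paths and crate layout are assumptions, not CRUST-Bench's actual structure:

```python
# Sketch of the kind of check Rust test suites enable for transpilation:
# copy the generated code into the crate, then run its tests.
import shutil
import subprocess
from pathlib import Path

def validate_transpilation(crate_dir: str, generated_rs: str) -> bool:
    """Copy generated Rust over the crate's src/lib.rs and run its tests."""
    shutil.copy(generated_rs, Path(crate_dir) / "src" / "lib.rs")
    result = subprocess.run(
        ["cargo", "test", "--quiet"],
        cwd=crate_dir, capture_output=True, text=True, timeout=600,
    )
    if result.returncode != 0:
        print(result.stderr[-2000:])  # surface compiler/test errors for repair loops
    return result.returncode == 0

# print(validate_transpilation("crates/example_project", "model_output/lib.rs"))
```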
gregdnlp.bsky.social
Check out Manya's work on evaluation for open-ended tasks! The criteria from EvalAgent can be plugged into LLM-as-a-judge or used for refinement. Great tool with a ton of potential, and there's LOTS to do here for making LLMs better at writing!
manyawadhwa.bsky.social
Evaluating language model responses on open-ended tasks is hard! 🤔

We introduce EvalAgent, a framework that identifies nuanced and diverse criteria 📋✍️.

EvalAgent identifies 👩‍🏫🎓 expert advice on the web that implicitly addresses the user's prompt 🧵👇
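To make "plugged into LLM-as-a-judge" concrete, here is an illustrative sketch; the criteria and `call_llm` are placeholders, and the real EvalAgent pipeline extracts its criteria from expert web advice rather than hard-coding them:

```python
# Illustrative sketch of plugging EvalAgent-style criteria into an
# LLM-as-a-judge rubric. Criteria below are invented examples.
criteria = [
    "Opens with a thesis that directly answers the prompt",
    "Supports each claim with a concrete example",
    "Maintains a consistent tone appropriate for the audience",
]

def build_judge_prompt(task_prompt: str, response: str) -> str:
    rubric = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(criteria))
    return (
        f"Task prompt:\n{task_prompt}\n\nResponse:\n{response}\n\n"
        f"Score the response 1-5 on each criterion below and justify briefly:\n{rubric}"
    )

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # wire to your judge model

print(build_judge_prompt("Write a cover letter for a data science role.", "..."))
```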
gregdnlp.bsky.social
Check out Ramya et al.'s work on understanding discourse similarities in LLM-generated text! We see this as an important step in quantifying the "sameyness" of LLM text, which we think will be a step towards fixing it!
ramyanamuduri.bsky.social
Have that eerie feeling of déjà vu when reading model-generated text 👀, but can’t pinpoint the specific words or phrases 👀?

✨We introduce QUDsim, to quantify discourse similarities beyond lexical, syntactic, and content overlap.
Reposted by Greg Durrett
juand-r.bsky.social
Our final South by Semantics lecture at UT Austin is happening on Wednesday April 23!
South by Semantics Workshop
Title: "Not-your-mother's connectionism: LLMs as cognitive models"
Speaker: Ellie Pavlick (Brown University)
Date and time: April 23, 2025. 3:30 - 5 PM.
Location: GDC 6.302
gregdnlp.bsky.social
Check out @juand-r.bsky.social and @wenxuand.bsky.social 's work on reducing generator-validator gaps in LLMs! I really like the formulation of the G-V gap we present, and I was pleasantly surprised by how well the ranking-based training closed the gap. Looking forward to following up in this area!
Reposted by Greg Durrett
kenwhite.bsky.social
If you're scooping up students off the street for writing op-eds, you're secret police, and should be treated accordingly.
Reposted by Greg Durrett
juand-r.bsky.social
I'm excited to announce two papers of ours which will be presented this summer at @naaclmeeting.bsky.social and @iclr-conf.bsky.social!
🧵
Reposted by Greg Durrett
swarat.bsky.social
Excited about Proofwala, @amitayush.bsky.social's new framework for ML-aided theorem-proving.

* Paper: arxiv.org/abs/2502.04671
* Code: github.com/trishullab/p...

Proofwala allows the collection of proof-step data from multiple proof assistants (Coq and Lean) and multilingual training. (1/3)
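For readers curious what cross-assistant proof-step data might look like, here is a purely hypothetical record shape; the field names are guesses from the post's description, not Proofwala's actual schema (see the linked paper and repo for that):

```python
# Hypothetical shape of cross-assistant proof-step data, sketched from the
# post's description (Coq + Lean in a shared format for multilingual training).
from dataclasses import dataclass

@dataclass
class ProofStep:
    assistant: str   # "coq" or "lean"
    theorem: str     # statement being proved
    goal_state: str  # pretty-printed proof state before the step
    tactic: str      # the proof step taken (the training target)

step = ProofStep(
    assistant="lean",
    theorem="∀ n : ℕ, n + 0 = n",
    goal_state="n : ℕ ⊢ n + 0 = n",
    tactic="simp",
)
print(f"[{step.assistant}] {step.goal_state} ==> {step.tactic}")
```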
Reposted by Greg Durrett
whstancil.bsky.social
Popular or not Dems cannot bend on the need for trans people to be treated with basic humanity and respect. If we give up that because the right made trans people unpopular, we give up everything. They’ll dice us group by group like a salami. We die on this hill or we die alone in a ditch
Reposted by Greg Durrett
carlbergstrom.com
Here are just a few of the NSF review panels that were shut down today, Chuck.

This is research that would have made us competitive in computer science that will now be delayed by many months if not lost forever.

AI is fine but right now the top priority is keeping the lights on at NSF and NIH.
Proposal Review Panel for Civil, Mechanical, and Manufacturing Innovation (1194) 
Date/Time: January 28, 2025 8:30am – 5:00pm 
Contact: Daniel McAdams, 703/292-4654 

Proposal Review Panel for Innovation and Technology Ecosystems (84685) 
Date/Time: January 28, 2025 12:00pm – 6:00pm 
Contact: Alastair Monk, 703/292-8050 

Proposal Review Panel for Innovation and Technology Ecosystems (84685) 
Date/Time: January 28, 2025 10:00am – 5:00pm 
Contact: Henry Ahn, 703/292-8050 

Proposal Review Panel for Electrical, Communications, and Cyber Systems (1196) 
Date/Time: January 28 - 29, 2025 9:00am – 6:00pm 
Contact: Dominique Dagenais, 703/292-2980 

Proposal Review Panel for Engineering Education and Centers (173) 
Date/Time: January 28 - 29, 2025 8:30am – 5:00pm 
Contact: Jesus Soriano Molla, 703/292-7795 

Proposal Review Panel for Computer and Network Systems (1207) 
Date/Time: January 28 - 29, 2025 8:30am – 5:00pm 
Contact: Nan Zhang, 703/292-8950
Reposted by Greg Durrett
kylelo.bsky.social
kicking off 2025 with our OLMo 2 tech report while payin homage to the sequelest of sequels 🫡

🚗 2 OLMo 2 Furious 🔥 is everythin we learned since OLMo 1, with deep dives into:

🚖 stable pretrain recipe
🚔 lr anneal 🤝 data curricula 🤝 soups
🚘 tulu post-train recipe
🚜 compute infra setup

👇🧵
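For the "soups" ingredient, here is a quick sketch of the generic technique (averaging the weights of models fine-tuned from the same base); checkpoint paths are illustrative and this is not OLMo 2's exact recipe:

```python
# Generic checkpoint "soup": average weights of models trained from the
# same base. Paths are illustrative.
import torch

def soup(checkpoint_paths: list[str]) -> dict:
    avg = None
    for path in checkpoint_paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.float().clone() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(checkpoint_paths) for k, v in avg.items()}

souped = soup(["anneal_seed0.pt", "anneal_seed1.pt", "anneal_seed2.pt"])
torch.save(souped, "souped_model.pt")
```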
gregdnlp.bsky.social
Before his post-training work, Prasann did a great project on representing LM outputs with lattices, which remains one of my favorite algorithms-oriented papers from my group in the last few years, with a lot of potential for interesting follow-up work!
gregdnlp.bsky.social
He then advanced our understanding of online DPO methods: how can we combine the strengths of reward models and DPO? (also at COLM 2024)
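As background, here is a hedged sketch of the generic online-DPO-with-a-reward-model recipe (sample two responses, let the RM pick the preferred one, apply the standard DPO loss); this is the general pattern, not necessarily the paper's exact method:

```python
# One online DPO step where a reward model labels the preference pair.
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO objective on sequence log-probs of a (chosen, rejected) pair."""
    margin = (policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin)

# Toy numbers standing in for sequence log-probs. In the online setting:
#   y1, y2 = sample(policy, x), sample(policy, x)
#   chosen, rejected = (y1, y2) if reward(x, y1) > reward(x, y2) else (y2, y1)
loss = dpo_loss(torch.tensor(-12.0), torch.tensor(-15.0),
                torch.tensor(-13.0), torch.tensor(-14.5))
print(loss.item())
```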
gregdnlp.bsky.social
...established a critical weakness of RLHF with open reward models: spurious correlation with length (COLM 2024)
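A minimal way to probe that kind of length confound is to correlate reward-model scores with response lengths over a batch of outputs; the numbers below are illustrative:

```python
# Tiny check for a length confound: correlate RM scores with response length.
import numpy as np

rewards = np.array([0.2, 0.9, 0.5, 1.3, 0.7])  # reward-model scores (toy data)
lengths = np.array([120, 410, 230, 560, 300])  # tokens per response (toy data)
r = np.corrcoef(rewards, lengths)[0, 1]
print(f"reward-length Pearson r = {r:.2f}")  # high r suggests length bias
```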
gregdnlp.bsky.social
Huge congrats to @prasannsinghal.bsky.social for being one of the 8 CRA Outstanding Undergraduate Researcher Award winners! It has been an absolute privilege to work with Prasann during his time at UT. (And he's applying for PhD programs this year...hint hint...)

Prasann's work 🧵