Vishakh Padmakumar
@vishakhpk.bsky.social
PhD Student @nyudatascience.bsky.social, working with He He on NLP and Human-AI Collaboration. Also hanging out @ai2.bsky.social Website - https://vishakhpk.github.io/
Pinned
vishakhpk.bsky.social
What does it mean for #LLM output to be novel?
In work w/ johnchen6.bsky.social, Jane Pan, Valerie Chen and He He, we argue it needs to be both original and high quality. While prompting tricks trade one for the other, better models (scaling/post-training) can shift the novelty frontier 🧵
Reposted by Vishakh Padmakumar
gautamkamath.com
I wrote a post on how to connect with people (i.e., make friends) at CS conferences. These events can be intimidating, so here are some suggestions on how to navigate them.

I'm late for #ICLR2025 #NAACL2025, but in time for #AISTATS2025 #ICML2025! 1/3
kamathematics.wordpress.com/2025/05/01/t...
Tips on How to Connect at Academic Conferences
I was a kinda awkward teenager. If you are a CS researcher reading this post, then chances are, you were too. How to navigate social situations and make friends is not always intuitive, and has to …
kamathematics.wordpress.com
vishakhpk.bsky.social
And prompting tricks like asking for novelty and denial prompting trade off originality and quality without meaningfully shifting the novelty frontier …. so there’s a lot more work to be done 😀
vishakhpk.bsky.social
Sure, but can we elicit more novelty at inference time? Turns out it’s tricky. Increasing sampling temperatures (from 0.5 to 2) boosts originality but can hurt quality, creating a U-shaped effect.
vishakhpk.bsky.social
But improving the underlying model can yield more novel output! This can come from either (a) increasing model scale (1B -> 7B) or (b) instruction tuning (7B -> 7B-Instruct)
vishakhpk.bsky.social
We find that base LLMs often generate less novel output than human-written references from the datasets
vishakhpk.bsky.social
We evaluate the novelty of OLMo and Pythia models on 3 creative tasks:
📝 Story completion (TinyStories)
🎨 Poetry writing (Help Me Write a Poem)
🛠️ Creative tool use (MacGyver)
Novelty = harmonic mean of output quality (LLM-as-judge) and originality (unseen n-gram fraction).
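The novelty metric described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact implementation: the n-gram size, whitespace tokenization, and the toy reference corpus are all assumptions, and the quality score is assumed to come from an LLM judge normalized to [0, 1].

```python
from statistics import harmonic_mean


def ngrams(text, n=5):
    """All n-grams (as tuples of tokens) in a whitespace-tokenized text."""
    tokens = text.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def originality(output, corpus_texts, n=5):
    """Fraction of the output's n-grams unseen in the reference corpus."""
    seen = set()
    for text in corpus_texts:
        seen |= ngrams(text, n)
    out = ngrams(output, n)
    if not out:
        return 0.0
    return len(out - seen) / len(out)


def novelty(quality, originality_score):
    """Harmonic mean of quality (LLM-as-judge, in [0, 1]) and originality."""
    if quality == 0 or originality_score == 0:
        return 0.0
    return harmonic_mean([quality, originality_score])
```

The harmonic mean is a natural choice here because it drives the combined score to zero if either component is zero, so neither verbatim copying nor incoherent text can score well.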
vishakhpk.bsky.social
Considering originality and quality separately is not enough: human preference judgments of quality can favor outputs that reproduce training data (users may not recognize this), while originality alone can reward incoherent generations. The two are often at odds and should be evaluated together💡
Reposted by Vishakh Padmakumar
eunsol.bsky.social
When using LLM-as-a-judge, practitioners often use greedy decoding to get the most likely judgment. But we found that deriving a score from the judgment distribution (like taking the mean) works better!
❌LLM-as-a-judge with greedy decoding
😎Using the distribution of the judge’s labels
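The contrast between the two approaches can be sketched as follows; the 1–5 rating scale and the judge's label probabilities are made up for illustration:

```python
def greedy_judgment(label_probs):
    """Greedy decoding: take the single most likely score label."""
    return max(label_probs, key=label_probs.get)


def mean_judgment(label_probs):
    """Score from the judgment distribution: the expected score."""
    total = sum(label_probs.values())
    return sum(score * p for score, p in label_probs.items()) / total


# Hypothetical judge distribution over a 1-5 rating scale.
probs = {1: 0.05, 2: 0.10, 3: 0.30, 4: 0.35, 5: 0.20}
print(greedy_judgment(probs))             # 4
print(round(mean_judgment(probs), 2))     # 3.55
```

The mean preserves graded information that greedy decoding throws away: two outputs that both decode to "4" can still be ranked if one's distribution leans toward 5 and the other's toward 3.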
victorwang37.bsky.social
LLM judges have become ubiquitous, but valuable signal is often ignored at inference.

We analyze design decisions for leveraging judgment distributions from LLM-as-a-judge: 🧵

(w/ Michael J.Q. Zhang, @eunsol.bsky.social)
Reposted by Vishakh Padmakumar
nsaphra.bsky.social
Ever looked at LLM skill emergence and thought 70B parameters was a magic number? Our new paper shows sudden breakthroughs are samples from bimodal performance distributions across seeds. Observed accuracy jumps abruptly while the underlying accuracy DISTRIBUTION changes slowly!
Distributional Scaling Laws for Emergent Capabilities
Rosie Zhao, Tian Qin, David Alvarez-Melis, Sham Kakade, Naomi Saphra
In this paper, we explore the nature of sudden breakthroughs in language model performance at scale, which stands in contrast to smooth improvements governed by scaling laws. While advocates of "emergence" view abrupt performance gains as capabilities unlocking at specific scales, others have suggested that they are produced by thresholding effects and alleviated by continuous metrics. We propose that breakthroughs are instead driven by continuous changes in the probability distribution of training outcomes, particularly when performance is bimodally distributed across random seeds. In synthetic length generalization tasks, we show that different random seeds can produce either highly linear or emergent scaling trends. We reveal that sharp breakthroughs in metrics are produced by underlying continuous changes in their distribution across seeds. Furthermore, we provide a case study of inverse scaling and show that even as the probability of a successful run declines, the average performance of a successful run continues to increase monotonically. We validate our distributional scaling framework on realistic settings by measuring MMLU performance in LLM populations. These insights emphasize the role of random variation in the effect of scale on LLM capabilities.
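The paper's central claim can be illustrated with a toy simulation (all numbers here are assumptions, not the paper's data): each training run's accuracy is a draw from a bimodal mixture whose success-mode weight grows smoothly with scale, so a single seed looks like an abrupt breakthrough while the distribution across seeds changes continuously.

```python
import random

random.seed(0)


def sample_accuracy(scale, low=0.05, high=0.95):
    """Accuracy of one training run: a draw from a bimodal mixture.

    The probability of landing in the high-performing mode grows
    smoothly (linearly here, an assumption) with scale in [0, 1].
    """
    p_success = scale  # continuous change in the distribution
    mode = high if random.random() < p_success else low
    return mode + random.gauss(0, 0.02)


# One seed per scale looks like a sudden "emergent" jump...
single_seed = [sample_accuracy(s / 10) for s in range(11)]

# ...but averaging many seeds reveals the smooth underlying trend.
mean_curve = [
    sum(sample_accuracy(s / 10) for _ in range(2000)) / 2000
    for s in range(11)
]
```

With enough seeds the mean curve rises smoothly from near 0.05 to near 0.95, even though any individual run jumps between the two modes.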
Reposted by Vishakh Padmakumar
hyunwoo-kim.bsky.social
🚨New Paper! So o3-mini and R1 seem to excel on math & coding. But how good are they on other domains where verifiable rewards are not easily available, such as theory of mind (ToM)? Do they show similar behavioral patterns? 🤔 What if I told you it's...interesting, like the below?🧵
Reposted by Vishakh Padmakumar
awettig.bsky.social
🤔 Ever wondered how prevalent some type of web content is during LM pre-training?

In our new paper, we propose WebOrganizer which *constructs domains* based on the topic and format of CommonCrawl web pages 🌐

Key takeaway: domains help us curate better pre-training data! 🧵/N
Reposted by Vishakh Padmakumar
soniakmurthy.bsky.social
(1/9) Excited to share my recent work on "Alignment reduces LM's conceptual diversity" with @tomerullman.bsky.social and @jennhu.bsky.social, to appear at #NAACL2025! 🐟

We want models that match our values...but could this hurt their diversity of thought?
Preprint: arxiv.org/abs/2411.04427
Reposted by Vishakh Padmakumar
thomwolf.bsky.social
« appending "Wait" multiple times to the model's generation » is our current most likely path to AGI :)

See the fresh arxiv.org/abs/2501.19393 by Niklas Muennighoff et al.
Reposted by Vishakh Padmakumar
nyudatascience.bsky.social
CDS' He He, @vishakhpk.bsky.social, & former CDS postdoc Abulhair Saparov, et al., find major AI limits in “Transformers Struggle to Learn to Search.”

AI models excel at single-step reasoning but fail in systematic exploration as tasks grow in complexity.

nyudatascience.medium.com/even-simple-...
Even Simple Search Tasks Reveal Fundamental Limits in AI Language Models
Research by CDS’ He He, Vishakh Padmakumar, and others shows that LLMs’ reasoning relies on heuristics, not systematic exploration.
nyudatascience.medium.com
Reposted by Vishakh Padmakumar
jennarussell.bsky.social
People often claim they know when ChatGPT wrote something, but are they as accurate as they think?

Turns out that while the general population is unreliable, those who frequently use ChatGPT for writing tasks can spot even "humanized" AI-generated text with near-perfect accuracy 🎯
Reposted by Vishakh Padmakumar
rtommccoy.bsky.social
🔥While LLM reasoning is on people's minds...

Here's a shameless plug for our work comparing o1 to previous LLMs (extending "Embers of Autoregression"): arxiv.org/abs/2410.01792

- o1 shows big improvements over GPT-4
- But qualitatively it is still sensitive to probability

1/4
A plot showing LLM performance on various algorithmic tasks. For all LLMs evaluated, including o1-preview, performance is highly influenced by the probability of the output to be produced, with lower performance on cases with low-probability outputs. The tasks being evaluated on are shift ciphers, Pig Latin, article swapping, and reversal.