Alex Lew
@alexlew.bsky.social
2.4K followers 540 following 41 posts
Theory & practice of probabilistic programming. Current: MIT Probabilistic Computing Project; Fall '25: Incoming Asst. Prof. at Yale CS
alexlew.bsky.social
As a way of evaluating the model (rather than the variational family or inference method)
alexlew.bsky.social
By contrast, of course, a good VI method should give good posteriors. If better posteriors give worse predictions, it's the model's fault, not the VI method's
alexlew.bsky.social
Yes, I agree... It's a very (non-Bayesian) ML-inflected way of looking at things, where the whole game is to maximize predictive accuracy on new data, and the training data D is just instrumental to this goal.
alexlew.bsky.social
Right, I think if you're trying to estimate P*, it's okay that Method 2 is inconsistent. The way you "spend more compute" in Method 2 is by choosing a bigger/more expressive variational family q, not taking more samples from a fixed variational family. So consistency isn't quite the right property
alexlew.bsky.social
Oh -- I agree it doesn't make sense to choose q1 or q2 based on the quantity! Or at least, if you do that, you're just fitting the model q(θ)p(D'|θ) to the held-out data D', not doing posterior inference anymore.
alexlew.bsky.social
(Though even if you use approach 2, it is probably better to use q as a proposal within IS, rather than using it directly to substitute the posterior)
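For concreteness, a minimal sketch of that contrast on a toy 1D model: estimating the posterior mean by (a) treating q as if it were the posterior versus (b) using q as a proposal within self-normalized importance sampling. The model, the Gaussian q, and the names (log_joint, q_mu, q_sigma) are illustrative assumptions, not anything from the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: theta ~ N(0, 1), D | theta ~ N(theta, 1), observed D = 2.0.
# The exact posterior is N(1.0, 0.5); pretend we can't sample it directly.
D = 2.0

def log_joint(theta):
    # log p(theta) + log p(D | theta), up to additive constants
    return -0.5 * theta**2 - 0.5 * (D - theta)**2

# A (deliberately imperfect) variational approximation q(theta).
q_mu, q_sigma = 0.8, 1.0

def log_q(theta):
    return -0.5 * ((theta - q_mu) / q_sigma) ** 2 - np.log(q_sigma)

thetas = rng.normal(q_mu, q_sigma, size=100_000)
f = thetas  # estimate the posterior mean E[theta | D] (truth: 1.0)

# (a) Substitute q for the posterior: plain Monte Carlo under q.
est_plain = f.mean()

# (b) Use q as a proposal: self-normalized importance sampling,
#     with weights proportional to p(theta, D) / q(theta).
log_w = log_joint(thetas) - log_q(thetas)
w = np.exp(log_w - log_w.max())
est_snis = (w * f).sum() / w.sum()

print(f"plain MC under q: {est_plain:.3f}   SNIS with q as proposal: {est_snis:.3f}")
```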
alexlew.bsky.social
We can't sample p(θ|D) exactly, so consider two procedures for sampling θ from distributions _close_ to p(θ|D):
1 - run MCMC / SMC / etc. targeting p(θ|D)
2 - run VI to obtain q(θ) then draw independent θs from q via simple MC
Not obvious that Method 1 always wins (for a given computational budget)
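A rough sketch of the two procedures on a toy conjugate posterior (so the right answer is known). The random-walk Metropolis kernel and the grid-based moment match standing in for VI are illustrative stand-ins, not specific recommendations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: prior N(0,1), likelihood N(theta,1), observation D = 2.0,
# so the exact posterior p(theta | D) is N(1.0, 0.5).
D = 2.0

def log_post(theta):
    # Unnormalized log posterior.
    return -0.5 * theta**2 - 0.5 * (D - theta)**2

# Method 1: MCMC (random-walk Metropolis) targeting p(theta | D).
def mcmc(n_steps, step=0.5):
    theta, out = 0.0, []
    for _ in range(n_steps):
        prop = theta + step * rng.normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop
        out.append(theta)
    return np.array(out)

# Method 2: fit a Gaussian q(theta) (here by a crude grid-based moment match,
# standing in for VI), then draw independent samples from q.
grid = np.linspace(-4.0, 6.0, 2001)
w = np.exp(log_post(grid) - log_post(grid).max())
w /= w.sum()
q_mu = (w * grid).sum()
q_sigma = np.sqrt((w * (grid - q_mu) ** 2).sum())
vi_samples = rng.normal(q_mu, q_sigma, size=5000)

mcmc_samples = mcmc(5000)
print("Method 1 (MCMC) mean/std:", mcmc_samples.mean(), mcmc_samples.std())
print("Method 2 (VI)   mean/std:", vi_samples.mean(), vi_samples.std())
```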
Reposted by Alex Lew
tomerullman.bsky.social
not sure how to get this across to non-academics but here goes,

Imagine if you were suddenly told 'we decided not to pay your salary', that's kind of what the grant cuts felt like.

Now imagine if you were suddenly told 'we are going to set your dog on fire', that's what this feels like:
Reposted by Alex Lew
benlipkin.bsky.social
Want to use AWRS SMC?

Check out the GenLM control library: github.com/genlm/genlm-...

GenLM supports not only grammars, but arbitrary programmable constraints from type systems to simulators.

If you can write a Python function, you can control your language model!
Reposted by Alex Lew
benlipkin.bsky.social
Many LM applications may be formulated as text generation conditional on some (Boolean) constraint.

Generate a…
- Python program that passes a test suite.
- PDDL plan that satisfies a goal.
- CoT trajectory that yields a positive reward.
The list goes on…

How can we efficiently satisfy these? 🧵👇
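As a point of reference for the question above, here is the naive baseline that such methods aim to improve on: rejection sampling against an arbitrary Boolean constraint written as a Python predicate. The `constraint` and `sample_from_lm` functions are hypothetical stand-ins, not part of any library API.

```python
import json
import random

# A generic Boolean constraint: any Python predicate over a complete generation.
def constraint(text: str) -> bool:
    # Illustrative example: the output must be valid JSON.
    try:
        json.loads(text)
        return True
    except ValueError:
        return False

def sample_from_lm() -> str:
    # Stand-in for drawing one complete generation from a language model.
    return random.choice(['{"ok": true}', "not json", '["a", "b"]', "{broken"])

# The naive baseline: rejection sampling. It exactly samples the LM's
# distribution conditioned on the constraint, but is wasteful when the
# constraint is rarely satisfied -- which motivates SMC-style methods.
def rejection_sample(max_tries: int = 10_000) -> str:
    for _ in range(max_tries):
        x = sample_from_lm()
        if constraint(x):
            return x
    raise RuntimeError("constraint never satisfied")

print(rejection_sample())
```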
Reposted by Alex Lew
joaoloula.bsky.social
#ICLR2025 Oral

How can we control LMs using diverse signals such as static analyses, test cases, and simulations?

In our paper “Syntactic and Semantic Control of Large Language Models via Sequential Monte Carlo” (w/ @benlipkin.bsky.social,
@alexlew.bsky.social, @xtimv.bsky.social) we:
alexlew.bsky.social
I'd love to see these ideas migrated to Gen.jl! But there are some technical questions about how best to make that work.
alexlew.bsky.social
Hi, thanks for your interest! The pedagogical implementations can be found at:

Julia: github.com/probcomp/ADE...

Haskell: github.com/probcomp/ade...

The (less pedagogical, more performant) JAX implementation is still under active development, led by McCoy Becker.
alexlew.bsky.social
- Daphne Koller had arguably the first PPL paper about inference-in-Bayesian-models-cast-as-programs
- Alexandra Silva has great work on semantics, static analysis, and verification of probabilistic & non-det. progs
- Annabelle McIver does too
- Nada Amin has cool recent papers on PPL semantics
alexlew.bsky.social
DeepSeek's implementation isn't public, and maybe I'm misinterpreting their paper. But the TRL reimplementation does appear to follow this logic. github.com/huggingface/...

Curious what people think we should make of this!

8/8
Link card: trl/trl/trainer/grpo_trainer.py at 55e680e142d88e090dcbf5a469eab1ebba28ddef · huggingface/trl (github.com) - "Train transformer language models with reinforcement learning."
alexlew.bsky.social
The usual KL penalty says: try not to wander outside the realm of sensible generations [as judged by our pretrained model].

This penalty says: try not to lose any of the behaviors present in the pretrained model.

Which is a bit strange as a fine-tuning objective.

7/
alexlew.bsky.social
When we differentiate their (Schulman's) estimator, pi_ref comes back into the objective, but in a new role.

Now, the objective has a CrossEnt(pi_ref, pi_theta) term. KL(P,Q) = CrossEnt(P,Q) - Entropy(P), so this is related to KL, but note the direction of KL is reversed.

6/
**Mathematical formulation of an alternative KL estimator and its gradient.**  

The alternative KL estimator is defined as:

\[
\widehat{KL}_\theta(x) := \frac{\pi_{\text{ref}}(x)}{\pi_\theta(x)} + \log \pi_\theta(x) - \log \pi_{\text{ref}}(x) - 1
\]

From this, it follows that:

\[
\mathbb{E}_{x \sim \pi_{\text{old}}} [\nabla_\theta \widehat{KL}_\theta(x)] = \nabla_\theta \mathbb{E}_{x \sim \pi_{\text{old}}} \left[ \frac{\pi_{\text{ref}}(x)}{\pi_\theta(x)} + \log \pi_\theta(x) \right]
\]

Approximating when \(\pi_{\text{old}} \approx \pi_\theta\), we get:

\[
\approx \nabla_\theta \mathbb{E}_{x \sim \pi_{\text{ref}}} [-\log \pi_\theta(x)] + \nabla_\theta \mathbb{E}_{x \sim \pi_{\text{old}}} [\log \pi_\theta(x)]
\]

The annotated explanation in purple states that this results in:

\[
\text{CrossEnt}(\pi_{\text{ref}}, \pi_\theta) - \text{CrossEnt}(\pi_{\text{old}}, \pi_\theta)
\] 

where \(\text{CrossEnt}(\cdot, \cdot)\) denotes cross-entropy.

----
alt text generated by ChatGPT
alexlew.bsky.social
Interestingly, if they were *not* using this estimator, and instead using the standard estimator, pi_ref would not affect the gradient at all!

More evidence that there's something odd about their approach. And maybe one reason they turned to Schulman's estimator.

5/
**Mathematical explanation of the standard KL estimator and its gradient.**  

The standard KL estimator is defined as:

\[
\widehat{KL}_\theta(x) := \log \pi_\theta(x) - \log \pi_{\text{ref}}(x)
\]

From this, it follows that:

\[
\mathbb{E}_{x \sim \pi_{\text{old}}} [\nabla_\theta \widehat{KL}_\theta(x)] = \nabla_\theta \mathbb{E}_{x \sim \pi_{\text{old}}} [\log \pi_\theta(x)]
\]

The annotated explanation in purple states that this term represents the *negative cross-entropy from \(\pi_{\text{old}}\) to \(\pi_\theta\)*.

----
alt text automatically generated by ChatGPT
alexlew.bsky.social
This means GRPO is not optimizing the usual objective. What objective *is* it optimizing? Well, it depends on the particular KL estimator they are using.

A few people have noticed that GRPO uses a non-standard KL estimator, from a blog post by Schulman.

4/
Text from the GRPO paper:

And different from the KL penalty term used in (2), we estimate the KL divergence with the following unbiased estimator (Schulman, 2020):

\[
\text{KL}\left(\pi_\theta \,\|\, \pi_{\text{ref}}\right) = \frac{\pi_{\text{ref}}(o_{i,t} \mid q, o_{i,<t})}{\pi_\theta(o_{i,t} \mid q, o_{i,<t})} - \log \frac{\pi_{\text{ref}}(o_{i,t} \mid q, o_{i,<t})}{\pi_\theta(o_{i,t} \mid q, o_{i,<t})} - 1
\]

which is guaranteed to be positive.
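For concreteness, a minimal sketch of this estimator (Schulman's "k3" form) with a numerical check of the two properties at play: each term is nonnegative, and its expectation under samples from pi_theta matches the true KL. Variable names are illustrative, not taken from any particular codebase.

```python
import torch

# Schulman's "k3" estimator of KL(pi_theta || pi_ref), computed from
# log-probabilities of sampled tokens.
def k3_kl(logp_theta: torch.Tensor, logp_ref: torch.Tensor) -> torch.Tensor:
    log_ratio = logp_ref - logp_theta          # log( pi_ref / pi_theta )
    return torch.exp(log_ratio) - log_ratio - 1.0

# Sanity checks on a toy categorical distribution.
logits_theta = torch.tensor([1.0, 0.0, -1.0])
logits_ref = torch.tensor([0.0, 0.5, 0.0])
p_theta = torch.softmax(logits_theta, dim=0)
p_ref = torch.softmax(logits_ref, dim=0)

x = torch.multinomial(p_theta, 10_000, replacement=True)   # x ~ pi_theta
est = k3_kl(torch.log(p_theta)[x], torch.log(p_ref)[x])

true_kl = torch.sum(p_theta * (torch.log(p_theta) - torch.log(p_ref)))
print("every term >= 0:", bool((est >= 0).all()))
print("mean of estimator:", est.mean().item(), " true KL:", true_kl.item())
```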
alexlew.bsky.social
GRPO instead directly differentiates the KL estimator, evaluated on samples taken from pi_old (the LM before this update).

But the point of policy gradient is that you can't just "differentiate the estimator": you need to account for the gradient of the sampling process.

3/
**Mathematical formulation of the GRPO (Group-Relative Policy Optimization) objective and its gradient.**  

The objective function is defined as:

\[
J_{\text{GRPO}}(\theta) = \mathbb{E}_{x \sim \pi_\theta} [R_\theta(x)] - \beta \cdot \mathbb{E}_{x \sim \pi_{\text{old}}} [\widehat{KL}_\theta(x)]
\]

The gradient of this objective is:

\[
\nabla J_{\text{GRPO}}(\theta) = \nabla_\theta \mathbb{E}_{x \sim \pi_\theta} [R_\theta(x)] - \beta \cdot \mathbb{E}_{x \sim \pi_{\text{old}}} [\nabla_\theta \widehat{KL}_\theta(x)]
\]

The annotated explanations in purple indicate that the first term is *unbiasedly estimated via the group-relative policy gradient*, while the second term is *not* the derivative of the KL divergence, even when \(\pi_{\text{old}} = \pi_\theta\).

----
(alt text automatically generated by ChatGPT)
alexlew.bsky.social
RL for LMs often introduces a KL penalty term, to balance the "maximize reward" objective with an incentive to stay close to some reference model.

One way to implement this is to fold the penalty into the reward, so that E[R~] = E[R] - (KL term). Then you can apply standard RL (e.g., policy gradient).

2/
**Mathematical expression describing KL-penalized reinforcement learning objective.**  
The objective function is given by:

\[
J(\theta) = \mathbb{E}_{x \sim \pi_\theta} [R_\theta(x)] - \beta \cdot D_{\text{KL}}(\pi_\theta, \pi_{\text{ref}}) 
\]

Rewritten as:

\[
J(\theta) = \mathbb{E}_{x \sim \pi_\theta} [\tilde{R}_\theta(x)]
\]

where:

\[
\tilde{R}_\theta(x) := R_\theta(x) - \beta \cdot \widehat{KL}_\theta(x)
\]

\[
\widehat{KL}_\theta(x) := \log \pi_\theta(x) - \log \pi_{\text{ref}}(x)
\]

Annotations in purple indicate that \( R_\theta(x) \) represents the reward, and \( \widehat{KL}_\theta(x) \) is an unbiased estimator of the KL divergence.

----
(alt text generated by ChatGPT)
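A toy sketch of the setup in this post: a categorical policy trained by REINFORCE on a reward shaped as R~(x) = R(x) - beta * (log pi_theta(x) - log pi_ref(x)). The 4-action environment, the learning rate, and beta are arbitrary illustrative choices.

```python
import torch

torch.manual_seed(0)
beta = 0.1

theta = torch.zeros(4, requires_grad=True)        # policy logits
ref_logits = torch.tensor([1.0, 0.0, 0.0, -1.0])  # frozen reference policy
reward = torch.tensor([0.0, 1.0, 0.0, 0.0])       # reward for each action

opt = torch.optim.SGD([theta], lr=0.1)
for step in range(200):
    probs = torch.softmax(theta, dim=0)
    ref_logp = torch.log_softmax(ref_logits, dim=0)

    x = torch.multinomial(probs.detach(), 64, replacement=True)  # x ~ pi_theta
    logp = torch.log(probs)[x]

    # Shaped reward: R~(x) = R(x) - beta * (log pi_theta(x) - log pi_ref(x)).
    r_tilde = reward[x] - beta * (logp.detach() - ref_logp[x])

    # REINFORCE: grad E[R~] estimated by the mean of R~(x) * grad log pi_theta(x).
    loss = -(r_tilde.detach() * logp).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final policy:", torch.softmax(theta, dim=0).detach())
```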
alexlew.bsky.social
@xtimv.bsky.social and I were just discussing this interesting comment in the DeepSeek paper introducing GRPO: a different way of setting up the KL loss.

It's a little hard to reason about what this does to the objective. 1/
Text from the GRPO paper:

Also note that, instead of adding KL penalty in the reward, GRPO regularizes by directly adding the KL divergence between the trained policy and the reference policy to the loss, avoiding complicating the calculation of the advantage.
alexlew.bsky.social
Yeah — would be interesting to know if the pattern holds for today’s larger models! (This paper was done 1.5 years ago, in academia, using open models)
alexlew.bsky.social
Furthermore, regardless of training procedure, the models are still autoregressive probabilistic sequence models, so they can be understood as optimal “autocompleters” for *some* data distribution.

“If this is our conversation so far, what word would an assistant probably say next?”
alexlew.bsky.social
Hm, I think the base LMs are quite interesting. From the DPO paper: sampling 128 completions from a base model, and then selecting the sample with highest reward under the RLHF reward model, performs similarly to actual RLHF.
A plot from the Direct Preference Optimization paper, comparing various methods of aligning LMs to preference data.
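A generic best-of-n sketch of the procedure described above: draw n completions from a base model and keep the one the reward model ranks highest. Both `sample_from_base_model` and `reward_model` are hypothetical stand-ins.

```python
import random

def sample_from_base_model(prompt: str) -> str:
    # Stand-in for drawing one completion from a base LM.
    return prompt + " " + random.choice(["answer A", "answer B", "answer C"])

def reward_model(prompt: str, completion: str) -> float:
    # Illustrative scoring rule only.
    return float(len(completion)) + random.random()

def best_of_n(prompt: str, n: int = 128) -> str:
    candidates = [sample_from_base_model(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: reward_model(prompt, c))

print(best_of_n("Explain KL divergence in one sentence.", n=128))
```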