Ben Eysenbach
@ben-eysenbach.bsky.social
210 followers 1 following 6 posts
Assistant professor at Princeton CS working on reinforcement learning and AI/ML. Site: https://ben-eysenbach.github.io/ Lab: https://princeton-rl.github.io/
Reposted by Ben Eysenbach
dataonbrainmind.bsky.social
🚨 Excited to announce our #NeurIPS2025 Workshop: Data on the Brain & Mind

📣 Call for: Findings (4- or 8-page) + Tutorials tracks

🎙️ Speakers include @dyamins.bsky.social @lauragwilliams.bsky.social @cpehlevan.bsky.social

🌐 Learn more: data-brain-mind.github.io
ben-eysenbach.bsky.social
New research directions:
* model-based RL with NF models,
* goal/language-conditioned NF foundation policies,
* NFs for collocation-based planning,
* goal-conditioned NF value functions (as control barrier functions, as Lyapunov functions).
👆Join/scoop us -- we can't do it all!
ben-eysenbach.bsky.social
2/ Much of my past research is about avoiding density estimation in RL, because I'd assumed it was difficult and fickle. But if NFs make high-dimensional density estimation easy, there are lots of new RL algorithms to be developed:
ben-eysenbach.bsky.social
Check out @raj-ghugare.bsky.social's new paper on the surprising effectiveness of normalizing flows (NF) in RL 🚀

This project changed my mind in 2 ways:
1/ Diffusion policies, flow-models, and EBMs have become ubiquitous in RL. Turns out NFs can perform as well -- no ODEs/SDEs required!
raj-ghugare.bsky.social
Normalizing Flows (NFs) check all the boxes for RL: exact likelihoods (imitation learning), efficient sampling (real-time control), and variational inference (Q-learning)! Yet they are overlooked in favor of more expensive and less flexible contemporaries like diffusion models.

Are NFs fundamentally limited?
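To make those three properties concrete, here is a minimal PyTorch sketch (illustrative only, not the paper's code) of a single affine-coupling flow: the forward pass gives an exact log-likelihood via the change-of-variables formula, and sampling is a single inverse pass with no ODE/SDE solve. All names, sizes, and data below are placeholder assumptions.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP-style coupling layer; real flows stack several with permutations."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        # x -> z, returning log|det J| so likelihoods are exact
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)  # keep scales bounded for stability
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=-1), s.sum(dim=-1)

    def inverse(self, z):
        # z -> x in one pass: cheap sampling, no ODE/SDE integration
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(z1).chunk(2, dim=-1)
        s = torch.tanh(s)
        x2 = (z2 - t) * torch.exp(-s)
        return torch.cat([z1, x2], dim=-1)

dim = 4                                            # placeholder action dimension
flow = AffineCoupling(dim)
base = torch.distributions.Normal(torch.zeros(dim), torch.ones(dim))

actions = torch.randn(8, dim)                      # dummy batch of demonstration actions
z, logdet = flow(actions)
log_prob = base.log_prob(z).sum(-1) + logdet       # exact log-likelihood (imitation learning)
samples = flow.inverse(base.sample((8,)))          # one-pass sampling (real-time control)
```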
ben-eysenbach.bsky.social
While we still don't understand precisely why depth helps so much, the benefits seem correlated with exploration. Thought experiment: What if the answer to the exploration problem in RL were to just increase network depth?
ben-eysenbach.bsky.social
tldr: increase the depth of your RL networks by several orders of magnitude.

Our new paper shows that very very deep networks are surprisingly useful for RL, if you use resnets, layer norm, and self-supervised RL!

Paper, code, videos: wang-kevin3290.github.io/scaling-crl/
kevin-wang3290.bsky.social
1/ While most RL methods use shallow MLPs (~2–5 layers), we show that scaling up to 1,000 layers for contrastive RL (CRL) can significantly boost performance, with gains ranging from 2x to 50x across a diverse suite of robotic tasks.

Webpage+Paper+Code: wang-kevin3290.github.io/scaling-crl/
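For intuition about what a 1,000-layer network looks like here, below is a minimal sketch of a pre-LayerNorm residual MLP block (the thread mentions resnets + layer norm). This is an assumed illustration; the block design, widths, and depths used in the paper may differ.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Pre-LayerNorm residual MLP block; the skip connection keeps gradients usable at extreme depth."""
    def __init__(self, width):
        super().__init__()
        self.norm = nn.LayerNorm(width)
        self.ff = nn.Sequential(
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width),
        )

    def forward(self, x):
        return x + self.ff(self.norm(x))

def deep_encoder(in_dim, width=256, n_blocks=64):
    # Each block adds two linear layers, so reaching hundreds or
    # ~1,000 layers is just a matter of a larger n_blocks.
    layers = [nn.Linear(in_dim, width)]
    layers += [ResidualBlock(width) for _ in range(n_blocks)]
    layers += [nn.LayerNorm(width)]
    return nn.Sequential(*layers)

encoder = deep_encoder(in_dim=17)                  # 17 = placeholder state dimension
features = encoder(torch.randn(4, 17))             # batch of 4 dummy states
```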
ben-eysenbach.bsky.social
Excited to share new work led by @vivekmyers.bsky.social and @crji.bsky.social that proves you can learn to reach distant goals by training solely on nearby goals. The key idea is a new form of invariance, which implies generalization with respect to the horizon.
vivekmyers.bsky.social
Reinforcement learning agents should be able to improve upon behaviors seen during training.
In practice, RL agents often struggle to generalize to new long-horizon behaviors.
Our new paper studies *horizon generalization*, the degree to which RL algorithms generalize to reaching distant goals. 1/
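A rough way to write down the invariance idea (my paraphrase in generic notation, not the paper's exact definitions): a goal-conditioned policy is unchanged when routed through an optimal intermediate waypoint, where d denotes some goal-reaching cost. A policy with this property that is only trained on nearby goals can, in principle, compose those short behaviors to reach distant ones.

```latex
% Illustrative paraphrase only; see the paper for the precise definitions.
\pi(a \mid s, g) \;\approx\; \pi\bigl(a \mid s,\, w^{*}(s, g)\bigr),
\qquad
w^{*}(s, g) \in \arg\min_{w} \bigl[ d(s, w) + d(w, g) \bigr]
```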