Hanlin Zhang
@hlzhang109.bsky.social
23 followers 43 following 11 posts
CS PhD student @Harvard https://hanlin-zhang.com
Posts Media Videos Starter Packs
hlzhang109.bsky.social
✅ Open-source everything — models, data, training, and evaluation pipeline

✅ Maintain the EvoLM model family with clear data provenance

✅ Support the community in extending this foundation for future LLM research
EvoLM: In Search of Lost Language Model Training Dynamics
Modern language model (LM) training has been divided into multiple stages, making it difficult for downstream developers to evaluate the impact of design choices made at each stage. We present EvoLM, ...
arxiv.org
hlzhang109.bsky.social
We seek to:

✅ Build a fully transparent and reproducible model suite for studying LM training

✅ Quantify how each training phase contributes to upstream cloze task performance and downstream generative task performance, considering both in-domain and out-of-domain settings
EvoLM: In Search of Lost Language Model Training Dynamics
Modern language model (LM) training has been divided into multiple stages, making it difficult for downstream developers to evaluate the impact of design choices made at each stage. We present EvoLM, ...
arxiv.org
hlzhang109.bsky.social
Introducing EvoLM, a model suite with 100+ decoder-only LMs (1B/4B) trained from scratch, across four training stages —

🟦 Pre-training
🟩 Continued Pre-Training (CPT)
🟨 Supervised Fine-Tuning (SFT)
🟥 Reinforcement Learning (RL)
EvoLM: In Search of Lost Language Model Training Dynamics
Modern language model (LM) training has been divided into multiple stages, making it difficult for downstream developers to evaluate the impact of design choices made at each stage. We present EvoLM, ...
arxiv.org
hlzhang109.bsky.social
New work [JSKZ25] w/ Jikai, Vasilis,
@shamkakade.bsky.social .

We introduce new formulations and tools for evaluating LM capabilities, which help explain observations of post-training behaviors of Qwen-series models.

More details:

- hanlin-zhang.com/causal-capab...
- x.com/_hanlin_zhan...
hlzhang109.bsky.social
[4/4] Prompt injection can extract private datastore content—verbatim—from RAG:

– Black-box attack can leak 41% of a book with just 100 queries
– Vulnerability grows with model size and instruction tuning
– Mitigation: eliminate position bias (via PINE)+system prompts

(arxiv.org/abs/2402.17840)
Follow My Instruction and Spill the Beans: Scalable Data Extraction from Retrieval-Augmented Generation Systems
Retrieval-Augmented Generation (RAG) improves pre-trained models by incorporating external knowledge at test time to enable customized adaptation. We study the risk of datastore leakage in Retrieval-I...
arxiv.org
hlzhang109.bsky.social
[3/4] LMs can suffer from position bias—they favor content based on where it appears. This can hurt reasoning and evaluation.
We introduce PINE, a training-free method that eliminates position bias via bidirectional attention+reordering docs by attention scores.
(arxiv.org/abs/2407.01100)
Eliminating Position Bias of Language Models: A Mechanistic Approach
Position bias has proven to be a prevalent issue of modern language models (LMs), where the models prioritize content based on its position within the given context. This bias often leads to unexpecte...
arxiv.org
hlzhang109.bsky.social
[2/4] Can LLMs self-improve by verifying their own outputs? This paper says yes—with a twist. The key lies in a measure: the Generation-Verification Gap (GV-Gap) that scales with pretraining FLOPs in a log-linear trend.
Oral @yus167.bsky.social 6A: Sat 26 Apr 4:18-4:30.
(arxiv.org/abs/2412.02674)
Mind the Gap: Examining the Self-Improvement Capabilities of Large Language Models
Self-improvement is a mechanism in Large Language Model (LLM) pre-training, post-training and test-time inference. We explore a framework where the model verifies its own outputs, filters or reweights...
arxiv.org
hlzhang109.bsky.social
[1/4] Modern large-scale LM training is limited not just by compute, but by data movement—a classic Von Neumann bottleneck (research.ibm.com/blog/why-von...).

Scaling batch size reduces optimization steps, but only up to a point—the Critical Batch Size (CBS).
How the von Neumann bottleneck is impeding AI computing
The von Neumann architecture, which separates compute and memory, is perfect for conventional computing. But it creates a data traffic jam for AI.
research.ibm.com
hlzhang109.bsky.social
Highlights from #ICLR2025 — a brief thread 🧵
Reposted by Hanlin Zhang
blackhc.bsky.social
I want to reshare @brandfonbrener.bsky.social's @NeurIPSConf 2024 paper on CoLoR-Filter: A simple yet powerful method for selecting high-quality data for language model pre-training!

With @hlzhang109.bsky.social @schwarzjn.bsky.social @shamkakade.bsky.social
Reposted by Hanlin Zhang
shamkakade.bsky.social
(1/n) 💡How can we speed up the serial runtime of long pre-training runs? Enter Critical Batch Size (CBS): the tipping point where the gains of data parallelism balance with diminishing efficiency. Doubling batch size halves the optimization steps—until we hit CBS, beyond which returns diminish.
Reposted by Hanlin Zhang
yus167.bsky.social
LLM self-improvement has critical implications in synthetic data, post-training and test-time inference. To understand LLMs' true capability of self-improvement, we perform large-scale experiments with multiple families of LLMs, tasks and mechanisms. Here is what we found: (1/9)