Yinglun Zhu
yinglunzhu.bsky.social
Assistant Prof @ UC Riverside. Research on Efficient ML, RL, and LLMs. CS PhD @ UW Madison.

yinglunz.com
Pinned
Super excited to share Test-Time Matching (TTM), an iterative, self-improving algorithm that unlocks substantial compositional reasoning capabilities in multimodal models.

TTM enables SigLIP-B16 (~0.2B params) to outperform GPT-4.1 on MMVP-VLM, establishing a new SOTA.
October 31, 2025 at 6:03 PM
🚀Excited to share our new paper:

Online Finetuning Decision Transformers with Pure RL Gradients

RL drives reasoning in LLMs—but remains underexplored for online finetuning of Decision Transformers (DTs), where most methods still rely mainly on supervised objectives.

Why?
October 14, 2025 at 7:04 PM
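For contrast with the supervised objectives the post mentions, a pure policy-gradient update looks like this. This is a textbook REINFORCE sketch on a toy log-linear policy with two actions, not the paper's Decision-Transformer-specific method.

```python
import math

# Generic REINFORCE update for a log-linear policy over two actions:
# the gradient is reward-weighted grad log pi(a|s), with no supervised
# (e.g. return-conditioned behavior cloning) term.

def softmax2(theta, s):
    z0, z1 = theta[0] * s, theta[1] * s
    m = max(z0, z1)
    e0, e1 = math.exp(z0 - m), math.exp(z1 - m)
    return e0 / (e0 + e1), e1 / (e0 + e1)

def reinforce_step(theta, s, a, reward, lr=0.1):
    probs = softmax2(theta, s)
    # grad log pi(a|s) for a log-linear policy: (1{a=k} - pi(k|s)) * s
    grad = [((1.0 if a == k else 0.0) - probs[k]) * s for k in range(2)]
    return [theta[k] + lr * reward * grad[k] for k in range(2)]
```

After a positively rewarded action, the update shifts probability mass toward that action, which is the defining behavior of a pure RL gradient.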
Sharing new paper: Towards Multimodal Active Learning: Efficient Learning with Limited Paired Data

We extend classical unimodal active learning to the multimodal setting with unaligned data, enabling data-efficient finetuning and pretraining of vision-language models such as CLIP and SigLIP.

1/3
October 10, 2025 at 6:03 PM
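One standard ingredient in active learning with limited paired data is uncertainty-based query selection. The sketch below is an assumed illustration (the `score` function and entropy criterion are generic stand-ins, not the paper's method): from an unpaired image pool, spend the pairing budget on the images whose caption match is most ambiguous.

```python
import math

# Illustrative uncertainty sampling for multimodal active learning:
# request pairings for the images where the model's caption-matching
# distribution has the highest entropy.

def entropy(ps):
    return -sum(p * math.log(p) for p in ps if p > 0)

def select_queries(score, image_pool, caption_pool, budget):
    def match_entropy(img):
        logits = [score(img, c) for c in caption_pool]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        return entropy([e / z for e in exps])
    # Spend the limited pairing budget on the most ambiguous images.
    return sorted(image_pool, key=match_entropy, reverse=True)[:budget]
```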
🚀Excited to share our new paper: Strategic Scaling of Test-Time Compute: A Bandit Learning Approach.

We turn test-time compute allocation into a bandit learning problem, achieving:
✅ +11.10% on MATH-500
✅ +7.41% on LiveCodeBench

Paper: arxiv.org/pdf/2506.12721

(1/3)
July 1, 2025 at 6:45 PM
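The bandit framing in this post can be illustrated with a standard UCB1 loop: treat each problem (or compute strategy) as an arm and spend the next unit of test-time compute where the optimistic estimate of payoff is highest. This is a generic UCB1 sketch under assumed arm/reward semantics, not the paper's exact formulation.

```python
import math

# Minimal UCB1 sketch for allocating test-time compute (e.g. extra
# sampled solutions) across arms; reward_fn(arm) might return 1.0 if
# the extra sample helped and 0.0 otherwise.

def ucb_allocate(reward_fn, n_arms, budget, c=2.0):
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(budget):
        if t < n_arms:
            arm = t  # pull each arm once to initialize
        else:
            # Pick the arm with the best mean-plus-exploration-bonus.
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(c * math.log(t + 1) / counts[a]))
        r = reward_fn(arm)
        counts[arm] += 1
        sums[arm] += r
    return counts
```

Over the budget, compute concentrates on the arms that keep paying off, while the log-scaled bonus guarantees every arm is still revisited occasionally.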
Reposted by Yinglun Zhu
There is a ton of interest in the question of whether AI can be funny: www.bbc.com/future/artic.... Our paper at NeurIPS investigates the humor generation capabilities of the latest and greatest AI models using one of the world’s largest humor datasets! arxiv.org/pdf/2406.10522
December 4, 2024 at 2:00 PM
I’m recruiting multiple PhD students for Fall 2025 at UCR! If you’re interested in working on efficient ML, RL, and LLMs, please apply to the UCR CS/EE PhD program.

Please visit yinglunz.com for detailed information on research directions and contact instructions.
December 3, 2024 at 7:52 PM