Deqing Fu
@deqing.bsky.social
180 followers 470 following 14 posts
CS PhD Student @USC. deqingfu.github.io
deqing.bsky.social
I gave a talk earlier today at the Stanford NLP seminar. Here are the slides if you are interested: deqingfu.github.io/_docs/202505...
Reposted by Deqing Fu
billzhu.bsky.social
At @naaclmeeting.bsky.social this week! I’ll be presenting our work on LLM domain induction with @thomason.bsky.social on Thu (5/1) at 4pm in Hall 3, Section I.

Would love to connect and chat about LLM planning, reasoning, AI4Science, multimodal stuff, or anything else. Feel free to DM!
deqing.bsky.social
It seems I haven't posted anything research-related on this platform. Starting to do that now.
bsky.app/profile/deqi...
deqing.bsky.social
I would like to thank my intern mentor Lawrence Chen from Meta, and my other peers Tong Xiao, Rui Wang, Guan Pang, and Pengchuan Zhang. Big thanks to my lab mate @billzhu.bsky.social for valuable discussions and to my advisor @robinjia.bsky.social for thoughtful input.
deqing.bsky.social
Finally, the token-level annotations given by the TLDR model can help human annotators fix image captions that are slightly off. In fact, it makes human annotation 3 times faster!
deqing.bsky.social
Next, something interesting: after training the TLDR model, one can simply remove the reward model head and re-attach the original language model head to get a vision-language model again. We show that these new models perform better than the originals.
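To make that head swap concrete, here is a minimal PyTorch sketch of the idea; the attribute names (`reward_head`, `lm_head`) and the toy model are assumptions for illustration, not the paper's actual code:

```python
import copy
import torch.nn as nn

class ToyVLM(nn.Module):
    # Stand-in for a trained TLDR model (hypothetical structure).
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(16, 16)
        self.reward_head = nn.Linear(16, 1)   # per-token reward head

def restore_language_model(model: nn.Module, original_lm_head: nn.Module) -> nn.Module:
    # Swap heads: remove the reward head, re-attach the LM head,
    # and the backbone can be used as a plain vision-language model again.
    model = copy.deepcopy(model)
    del model.reward_head
    model.lm_head = original_lm_head
    return model

vlm = restore_language_model(ToyVLM(), nn.Linear(16, 32000))
print(hasattr(vlm, "reward_head"), hasattr(vlm, "lm_head"))  # False True
```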
deqing.bsky.social
TLDR has many uses. First, it can serve as a hallucination-rate evaluation metric. As shown in the table, GPT-4o is still the best vision-language model at the token level, while open-weight models such as Llama-3.2-90B are catching up at the sentence and response levels.
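As an illustration of how per-token scores could be turned into a hallucination-rate metric, here is a hedged sketch; the thresholding rule and the names are assumptions, not the paper's exact definition:

```python
import torch

def hallucination_rate(token_rewards: torch.Tensor, threshold: float = 0.5) -> float:
    # token_rewards: per-token scores in [0, 1]; tokens scored below
    # `threshold` are counted as hallucinated (illustrative definition).
    return (token_rewards < threshold).float().mean().item()

print(hallucination_rate(torch.tensor([0.9, 0.8, 0.2, 0.95])))  # 0.25
```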
deqing.bsky.social
TLDR is trained on synthetic hard negatives generated via a perturbation-based method. The architecture is very simple: instead of applying the reward model head only to the last token, as many RMs do, TLDR applies the reward model head to every token.
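A minimal PyTorch sketch of that per-token head, with hypothetical names (`PerTokenRewardHead`, `hidden_states`) standing in for the real architecture:

```python
import torch
import torch.nn as nn

class PerTokenRewardHead(nn.Module):
    """Minimal sketch: score every token, not just the last one."""
    def __init__(self, hidden_size: int):
        super().__init__()
        # Same kind of linear head a standard RM uses, applied per token.
        self.reward_head = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size) from a VLM backbone.
        # A typical RM would score only hidden_states[:, -1]; here every
        # token position gets its own scalar score.
        return self.reward_head(hidden_states).squeeze(-1)  # (batch, seq_len)

# Usage with dummy activations:
head = PerTokenRewardHead(hidden_size=4096)
h = torch.randn(2, 32, 4096)            # batch of 2 sequences, 32 tokens each
token_rewards = torch.sigmoid(head(h))  # per-token scores in [0, 1]
```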
deqing.bsky.social
Excited to share that my intern work at Meta GenAI has been accepted to @iclr-conf.bsky.social #ICLR2025

Introducing TLDR: Token-Level Detective Reward Model For Large Vision Language Models.

TLDR provides fine-grained annotations to each text token.

🔗arXiv: arxiv.org/abs/2410.04734
deqing.bsky.social
I think it may come from pretraining data and how numbers are presented by humans. We are still investigating how/why these features emerge from LLMs and will keep you updated with any new findings!
Reposted by Deqing Fu
robinjia.bsky.social
I'll be at #NeurIPS2024! My group has papers analyzing how LLMs use Fourier Features for arithmetic and how TFs learn higher-order optimization for ICL (led by @deqing.bsky.social), plus workshop papers on backdoor detection and LLMs + PDDL (led by @billzhu.bsky.social)
deqing.bsky.social
Can you add me please? Thanks!
deqing.bsky.social
Thanks for making this pack. Can you add me please? Thank you!
Reposted by Deqing Fu
mattf1n.bsky.social
USC NLP folks are on Bluesky!
Follow my amazing colleagues here

go.bsky.app/KUwSZ6W
deqing.bsky.social
Happy to join a new social media platform. I work on the theory and science behind modern LLMs, and on how to make them more robust and explainable.