Matthew Finlayson
@mattf1n.bsky.social
4.6K followers · 610 following · 61 posts
NLP PhD @ USC Previously at AI2, Harvard mattf1n.github.io
mattf1n.bsky.social
The project was led by Murtaza Nazir, an independent researcher with serious engineering chops. It's his first paper. He's a joy to work with and is applying to PhDs. Hire him!

It's great to finally collab with Jack Morris, and a big thanks to @swabhs.bsky.social and Xiang Ren for advising.
mattf1n.bsky.social
Our technical insight is that logprob vectors can be linearly encoded as much smaller vectors. We make prompt stealing both *more accurate* and *cheaper* by compactly encoding logprob outputs over multiple generation steps, resulting in massive gains over previous SoTA methods.
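For intuition, here is a minimal numpy sketch of that linear structure (the sizes and the random stand-in for the unembedding matrix are illustrative, not the paper's actual setup):

```python
import numpy as np

# Illustrative sizes: vocab size V is much larger than hidden size d.
V, d = 5_000, 128
rng = np.random.default_rng(0)
W = rng.normal(size=(V, d)) / np.sqrt(d)  # stand-in unembedding matrix
h = rng.normal(size=d)                    # stand-in final hidden state

logits = W @ h
logprobs = logits - np.logaddexp.reduce(logits)  # log-softmax

# logprobs = W @ h - logZ * ones(V), so every logprob vector lies in a
# (d+1)-dimensional subspace of R^V and can be re-encoded exactly as
# d+1 coefficients via least squares.
A = np.column_stack([W, np.ones(V)])
code, *_ = np.linalg.lstsq(A, logprobs, rcond=None)

assert np.allclose(A @ code, logprobs)    # lossless reconstruction
print(f"{V}-dim logprob vector -> {code.size}-dim code")
```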
mattf1n.bsky.social
We noticed that existing methods don't fully use LLM outputs:
either they ignore logprobs (text only), or they only use logprobs from a single generation step.

The problem is that next-token logprobs are big--the size of the entire LLM vocabulary *for each generation step*.
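Some illustrative arithmetic (hypothetical model sizes; real vocab and hidden dimensions vary):

```python
# Hypothetical sizes for illustration only.
vocab_size, hidden_size, steps = 32_000, 4_096, 16

raw     = vocab_size * steps         # raw logprob outputs: 512,000 floats
compact = (hidden_size + 1) * steps  # a linear encoding:    65,552 floats
print(f"{raw:,} -> {compact:,} floats ({raw / compact:.0f}x smaller)")
```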
mattf1n.bsky.social
When interacting with an AI model via an API, the API provider may secretly change your prompt or inject a system message before feeding it to the model.

Prompt stealing--also known as LM inversion--tries to reverse engineer the prompt that produced a particular LM output.
mattf1n.bsky.social
I didn't believe it when I first saw it, but:
We trained a prompt stealing model that gets >3x SoTA accuracy.
The secret is representing LLM outputs *correctly*

🚲 Demo/blog: mattf1n.github.io/pils
📄: arxiv.org/abs/2506.17090
🤖: huggingface.co/dill-lab/pi...
🧑‍💻: github.com/dill-lab/PILS
Reposted by Matthew Finlayson
digthatdata.bsky.social
I wish the ML community would stop trying to turn every technique into a brand name. Just give the thing a descriptive name and call it what it is.

Forced backronyms like this are counterproductive.
digthatdata.bsky.social
too distracted by this to read the actual content
mattf1n.bsky.social
It appears that the only fonts with optical sizes that work with pdflatex are the Computer/Latin Modern fonts. I would kill for a free pdflatex-compatible Times clone with optical sizes so my small text can look good in arXiv/conference submissions.
mattf1n.bsky.social
If you are writing a paper for #colm2025 and LaTeX keeps increasing your line height to accommodate things like superscripts, consider using $\smash{2^d}$, but beware of character overlaps.
Screenshot of inconsistent line height to make way for a superscript. Screenshot of text with consistent line height.
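A minimal LaTeX sketch of the pattern (whether the extra height is visible depends on the document class and line spacing):

```latex
% \smash hides the superscript's height from the line-spacing
% calculation, keeping lines uniform at the risk of overlaps.
\documentclass{article}
\begin{document}
Without smash, a tall bound like $2^d$ can push its line apart
from its neighbors.

With $\smash{2^d}$ the line spacing stays uniform, but the
superscript may now collide with the line above.
\end{document}
```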
mattf1n.bsky.social
This project was made feasible by the excellent open-source LLM training library @fairseq2.bsky.social; I highly recommend giving it a look! It made both SFT and DPO a piece of cake 🍰
mattf1n.bsky.social
6/ Our method is general, and we are excited to see how it might be used to better adapt LLMs to other tasks in the future.

A big shout-out to my collaborators at Meta: Ilia, Daniel, Barlas, Xilun, and Aasish (of whom only @uralik.bsky.social is on Bluesky)
mattf1n.bsky.social
5/ Training on self-demos, our model learns to better leverage the context to answer questions, and to refuse questions that it is likely to answer incorrectly. This results in consistent, large improvements across several knowledge-intensive QA tasks.
mattf1n.bsky.social
4/ To obtain self-demos, we generate candidate responses with an LLM, then use the same LLM to compare these candidates to the gold response, choosing the one that best matches it (or a refusal to answer). Thus we retain the gold supervision from the original responses while keeping the training data aligned with the model's own distribution.
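A schematic Python sketch of this recipe (the function names and the refusal handling are hypothetical stand-ins, not the paper's actual code):

```python
# `generate` and `pick_best` stand in for calls to the same LLM that
# is being adapted; the refusal string is a hypothetical detail.
REFUSAL = "I don't know."

def build_self_demo(generate, pick_best, context, gold, n_candidates=4):
    """Return a training pair whose response comes from the model itself."""
    # 1. Sample candidate responses from the model being adapted.
    candidates = [generate(context) for _ in range(n_candidates)]
    # 2. Let a refusal compete as a candidate, so the model can learn
    #    to abstain when none of its own answers match the gold one.
    candidates.append(REFUSAL)
    # 3. Use the same LLM to pick the candidate closest to the gold
    #    response, retaining the original gold supervision.
    best = pick_best(gold, candidates)
    # 4. Train on (context, best) instead of (context, gold).
    return context, best
```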
mattf1n.bsky.social
3/ OOD responses encourage the model to answer questions it does not know the answer to, and since retrievals are added post-hoc, the responses tend to ignore or even contradict the retrieved context. Instead of training on these low-quality responses, we use the LLM to generate "self-demos".
mattf1n.bsky.social
2/ A popular recipe for adapting LLMs for RAG involves adding retrievals post-hoc to an existing instruction-tuning dataset. The hope is that the LLM learns to leverage the added context to respond to instructions. Unfortunately, the gold responses in these datasets tend to be OOD for the model.
mattf1n.bsky.social
🧵 Adapting your LLM for new tasks is dangerous! A bad training set degrades models by encouraging hallucinations and other misbehavior. Our paper remedies this for RAG training by replacing gold responses with self-generated demonstrations. Check it out here: https://arxiv.org/abs/2502.10
mattf1n.bsky.social
Putting together an unofficial USC Beamer template, I noticed that the USC style guide lists 4 formats for “cardinal red”, but each of them is different:

PMS 201 C is #9D2235
CMYK: 7, 100, 65, 32 is #A1003D
RGB: 135, 27, 30 is #871B1E
HEX: #990000

Is this normal? The CMYK is especially egregious.
The USC style guide's list of formats for “cardinal” (see main post for list). The RGB and CMYK colors side by side; the CMYK is considerably pinker.
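A quick sanity check with naive conversions (ignoring the ICC color profiles a real print workflow would apply) reproduces the mismatch:

```python
# Naive device conversions; real CMYK-to-RGB goes through color
# profiles, which is part of why print and screen specs disagree.
def cmyk_to_hex(c, m, y, k):
    to_byte = lambda v: round(255 * (1 - v) * (1 - k))
    return "#{:02X}{:02X}{:02X}".format(to_byte(c), to_byte(m), to_byte(y))

def rgb_to_hex(r, g, b):
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

print(cmyk_to_hex(0.07, 1.00, 0.65, 0.32))  # -> #A1003D
print(rgb_to_hex(135, 27, 30))              # -> #871B1E
# Neither matches the listed HEX #990000 or Pantone 201 C (#9D2235).
```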
mattf1n.bsky.social
If you are registered for NeurIPS it should be available already online.
mattf1n.bsky.social
NeurIPS should make them available online after one month :)
mattf1n.bsky.social
In Vancouver for NeurIPS but don't have Taylor Swift tickets?

You can still spend the day going through our tutorial reading list:
cmu-l3.github.io/neurips2024-...

Tuesday December 10, 1:30-4:00pm @ West Exhibition Hall C, NeurIPS
A diagram demonstrating text generation with beam search. One of the paths reads “Taylor Swift is the only person to…”
mattf1n.bsky.social
Curious about all this inference-time scaling hype? Attend our NeurIPS tutorial: Beyond Decoding: Meta-Generation Algorithms for LLMs (Tue. 1:30)! We have a top-notch panelist lineup.

Our website: cmu-l3.github.io/neurips2024-...
Panelist photos: Rishabh Agarwal (Google, McGill), Noam Brown (OpenAI), Beidi Chen (CMU), Nouha Dziri (AI2), Jakob Foerster (Oxford, Meta)
mattf1n.bsky.social
😍 I went cycling there last year, what an amazing place
mattf1n.bsky.social
Check your data mixture. @hamishivi.bsky.social is probably secretly up-weighting Latin in Dolma
Reposted by Matthew Finlayson
hamishivi.bsky.social
What's that? A fully open LM competitive with Gemma and Qwen*?

Happy to have helped a bit with this release (Tulu 3 recipe used here)! OLMo-2 13B actually beats Tulu 3 8B on these evals, making it a SOTA fully open LM!!!

(*on the benchmarks we looked at, see tweet for more)
ai2.bsky.social
Ai2 @ai2.bsky.social · Nov 26
Meet OLMo 2, the best fully open language model to date, including a family of 7B and 13B models trained on up to 5T tokens. OLMo 2 outperforms other fully open models and competes with open-weight models like Llama 3.1 8B. As always, we released our data, code, recipes, and more 🎁
The OLMo 2 models sit at the Pareto frontier of training FLOPs vs model average performance.