Sebastian Dziadzio
@dziadzio.bsky.social
610 followers 890 following 56 posts
ELLIS PhD student in machine learning at IMPRS-IS. Continual learning at scale. sebastiandziadzio.com
Pinned
dziadzio.bsky.social
Here's a fledgling starter pack for the AI community in Tübingen. Let me know if you'd like to be added!

go.bsky.app/NFbVzrA
Tübingen AI
dziadzio.bsky.social
Yeah, mostly because GPT-5 needs to think for 20 seconds to come up with a name for a variable. It's good for bigger, self-contained features, but the bias for "reasoning" in the model router makes it downright unusable for smaller changes.
Reposted by Sebastian Dziadzio
adhirajghosh.bsky.social
🏆ONEBench accepted to ACL main! ✨
Stay tuned for the official leaderboard and real-time personalised benchmarking release!

If you’re attending ACL or are generally interested in the future of foundation model benchmarking, happy to talk!

#ACL2025NLP #ACL2025
@aclmeeting.bsky.social
adhirajghosh.bsky.social
🚨Looking to test your foundation model on an arbitrary and open-ended set of capabilities, not explicitly captured by static benchmarks? 🚨

Check out ✨ONEBench✨, where we show how sample-level evaluation is the solution.

🔎 arxiv.org/abs/2412.06745
dziadzio.bsky.social
Done! Sorry for the wait
Reposted by Sebastian Dziadzio
dziadzio.bsky.social
This has been a fun project with a great team: led by @vishaalurao.bsky.social and @confusezius.bsky.social, with core contributions from @bayesiankitten.bsky.social, and supervision by @zeynepakata.bsky.social, Samuel Albanie, and Matthias Bethge.
dziadzio.bsky.social
As usual, scaling matters!
🚀 Larger models benefit more from temporal merging than sequential finetuning.
🚀 Larger compute budgets allow temporal merging to match (and surpass!) multitask performance.
🚀 Best-in-TIME scales effectively across longer task sequences (50, 100).
Plots showing the scaling dynamics described in the text.
dziadzio.bsky.social
📌 The choice of merging technique doesn’t matter much.

In the temporal setting, complex merging techniques like TIES or Breadcrumbs offer only marginal gains compared to simpler ones like weight averaging.
A plot showing that different merging techniques perform similarly.
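For intuition, the simple baseline that holds up here is plain weight averaging: a parameter-wise mean over the experts. A minimal sketch, assuming each expert is a dict of parameter arrays with matching names (not the paper's actual code):

```python
# Plain weight averaging of expert checkpoints (illustrative sketch only).
def average_weights(experts):
    """experts: list of dicts mapping parameter name -> array, all with identical shapes."""
    names = experts[0].keys()
    return {name: sum(expert[name] for expert in experts) / len(experts) for name in names}
```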
dziadzio.bsky.social
📌 Initialization and deployment choices are crucial.

One strategy stands out—using exponential moving average for both initialization and deployment strikes the best balance between knowledge accumulation and zero-shot retention. We call this approach ✨Best-in-TIME✨
A plot showing that different initialization and deployment strategies lead to different results.
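Roughly, as a sketch (the alpha value and parameter layout are my assumptions, not the paper's exact recipe): keep a running exponential moving average of the experts, start each new finetuning run from it, and serve it as the deployed model.

```python
# EMA-style temporal merging: the running average is used both to initialize
# the next finetuning run and as the deployed model (illustrative sketch only).
def ema_update(ema_weights, new_expert, alpha=0.9):
    """Blend the newest expert into the running average, parameter by parameter."""
    return {name: alpha * ema_weights[name] + (1 - alpha) * new_expert[name]
            for name in ema_weights}

# ema = base model weights
# for each incoming task:
#     expert = finetune(ema, task)    # initialization axis: start from the EMA
#     ema = ema_update(ema, expert)   # merging over time
#     deploy(ema)                     # deployment axis: serve the EMA
```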
dziadzio.bsky.social
📌 Accounting for time is essential.

Standard merging struggles with the temporal dynamics. Replay and weighting schemes, which factor in the sequential nature of the problem, help (but only to a point).
A plot showing that offline merging underperforms with respect to a replay baseline.
dziadzio.bsky.social
Key insights:

📌 Accounting for time is essential.
📌 Initialization and deployment choices are crucial.
📌 The choice of merging technique doesn’t matter much.
dziadzio.bsky.social
The world keeps changing, and so should our models.

Enter TIME (Temporal Integration of Model Expertise), a unifying approach that considers:

1️⃣ Initialization
2️⃣ Deployment
3️⃣ Merging Techniques

We study these three axes on the large FoMo-in-Flux benchmark.
A schematic representation of the TIME framework.
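For a concrete picture, here is a minimal sketch of that loop with the three axes as swappable functions (the structure and names are illustrative, not the paper's implementation):

```python
# Illustrative temporal merging loop: tasks arrive sequentially, and the three
# axes (initialization, deployment, merging technique) are pluggable functions.
def temporal_merging(base_model, tasks, finetune, init_fn, deploy_fn, merge_fn):
    experts = []                 # experts created so far, in order of arrival
    deployed = base_model        # model currently being served
    for task in tasks:
        init_weights = init_fn(base_model, experts, deployed)  # where to start finetuning
        expert = finetune(init_weights, task)                  # train a new expert
        experts.append(expert)
        deployed = deploy_fn(base_model, experts, merge_fn)    # what to actually serve
    return deployed
```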
dziadzio.bsky.social
📄 New Paper: "How to Merge Your Multimodal Models Over Time?"

arxiv.org/abs/2412.06712

Model merging assumes all finetuned models are available at once. But what if they need to be created over time?

We study Temporal Model Merging through the TIME framework to find out!

🧵
How to Merge Your Multimodal Models Over Time?
Model merging combines multiple expert models - finetuned from a base foundation model on diverse tasks and domains - into a single, more capable model. However, most existing model merging approaches...
dziadzio.bsky.social
Come chat to us at NeurIPS about continual multimodal pretraining and some interesting follow-ups 👀
confusezius.bsky.social
😵‍💫 Continually pretraining large multimodal models to keep them up to date all the time is tough, covering everything from adapters, merging, and meta-scheduling to data design and more!

So I'm really happy to present our large-scale study at #NeurIPS2024!

Come drop by to talk about all that and more!
dziadzio.bsky.social
The changing of the guard ceremony in Vancouver is complete
Kickstand advertising a Taylor Swift pop-up store. Kickstand advertising a coffee shop to NeurIPS attendees.
dziadzio.bsky.social
I keep forgetting about the concert; yesterday I was like 'wow, people in Vancouver sure love sequins and cowboy boots'.
dziadzio.bsky.social
Whenever my "papers" tab group got lost in a Chrome crash, I felt nothing but relief.

The firehose is relentless, so over time my strategy became: skim in the moment if it's interesting and save to Zotero, otherwise close the tab. There is only the present. Important stuff will come back.
dziadzio.bsky.social
Yeah, I think we consistently underestimate how much stuff is out there on the Internet. You might think your question or image prompt is niche and original, but if you consider the distribution of Internet-scale datasets, you'd have to work very hard to even reach the tail.
dziadzio.bsky.social
If someone said "the algorithm" with no additional context, I'd think of the latter, but "an algorithm" for me is still the former. Interesting how the default meaning is shifting.
dziadzio.bsky.social
How I use LLMs when writing papers:
1. Write a sentence.
2. Copy it to an LLM for edits, add a prompt explaining in simple words what I'm trying to say.
3. Realise my simple word explanation is actually what I need.
4. Copy it over to the paper, move on to the next sentence.
dziadzio.bsky.social
Have you read Fables for Robots? I think it was only published in English as part of Mortal Engines. If you liked Cyberiad, you'll like this one too!