Sebastian Sanokowski
@sanokows.bsky.social
1.6K followers 470 following 18 posts
ELLIS PhD student at JKU Linz working on diffusion samplers and combinatorial optimization
Reposted by Sebastian Sanokowski
fses91.bsky.social
Happy to introduce 🔥LaM-SLidE🔥!

We show how trajectories of spatial dynamical systems can be modeled in latent space by

--> leveraging IDENTIFIERS.

📚Paper: arxiv.org/abs/2502.12128
💻Code: github.com/ml-jku/LaM-S...
📝Blog: ml-jku.github.io/LaM-SLidE/
1/n
sanokows.bsky.social
11/11 This is joint work with @willberghammer, @haoyu_wang66, @EnnemoserMartin, @HochreiterSepp, and @sebaleh. See you at #ICLR!
[Poster Link](iclr.cc/virtual/202...)
[Paper Link](arxiv.org/abs/2502.08696)
---
sanokows.bsky.social
10/11 🏆 Our method outperforms autoregressive approaches on Ising model benchmarks and opens new avenues for applying diffusion models to a wide range of scientific applications in discrete domains.
sanokows.bsky.social
9/11 📊 Due to the mass-covering property of the fKL, it excels at unbiased sampling. Conversely, the rKL is mode-seeking, making it ideal for combinatorial optimization (CO) as it achieves better solution quality with fewer samples.
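For reference, the two divergences being compared, in standard textbook form (notation not taken from the thread):

```latex
% Forward KL: expectation under the target p. q_theta is penalized wherever
% p has mass that q_theta misses, so minimizers cover all modes (mass-covering).
\mathrm{KL}(p \,\|\, q_\theta) = \mathbb{E}_{x \sim p}\!\left[\log \tfrac{p(x)}{q_\theta(x)}\right]
% Reverse KL: expectation under the model q_theta. q_theta is penalized for
% putting mass where p has little, so minimizers concentrate on a few
% high-probability modes (mode-seeking).
\mathrm{KL}(q_\theta \,\|\, p) = \mathbb{E}_{x \sim q_\theta}\!\left[\log \tfrac{q_\theta(x)}{p(x)}\right]
```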
sanokows.bsky.social
8/11 💡 𝐒𝐨𝐥𝐮𝐭𝐢𝐨𝐧 2: We address the limitations of the fKL by combining it with Neural Importance Sampling over samples from the diffusion sampler. This allows us to estimate the gradient of the fKL using Monte Carlo integration, making training more memory-efficient.
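A rough sketch of how such an importance-weighted fKL update could look; `sample_with_log_prob`, `forward_noise_pair`, and `log_prob_step` are hypothetical helpers, not the paper's actual API:

```python
import torch

def fkl_nis_step(model, energy, x_T, T, k):
    """Sketch: forward-KL training with self-normalized neural importance
    sampling. Target expectations are replaced by weighted expectations over
    the sampler's own samples, with weights w proportional to exp(-E(x)) / q_theta(x)."""
    with torch.no_grad():
        x0, log_q = model.sample_with_log_prob(x_T, T)   # hypothetical API
        w = torch.softmax(-energy(x0) - log_q, dim=0)    # self-normalized p/q weights

    loss = torch.zeros(())
    for t in torch.randint(1, T + 1, (k,)):              # Monte Carlo over time steps
        x_prev, x_t = model.forward_noise_pair(x0, t)    # hypothetical: (x_{t-1}, x_t) from the forward process
        loss = loss - (w * model.log_prob_step(x_t, x_prev, t)).sum() * (T / k)
    loss.backward()
```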
sanokows.bsky.social
7/11 An alternative is the forward KL divergence (fKL), for which memory-efficient training via Monte Carlo integration over diffusion time steps is well established. However, the fKL requires samples from the target distribution!
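The standard decomposition behind this memory trick, assuming a fixed forward process and target samples x₀ ~ p(x₀) (notation is assumed, not taken from the paper):

```latex
% Dropping theta-independent terms, the path-space fKL is a sum over steps:
\mathrm{KL}\big(p(x_{0:T}) \,\|\, q_\theta(x_{0:T})\big)
  = \mathrm{const} - \mathbb{E}_{p(x_{0:T})}\Big[ \sum_{t=1}^{T} \log q_\theta(x_{t-1} \mid x_t) \Big]
% so the gradient can be estimated from a single uniformly drawn time step:
\nabla_\theta = -\, T \; \mathbb{E}_{p(x_{0:T})} \, \mathbb{E}_{t \sim \mathrm{Unif}\{1,\dots,T\}}
  \big[ \nabla_\theta \log q_\theta(x_{t-1} \mid x_t) \big]
```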
sanokows.bsky.social
6/11 💡 𝐒𝐨𝐥𝐮𝐭𝐢𝐨𝐧 1: We apply the policy gradient theorem to the rKL between joint distributions of the diffusion path. This enables the use of mini-batches over diffusion time steps by leveraging reinforcement learning methods, allowing for memory-efficient training.
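One way this could look in code: roll out the sampler without building an autograd graph, then re-evaluate a random mini-batch of steps with gradients. `sample_step` and `log_prob_step` are hypothetical stand-ins for whatever the actual implementation uses:

```python
import random
import torch

def rkl_policy_gradient_step(model, energy, x_T, T, k):
    """Sketch: score-function (REINFORCE) estimator for the path-space rKL.
    The rollout builds no autograd graph; only k << T randomly chosen steps
    are re-evaluated with gradients, so memory is O(k) instead of O(T)."""
    with torch.no_grad():                                  # graph-free rollout
        x, steps, log_q = x_T, [], torch.zeros(x_T.shape[0])
        for t in range(T, 0, -1):
            x_prev, step_log_q = model.sample_step(x, t)   # hypothetical API
            steps.append((x, x_prev, t))
            log_q, x = log_q + step_log_q, x_prev
        advantage = log_q + energy(x)                      # log q_theta(path) + E(x_0), per sample

    loss = torch.zeros(())
    for x_t, x_prev, t in random.sample(steps, k):         # mini-batch over time steps
        log_q_t = model.log_prob_step(x_t, x_prev, t)      # re-evaluated with gradients
        loss = loss + (advantage * log_q_t).mean() * (T / k)
    loss.backward()
```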
sanokows.bsky.social
5/11 A commonly used divergence is the reverse KL divergence (rKL), since its expectation is taken over samples from the generative model. However, naively optimizing it requires backpropagating through the entire generative process.
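To make the memory issue concrete, a minimal sketch of the naive objective, assuming a hypothetical `model.sample_step` that returns the next state and its log-probability (for discrete states the true gradient needs extra score-function terms; the point here is only the O(T) graph):

```python
import torch

def naive_rkl_loss(model, energy, x_T, T):
    """Illustrative only: the loss couples all T sampling steps in a single
    autograd graph, so activations of every step must stay in memory."""
    x, log_q = x_T, torch.zeros(x_T.shape[0])
    for t in range(T, 0, -1):
        x, step_log_q = model.sample_step(x, t)   # hypothetical: x_{t-1} and log q_theta(x_{t-1} | x_t)
        log_q = log_q + step_log_q                # every step extends the graph
    # reverse KL up to the unknown constant log Z
    return (log_q + energy(x)).mean()
```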
sanokows.bsky.social
4/11 🚨 𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞: However, existing diffusion samplers struggle with memory scaling, limiting the number of attainable diffusion steps due to backpropagation through the entire generative process.
sanokows.bsky.social
3/11 🔍 𝐃𝐢𝐟𝐟𝐮𝐬𝐢𝐨𝐧 𝐒𝐚𝐦𝐩𝐥𝐞𝐫𝐬 aim to sample from an unnormalized target distribution without access to samples from this distribution. They can be trained by minimizing a divergence between the joint distribution of the forward and reverse diffusion paths.
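One standard way to formalize this setup (my notation; the paper's may differ):

```latex
% Unnormalized target over discrete states x_0 (Z intractable):
p(x_0) = e^{-E(x_0)} / Z
% Fixed forward (noising) path and learned reverse (generative) path:
p(x_{0:T}) = p(x_0) \prod_{t=1}^{T} p(x_t \mid x_{t-1}), \qquad
q_\theta(x_{0:T}) = q(x_T) \prod_{t=1}^{T} q_\theta(x_{t-1} \mid x_t)
% Training minimizes a divergence between the joint path distributions:
\min_\theta \; D\big(q_\theta(x_{0:T}) \,\|\, p(x_{0:T})\big)
```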
sanokows.bsky.social
2/11 We've developed scalable and memory-efficient training methods for diffusion samplers, achieving state-of-the-art results in combinatorial optimization and unbiased sampling on the Ising model.
sanokows.bsky.social
1/11 Excited to present our latest work "Scalable Discrete Diffusion Samplers: Combinatorial Optimization and Statistical Physics" at #ICLR2025 on Fri 25 Apr at 10 am!
#CombinatorialOptimization #StatisticalPhysics #DiffusionModels
Reposted by Sebastian Sanokowski
amreibahr.bsky.social
A strong signal!!

Over 60 German universities & research institutions announced their withdrawal from X today, see below. #eXit

X, they say, is no longer compatible with their core values: "openness to the world, scientific integrity, transparency, and democratic discourse."

List of participants here:
Universities and research institutions leave platform X - Together for diversity, freedom, and science
nachrichten.idw-online.de
Reposted by Sebastian Sanokowski
gklambauer.bsky.social
The Machine Learning for Molecules workshop 2024 will take place THIS FRIDAY, December 6.

Tickets for in-person participation are SOLD OUT.

We still have a few free tickets for online/virtual participation!

Registration link here: moleculediscovery.github.io/workshop2024/
ML for molecules and materials in the era of LLMs [ML4Molecules]
ELLIS workshop, HYBRID, December 6, 2024
moleculediscovery.github.io
sanokows.bsky.social
A pizza steel or pizza stone at max heat (250 °C) should do the job.
sanokows.bsky.social
I think it is fine to keep the score, but if all concerns are addressed, reviewers should at least justify why they are nevertheless keeping it.
sanokows.bsky.social
Does this mean all papers at 6 or above should be accepted?
Reposted by Sebastian Sanokowski
iclr-conf.bsky.social
✍️ Reminder to reviewers: Check author responses to your reviews, and ask follow up questions if needed.

50% of papers have discussion - let’s bring this number up!
Reposted by Sebastian Sanokowski
marvin-schmitt.com
The ✨ML Internship Feed✨ is here!

@serge.belongie.com and I created this feed to compile internship opportunities in AI, ML, CV, NLP, and related areas.

The feed is rule-based. Please help us improve the rules by sharing feedback 🧡

🔗 Link to the feed: bsky.app/profile/did:...
sanokows.bsky.social
Love seeing the Bluesky community grow!
Just look at the stats: daily activity (likes, posts, and follows) is skyrocketing 📈, with recent peaks such as hitting 3 million daily likes!

Want to explore more about Bluesky’s incredible growth? Check out the live stats page here: bsky.jazco.dev/stats
Atlas - Engagement-Based Social Graph for Bluesky by Jaz (jaz.bsky.social)
bsky.jazco.dev
Reposted by Sebastian Sanokowski
jonkhler.argmin.xyz
Max Welling (@wellingmax.bsky.social) landed and needs followers! ;)
sanokows.bsky.social
I also would like to join :)
Reposted by Sebastian Sanokowski
willieneis.bsky.social
I'm making a list of AI for Science researchers on bluesky — let me know if I missed you / if you'd like to join!

go.bsky.app/AcP9Lix