Kirill Neklyudov
@k-neklyudov.bsky.social
290 followers · 100 following · 26 posts
Assistant Professor at Mila and UdeM https://necludov.github.io/
Reposted by Kirill Neklyudov
majhas.bsky.social
(1/n) 🚨 Train a model that solves DFT for any geometry with almost no training data
Introducing Self-Refining Training for Amortized DFT: a variational method that predicts ground-state solutions across geometries and generates its own training data!
📜 arxiv.org/abs/2506.01225
💻 github.com/majhas/self-...
k-neklyudov.bsky.social
David is disrupting everyone's NeurIPS grind by putting out amazing work, such a dirty trick!
davidpfau.com
New paper accepted to ICML! We present a novel policy optimization algorithm for continuous control with a simple closed form that generalizes DDPG, SAC, etc. to generic stochastic policies: Wasserstein Policy Optimization (WPO).
Reposted by Kirill Neklyudov
martinmbauer.bsky.social
Renormalisation is a central concept in modern physics. It describes how the dynamics of a system change at different scales. A great way to understand and visualise renormalisation is the Ising model.

(some math, but one can follow without it)

1/13
k-neklyudov.bsky.social
Every image was generated using SuperDiff for SDXL with two different prompts. Now, what are the prompts?🤔
k-neklyudov.bsky.social
SuperDiff goes super big!
- Spotlight at #ICLR2025!🥳
- Stable Diffusion XL pipeline on HuggingFace huggingface.co/superdiff/su... made by Viktor Ohanesian
- New results for molecules in the camera-ready arxiv.org/abs/2412.17762
Let's celebrate with a prompt guessing game in the thread👇
k-neklyudov.bsky.social
We've been sharing these projects throughout the year, and today they were accepted at #ICLR2025 (1-3) and #AISTATS2025 (4)
k-neklyudov.bsky.social
🧵(7/7) The main result that unlocks all these possibilities is our new Itô density estimator, an efficient way to estimate the density of the generated samples for an already-trained diffusion model (assuming that we know the score). It does not require any extra computations, just the forward pass!
k-neklyudov.bsky.social
🧵(6/7) We try out SuperDiff on generating images with #StableDiffusion by superimposing two prompts so that the image satisfies both. Ever wondered what a waffle cone would look like if it doubled as a volcano? Check out our paper! You’ll find marvellous new creatures in there such as an otter-duck
k-neklyudov.bsky.social
🧵(5/7) We test our model for unconditional de novo protein generation, where we superimpose two diffusion models: Proteus generates more designable and novel proteins, while FrameDiff generates more diverse proteins. SuperDiff combines them to generate designable and novel and diverse proteins!
k-neklyudov.bsky.social
🧵(4/7) Here’s a 2D example for intuition: given two already trained models, we combine their outputs (vector fields) based on estimated densities, allowing us to generate samples from all modes (e.g. for continual learning) or from the surface of equal densities (e.g. for concept interpolation).
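A toy 2D sketch of the idea in the post above (illustrative only, not the paper's code): two analytic Gaussians stand in for the two pre-trained models, their score fields are combined with responsibility weights computed from the log-densities, and Langevin dynamics samples from the combined "OR" density. In SuperDiff the log-densities are not available in closed form and are instead tracked during generation with the Itô density estimator; all names and constants below are made up for this illustration.

import numpy as np

rng = np.random.default_rng(0)

def gaussian_logp_and_score(x, mu, sigma):
    # Log-density and score (gradient of log-density) of an isotropic 2D Gaussian.
    d = x - mu
    logp = -0.5 * np.sum(d**2, axis=1) / sigma**2 - np.log(2 * np.pi * sigma**2)
    score = -d / sigma**2
    return logp, score

def combined_score(x, mus, sigmas, weights):
    # Score of the mixture ("OR"): responsibility-weighted sum of the individual
    # score fields, with responsibilities computed from the log-densities.
    logps, scores = zip(*(gaussian_logp_and_score(x, m, s) for m, s in zip(mus, sigmas)))
    logps = np.stack(logps) + np.log(np.asarray(weights))[:, None]
    resp = np.exp(logps - logps.max(axis=0, keepdims=True))
    resp /= resp.sum(axis=0, keepdims=True)
    return np.einsum('kn,knd->nd', resp, np.stack(scores))

# Two "pre-trained models": Gaussians sitting on different modes.
mus = [np.array([-2.0, 0.0]), np.array([2.0, 0.0])]
sigmas, weights = [0.5, 0.5], [0.5, 0.5]

# Unadjusted Langevin dynamics on the combined density populates both modes.
x = rng.normal(size=(1000, 2))
step = 1e-2
for _ in range(2000):
    x = x + step * combined_score(x, mus, sigmas, weights) \
        + np.sqrt(2 * step) * rng.normal(size=x.shape)

print("fraction of samples per mode:", np.mean(x[:, 0] < 0), np.mean(x[:, 0] >= 0))

For the concept-interpolation ("AND") case mentioned in the post, the same density estimates would instead be used to keep samples on the surface where the two densities match; the exact reweighting is given in the paper.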
k-neklyudov.bsky.social
🧵(2/7) We provide a new approach for estimating density without touching the divergence. This gives us the control to easily interpolate concepts (logical AND) or mix densities (logical OR), allowing us to create one-of-a-kind generations! ⚡🌀🤗
The Superposition of Diffusion Models Using the Itô Density Estimator
The Cambrian explosion of easily accessible pre-trained diffusion models suggests a demand for methods that combine multiple different pre-trained diffusion models without incurring the significant co...
arxiv.org
k-neklyudov.bsky.social
🧵(1/7) Have you ever wanted to combine different pre-trained diffusion models but don't have the time or data to train a new, bigger model?

🚀 Introducing SuperDiff 🦹‍♀️ – a principled method for efficiently combining multiple pre-trained diffusion models solely during inference!
Reposted by Kirill Neklyudov
aspuru.bsky.social
I am excited to share a perspective on the much-needed topic of #safety for #selfdrivinglaboratories. As the field progresses, understanding the challenges and gaps in building safe setups will be crucial for scaling up this technology!

doi.org/10.26434/che...
Steering towards safe self-driving laboratories
The past decade has witnessed remarkable advancements in autonomous systems, such as automobiles that are evolving from traditional vehicles to ones capable of navigating complex environments without ...
doi.org
Reposted by Kirill Neklyudov
kolesnikov.ch
With some delay, JetFormer's *prequel* paper is finally out on arXiv: a radically simple ViT-based normalizing flow (NF) model that achieves SOTA results in its class.

Jet is one of the key components of JetFormer and deserves a standalone report. Let's unpack: 🧵⬇️