Paul Hagemann
@yungbayesian.bsky.social
840 followers 430 following 26 posts
PhD student at TU Berlin, working on generative models and inverse problems. he/him
Reposted by Paul Hagemann
smnlssn.bsky.social
We are looking for someone to join the group as a postdoc to help us with scaling implicit transfer operators. If you are interested in this, please reach out to me through email. Include a CV with publications and a brief motivational statement. RTs appreciated!
yungbayesian.bsky.social
what is so misunderstood about (3)?
Reposted by Paul Hagemann
smnlssn.bsky.social
2025 CHAIR Structured Learning Workshop -- Apply to attend: ui.ungpd.com/Events/60bfc...
yungbayesian.bsky.social
it's also logical that greenpeace/foodwatch, for example, are closer to the greens, since they are the ones covering those topics. neutrality would be rather ridiculous there
yungbayesian.bsky.social
what is wrong with the study
yungbayesian.bsky.social
interesting point, but i would say (true) memorization is mathematically impossible. the underlying question is what generalization means when we are given finite training samples. it depends on the model and how long you train, see proceedings.neurips.cc/paper_files/... and arxiv.org/abs/2412.20292
Score-Based Generative Models Detect Manifolds
yungbayesian.bsky.social
yes i agree, but for diffusion such a constant velocity/score field does not even exist
yungbayesian.bsky.social
so in diffusion models the time schedule is such that we cannot have straight-path velocity fields (i.e., v_t(x_t) constant in time), as opposed to flow matching/rectified flows, where it is possible to obtain such paths (although it requires either OT couplings or rectifying...)
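roughly, per sample (notation mine, x_0 noise and x_1 data):

\[ \text{flow matching: } x_t = (1-t)\,x_0 + t\,x_1 \;\Rightarrow\; \dot{x}_t = x_1 - x_0 \quad (\text{constant in } t), \]
\[ \text{diffusion (VP): } x_t = \alpha_t\,x_1 + \sigma_t\,x_0 \;\Rightarrow\; \dot{x}_t = \dot{\alpha}_t\,x_1 + \dot{\sigma}_t\,x_0, \]

and since alpha_t, sigma_t come from the noise schedule and are nonlinear in t, the diffusion velocity genuinely changes along the path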
yungbayesian.bsky.social
Check out our github and give it a try yourself! Lots of potential in exploring stuff like this in other domains too (medical imaging, protein/bio stuff)!

github.com/annegnx/PnP-...

Also credit goes to my awesome collaborators Anne Gagneux, Ségolène Martin and Gabriele Steidl!
yungbayesian.bsky.social
Compared to diffusion methods, we can handle arbitrary latent distributions and also get (theoretically) straighter paths! We evaluate on multiple image datasets against flow matching, diffusion, and standard PnP-based restoration methods!
yungbayesian.bsky.social
Our algorithm proceeds as follows: we do a gradient step on the data fidelity, reproject onto the flow matching path and then denoise using our flow matching model. This is super cheap to do!
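A rough sketch of one iteration in pytorch (the names v_theta, A, A_adj, gamma and the exact schedule are mine, not from the repo):

import torch

def denoise(v_theta, x_t, t):
    # straight paths x_t = (1 - t) * x_0 + t * x_1 (x_0 latent, x_1 data):
    # the MMSE image is x_1 ≈ x_t + (1 - t) * v_theta(x_t, t)
    return x_t + (1 - t) * v_theta(x_t, t)

def pnp_flow_step(x, y, A, A_adj, v_theta, t, gamma):
    # 1) gradient step on the data fidelity 0.5 * ||A(x) - y||^2
    x = x - gamma * A_adj(A(x) - y)
    # 2) reproject onto the flow matching path at time t with fresh noise
    x_t = t * x + (1 - t) * torch.randn_like(x)
    # 3) denoise using the flow matching model
    return denoise(v_theta, x_t, t)

# a full restoration sweeps t from 0 to 1, e.g.
# for t in torch.linspace(0.0, 1.0, n_steps):
#     x = pnp_flow_step(x, y, A, A_adj, v_theta, t, gamma)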
yungbayesian.bsky.social
Therefore, we use the plug-and-play framework and rewrite our velocity field (which predicts a direction) to instead denoise the image x_t (i.e., predict the MMSE image x_1). Then we obtain a "time"-conditional PnP version, where we do the forward-backward PnP step at the current time and reproject.
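In formulas (notation mine, straight paths with latent x_0 at t=0 and data x_1 at t=1), solving the interpolation for x_1 turns the velocity into a denoiser:

\[ x_t = (1-t)\,x_0 + t\,x_1, \quad v_\theta(x_t, t) \approx \mathbb{E}[x_1 - x_0 \mid x_t] \;\Rightarrow\; D_\theta(x_t, t) := x_t + (1-t)\,v_\theta(x_t, t) \approx \mathbb{E}[x_1 \mid x_t]. \]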
yungbayesian.bsky.social
Our paper "PnP-Flow: Plug-and-Play Image Restoration with Flow Matching" has been accepted to ICLR 2025. Here a short explainer: We want to restore images (i.e., solve inverse problems) using pretrained velocity fields from flow matching. However, using change of variables is super costly.
yungbayesian.bsky.social
very nice paper, only had a quick glimpse, but another aspect is that the optimal score estimator explodes as t -> 0, which NNs ofc cannot replicate. how does this influence the results?
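for context, the explosion i mean (Tweedie's formula, assuming a VE-type noising x_t = x_0 + sigma_t * eps, notation mine):

\[ \nabla_x \log p_t(x) = \frac{\mathbb{E}[x_0 \mid x_t = x] - x}{\sigma_t^2}, \]

so for x off the data support the numerator stays bounded away from zero while sigma_t -> 0, and the optimal score blows up like 1/sigma_t^2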
yungbayesian.bsky.social
you might be onto sth haha
yungbayesian.bsky.social
i guess the adam paper's citation count is a pretty good indicator of how many ml papers are being published. looks like we have been saturating since 2021
yungbayesian.bsky.social
same experience here. i am not sure we need actual conference reviewing at all. why don't we all publish on openreview, and if i use/build upon/read your paper, i can write my opinion on it? without the accept/reject stamp.
yungbayesian.bsky.social
Here one can see FID results for different values of beta! Indeed it seems fruitful to restrict mass movement in Y for class-conditional cifar! We also apply this to other interesting inverse problems; the article can be found at arxiv.org/abs/2403.18705
yungbayesian.bsky.social
We want to approximate this distance with standard OT solvers, and therefore introduce a twisted cost function. With this at hand, we can now do OT flow matching for inverse problems! The factor beta controls how much mass leakage we allow in Y.
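A sketch of the kind of twisted cost I mean (the exact form is in the paper; notation mine):

\[ c_\beta\big((y, x), (y', x')\big) = \beta\,\|y - y'\|^2 + \|x - x'\|^2, \]

large beta makes transport in the Y-direction expensive, so the optimal plan leaks little mass in Y, and beta -> infinity recovers the restriction to couplings that keep Y fixed.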
yungbayesian.bsky.social
This object has already been of some interest, e.g., it pops up in the theory of gradient flows. It generalizes the KL property quite nicely, and unifies some ideas present in conditional generative modelling. For instance, its dual is the loss usually used in conditional Wasserstein GANs.
yungbayesian.bsky.social
Now does the same hold for the Wasserstein distance? Unfortunately not, since moving mass in the Y-direction can be more efficient for some measures. However, we can fix that if we restrict the suitable couplings to ones that do not move mass in the Y-direction.
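In symbols, roughly (notation mine):

\[ W_{2,Y}^2(\mathbb{P}, \mathbb{Q}) = \inf_{\gamma \in \Gamma_Y(\mathbb{P}, \mathbb{Q})} \int \|x - x'\|^2 \, \mathrm{d}\gamma, \qquad \Gamma_Y = \{\gamma \in \Gamma(\mathbb{P}, \mathbb{Q}) : y = y' \ \gamma\text{-a.s.}\}, \]

i.e., all mass stays on the diagonal in Y and only moves in the X-direction.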
yungbayesian.bsky.social
In a somewhat recent paper we introduced conditional Wasserstein distances. They generalize a property that basically explains why KL works well for generative modelling: the chain rule of KL!
It says that if one wants to approximate the posterior, one can also minimize the KL between the joints.
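The chain rule in question:

\[ \mathrm{KL}(\mathbb{P}_{Y,X} \,\|\, \mathbb{Q}_{Y,X}) = \mathrm{KL}(\mathbb{P}_Y \,\|\, \mathbb{Q}_Y) + \mathbb{E}_{y \sim \mathbb{P}_Y}\big[ \mathrm{KL}(\mathbb{P}_{X \mid Y=y} \,\|\, \mathbb{Q}_{X \mid Y=y}) \big], \]

so if the Y-marginals agree (as they do when both joints share the same observation distribution), minimizing the joint KL is exactly minimizing the expected posterior KL.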