Anirban Ray
@anirbanray.bsky.social
240 followers 350 following 50 posts
PhD Student working on bioimaging inverse problems with @florianjug.bsky.social at @humantechnopole.bsky.social + @tudresden.bsky.social | Prev: computer vision at Hitachi R&D, Tokyo. 🔗: https://rayanirban.github.io/ Likes 🐸🏋️🏔️🍓 and ✈️
Pinned
So exciting to see everyone here. A brief self-intro for others:

I am currently a PhD student with @florianjug.bsky.social. My work applies AI to bioimage analysis. I am particularly interested in using #GenAI to solve inverse problems with #DiffusionModels #FlowMatching #VAEs.
Reposted by Anirban Ray
Anirban Ray, Vera Galinova, Florian Jug
ResMatching: Noise-Resilient Computational Super-Resolution via Guided Conditional Flow Matching
https://arxiv.org/abs/2510.26601
Needless to say! And much appreciated 😊
super cool work 😎
Generative Point Tracking with Flow Matching

My latest project with Adam W. Harley, @csprofkgd.bsky.social, Derek Nowrouzezahrai, @chrisjpal.bsky.social.

Project page: mtesfaldet.net/genpt_projpa...
Paper: arxiv.org/abs/2510.20951
Code: github.com/tesfaldet/ge...
β€œWe may not win every battle, but we will win the war.” --- Such an appropriate characterization for posterior samplers. Each posterior sample fights its own battle against noise and degradation; some win, some lose. But the MMSE estimate wins the war πŸ˜‰.
#iykuk #ImageRestoration
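The "war" here is just the fact that the posterior mean minimizes expected squared error. A toy Gaussian denoising sketch (my own illustration, not from any paper above) makes the point concrete: individual posterior samples carry the full posterior variance, while averaging them approaches the MMSE estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 20_000, 32

x = rng.standard_normal(n_trials)            # ground truth, x ~ N(0, 1)
y = x + rng.standard_normal(n_trials)        # noisy observation, y = x + n

# For this conjugate toy problem the exact posterior is N(y/2, 1/2).
post_mean, post_std = y / 2.0, np.sqrt(0.5)
samples = post_mean[:, None] + post_std * rng.standard_normal((n_trials, n_samples))
mmse = samples.mean(axis=1)                  # Monte Carlo MMSE estimate

mse_single = np.mean((samples[:, 0] - x) ** 2)   # a single sample's "battle"
mse_mmse = np.mean((mmse - x) ** 2)              # the average "wins the war"
print(mse_single, mse_mmse)  # ~1.0 vs ~0.52: the posterior mean roughly halves the MSE
```

With 32 samples the residual gap above the true MMSE error of 0.5 is the Monte Carlo term 0.5/32.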
Reposted by Anirban Ray
Diffusion Transformers with Representation Autoencoders by Boyang Zheng et al. (arxiv.org/abs/2510.116...)

Unexpected result: swapping the SD-VAE for a pretrained visual encoder improves FID, challenging the idea that encoders' information compression makes them ill-suited for generative modeling!
πŸ‘πŸ‘πŸ‘
Happy to share that ShapeEmbed has been accepted at @neuripsconf.bsky.social 🎉 SE is a self-supervised framework to encode 2D contours from microscopy & natural images into a latent representation invariant to translation, scaling, rotation, reflection & point indexing
📄 arxiv.org/pdf/2507.01009 (1/N)
Reposted by Anirban Ray
We had an awesome #OMIBS2025

Thanks to all the lecturers, staff members, vendor faculty, sponsors, and participants for making this year's course amazing!
Reposted by Anirban Ray
Introducing Latent-X: our all-atom frontier AI model for protein binder design.

State-of-the-art lab performance, widely accessible via the Latent Labs Platform.

Free tier: platform.latentlabs.com
Blog: latentlabs.com/latent-x/
Technical report: tinyurl.com/latent-X
Reposted by Anirban Ray
New episode in this line of work from @giannisdaras.bsky.social et al. on training diffusion models with mostly bad/low-quality/corrupted data (+few high-quality samples). This time for proteins!

📄 Ambient Diffusion Omni: arxiv.org/pdf/2506.10038
📄 Ambient Proteins: www.biorxiv.org/content/10.1...
these are so beautiful 🤩.
Smita Krishnaswamy at #AIxBio25
Reposted by Anirban Ray
New paper on the generalization of Flow Matching: www.arxiv.org/abs/2506.03719

🤯 Why does flow matching generalize? Did you know that the flow matching target you're trying to learn *can only generate training points*?

w @quentinbertrand.bsky.social @annegnx.bsky.social @remiemonet.bsky.social 👇👇👇
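The starred claim can be checked numerically: with a finite training set and the standard Gaussian path x_t = (1-t)·x0 + t·x1, the exact marginal flow matching velocity field transports every noise sample onto a training point. A small 1D sketch of my own (not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
train = np.array([-2.0, 0.5, 3.0])   # a tiny "training set" of three points

def exact_velocity(x, t):
    # Gaussian path gives x_t ~ N(t * x1, (1-t)^2) for endpoint x1; the exact
    # marginal target is the posterior-weighted average of (x1 - x) / (1 - t).
    sigma = 1.0 - t
    logw = -0.5 * ((x[:, None] - t * train[None, :]) / sigma) ** 2
    w = np.exp(logw - logw.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return (w @ train - x) / sigma

x = rng.standard_normal(500)         # x0 ~ N(0, 1)
dt, t = 1e-3, 0.0
while t < 1.0 - dt:                  # Euler-integrate the exact ODE toward t = 1
    x = x + dt * exact_velocity(x, t)
    t += dt

# Every integrated sample lands (numerically) on one of the training points.
dist_to_train = np.abs(x[:, None] - train[None, :]).min(axis=1)
print(dist_to_train.max())
```

The maximal distance to the training set comes out tiny: the exact target memorizes, so generalization must come from *not* learning it perfectly.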
Congratulations 🎉 🤩
Really cool and clean idea 😇👍
George Stoica, Vivek Ramanujan, Xiang Fan, Ali Farhadi, Ranjay Krishna, Judy Hoffman
Contrastive Flow Matching
https://arxiv.org/abs/2506.05350
Reposted by Anirban Ray
🎤✨ Recording now available ✨🎤

youtu.be/jtDunWK8g1o?...
Reposted by Anirban Ray
Kullback–Leibler (KL) divergence is a cornerstone of machine learning.

We use it everywhere, from training classifiers and distilling knowledge from models, to learning generative models and aligning LLMs.

BUT, what does it mean, and how do we (actually) compute it?

Video: youtu.be/tXE23653JrU
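For the discrete case, the definition KL(p‖q) = Σ_x p(x) log(p(x)/q(x)) can be computed exactly, or estimated by Monte Carlo with samples from p. A minimal sketch with made-up toy distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.4, 0.4])

# Exact: sum_x p(x) * log(p(x) / q(x)); always >= 0, zero iff p == q.
kl_exact = np.sum(p * np.log(p / q))

# Monte Carlo: average log(p(x) / q(x)) over samples x ~ p.
xs = rng.choice(len(p), size=200_000, p=p)
kl_mc = np.mean(np.log(p[xs] / q[xs]))

print(kl_exact, kl_mc)  # the two estimates agree to about two decimal places
```

The Monte Carlo form is the one that matters in practice (e.g. for LLM alignment), where summing over all outcomes is infeasible but sampling from p is cheap.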
Very interesting. Congrats! 😀
Reposted by Anirban Ray
"Energy Matching: Unifying Flow Matching and
Energy-Based Models for Generative Modeling" by Michal Balcerak et al. arxiv.org/abs/2504.10612
I'm not sure EBMs will beat flow matching/diffusion models, but this article is very refreshing.
Reposted by Anirban Ray
New Feature in DeepInverse (deepinv.github.io):

🚀 Custom Diffusion Solver Design
DeepInverse now simplifies building custom diffusion solvers with:

✔ Standard SDEs (VP, VE, etc.)
✔ Pretrained denoisers for multiple noise levels
✔ ODE/SDE solvers (Euler, Heun)
✔ Noisy data fidelity terms for guidance
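Those ingredients compose into a sampler roughly like the following generic sketch (plain NumPy, *not* DeepInverse's actual API): a VE-style probability-flow ODE integrated with Euler steps, driven by a denoiser. Using the exact MMSE denoiser for a standard-normal prior lets the sketch be checked end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(x, sigma):
    # Exact MMSE denoiser when x0 ~ N(0, 1): E[x0 | x0 + sigma*eps = x].
    return x / (1.0 + sigma**2)

def euler_sampler(n, sigma_max=80.0, steps=1000):
    sigmas = np.linspace(sigma_max, 0.0, steps + 1)
    x = sigma_max * rng.standard_normal(n)        # start from pure noise
    for s, s_next in zip(sigmas[:-1], sigmas[1:]):
        drift = (x - denoiser(x, s)) / s          # probability-flow ODE drift
        # A noisy data-fidelity gradient could be added to `drift` here for guidance.
        x = x + (s_next - s) * drift              # Euler step (Heun would refine it)
    return x

samples = euler_sampler(100_000)
print(samples.std())  # ≈ 1 up to discretization error: the N(0, 1) prior is recovered
```

Swapping in a pretrained denoiser for the analytic one, or a stochastic step for the ODE step, gives the VE/VP samplers the feature list describes.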
Reposted by Anirban Ray
We will be kicking off the Neurogenomics Conference with two sessions on Neurodevelopment with talks from Wieland Huttner, @bassemh.bsky.social, @mareikealbert.bsky.social, @naelnadifkasri-lab.bsky.social, Yukiko Gotoh, @boyanbonev.bsky.social and others! #neurogen25
Monday is the big day! Very much looking forward to welcoming all participants and speakers of the Neurogenomics Conference to @humantechnopole.bsky.social in Milan. It promises to be an exciting few days filled with amazing science.
Really interesting paper on per-frequency control in diffusion models: arxiv.org/abs/2505.112.... They tackle the frequency degradation rate imbalance of the forward process by enforcing equal SNR across Fourier components. Finally, someone's taking @sedielem.bsky.social's blogs to heart 👀🔥
A Fourier Space Perspective on Diffusion Models
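The imbalance is easy to see on paper: adding white noise x_t = x_0 + σ_t·ε costs the same power at every frequency, while natural signals concentrate power at low frequencies, so high-frequency SNR collapses first. A quick sketch of my own, assuming a 1/f² spectrum:

```python
import numpy as np

n = 256
freqs = np.fft.rfftfreq(n, d=1.0)[1:]   # positive frequencies (skip DC)
signal_power = 1.0 / freqs**2            # assumed natural-signal-like 1/f^2 spectrum

for sigma in [0.1, 1.0, 10.0]:
    # White noise has flat power sigma^2 at every frequency, so the
    # per-frequency SNR inherits the decay of the signal spectrum.
    snr_db = 10.0 * np.log10(signal_power / sigma**2)
    print(f"sigma={sigma:>4}: SNR {snr_db[0]:6.1f} dB (lowest freq) "
          f"to {snr_db[-1]:6.1f} dB (highest freq)")
```

At every noise level the highest frequencies sit tens of dB below the lowest ones, which is exactly the imbalance an equal-SNR forward process removes.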
Reposted by Anirban Ray
Here's the third and final part of Slater Stich's "History of diffusion" interview series!

The other two interviewees' research played a pivotal role in the rise of diffusion models, whereas I just like to yap about them 😬. This was a wonderful opportunity to do exactly that!
History of Diffusion - Sander Dieleman