Jeremias Sulam
@jsulam.bsky.social
Assistant Prof. @ JHU 🇦🇷🇺🇸 Mathematics of Data & Biomedical Data Science jsulam.github.io
jsulam.bsky.social
Many more details on the derivation, intuition, implementation, and proofs can be found in our paper arxiv.org/pdf/2507.08956, with the amazing Zhenghan Fang, Sam Buchanan, and Mateo Diaz at @jhu.edu @hopkinsdsai.bsky.social
jsulam.bsky.social
2) In theory, we show that prox-diff sampling requires only O(d/sqrt(eps)) iterations to produce a distribution epsilon-away (in KL) from a target one (assuming an oracle prox), faster than the score-based version (resp. O(d/eps)). Technical assumptions differ across these papers though, so an exact comparison is hard (10/n)
jsulam.bsky.social
1) In practice, ProxDM can provide samples from the data distribution much faster than comparable score-based methods like DDPM (Ho et al., 2020), and is even competitive with their ODE alternatives (which are much faster than SDE-based samplers) (9/n)
jsulam.bsky.social
In this way, we generalize the Proximal Matching Loss from (Fang et al., 2024) to learn time-specific proximal operators for the densities at each discrete time step. The result is Proximal Diffusion Models: sampling using proximal operators instead of the score. This has two main advantages: (8/n)
jsulam.bsky.social
So, in order to implement a proximal/backward version of diffusion models, we need a (cheap!) way of solving this optimization problem, i.e., computing the proximal of the log-densities at every single time step. If only there were a way… oh, in come Learned Proximal Networks (7/n)
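To make the cost concrete, here is a minimal sketch of computing a prox the naive way, with an inner optimization loop at every call (toy ρ and hypothetical helper names, not the actual Learned Proximal Network architecture). This per-call solve is exactly what a learned proximal operator amortizes away:

```python
import numpy as np

def prox_numeric(rho_grad, y, lam=1.0, step=0.1, iters=200):
    """Approximate prox_{lam*rho}(y) = argmin_x rho(x) + ||x - y||^2 / (2*lam)
    by plain gradient descent on the prox objective.

    Note: a full inner optimization like this would be needed at *every*
    sampling step, which is what motivates learning the operator instead.
    """
    x = y.copy()
    for _ in range(iters):
        # Gradient of the prox objective: grad rho(x) + (x - y) / lam
        x = x - step * (rho_grad(x) + (x - y) / lam)
    return x

# Toy check: rho(x) = ||x||^2 / 2 has the closed-form prox y / (1 + lam).
y = np.array([3.0, -1.0])
x_hat = prox_numeric(lambda x: x, y, lam=1.0)
```

For this quadratic toy, the iterates converge geometrically to the known closed form y / 2, so correctness is easy to verify; for a generic (learned) log-density there is no such shortcut, hence the need for a cheap learned surrogate.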
jsulam.bsky.social
What are proximal operators? You can think of them as generalizations of projection operators. For a given (proximable) functional \rho(x), its proximal is defined by the solution of a simple optimization problem: (6/n)
jsulam.bsky.social
Backward discretization of differential equations has long been studied (cf. gradient descent vs. the proximal point method). Let’s go ahead and discretize the same SDE, but backwards! One problem: the update is defined implicitly... But it does admit a closed-form expression in terms of proximal operators! (5/n)
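For intuition on the deterministic analogue: the backward (implicit) Euler step for the gradient flow of a functional ρ evaluates the gradient at the *new* point, and that implicit update is exactly one prox evaluation:

```latex
x_{k+1} \;=\; x_k - h\,\nabla\rho(x_{k+1})
\quad\Longleftrightarrow\quad
x_{k+1} \;=\; \operatorname{prox}_{h\rho}(x_k)
\;=\; \arg\min_{x}\; \rho(x) + \tfrac{1}{2h}\,\|x - x_k\|^2 ,
```

as can be checked from the first-order optimality condition of the minimization on the right.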
jsulam.bsky.social
Crucially, this step relies on being able to compute the score function. Luckily, Minimum Mean Squared Error (MMSE) denoisers can do just that (at least asymptotically). But couldn’t there be a different discretization strategy for this SDE, you ask? Great question! Let's go *back*... (4/n)
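The link between MMSE denoising and the score is Tweedie's formula: for observations y = x + σz with z ∼ N(0, I), the optimal denoiser D* (the conditional mean) recovers the score of the smoothed density:

```latex
\nabla_y \log p_\sigma(y)
\;=\; \frac{\mathbb{E}[x \mid y] - y}{\sigma^2}
\;=\; \frac{D^\star(y) - y}{\sigma^2} ,
```

so a network trained with a plain L2 denoising loss approximates D*, and hence the score, in the large-data limit.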
jsulam.bsky.social
While elegant in continuous time, one needs to discretize the SDE to implement it in practice. In DMs, this has always been done through forward discretization (e.g., Euler-Maruyama), which combines a gradient step on the log-density of the data distribution at the discrete time t (the *score*) with Gaussian noise: (3/n)
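A minimal numpy sketch of one such forward (Euler-Maruyama) update of the reverse-time SDE. The drift f, diffusion g, and the exact Gaussian score used below are toy stand-ins (assumptions for illustration), not the paper's actual setup:

```python
import numpy as np

def em_reverse_step(x, t, dt, f, g, score, rng):
    """One Euler-Maruyama step of the reverse SDE, from time t to t - dt.

    Reverse drift: f(x, t) - g(t)^2 * score(x, t); plus fresh Gaussian
    noise scaled by g(t) * sqrt(dt). The minus sign on the drift term
    reflects integrating backward in time.
    """
    drift = f(x, t) - g(t) ** 2 * score(x, t)
    noise = g(t) * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x - drift * dt + noise

# Toy run: drift f(x,t) = -x/2, g(t) = 1, and the exact score of a
# standard Gaussian, score(x) = -x (a stand-in for a learned network).
# Samples should then stay approximately standard normal.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
for k in range(100):
    x = em_reverse_step(x, t=1.0 - k * 0.01, dt=0.01,
                        f=lambda x, t: -0.5 * x, g=lambda t: 1.0,
                        score=lambda x, t: -x, rng=rng)
```

In a real sampler the closed-form `score` above is replaced by a trained network evaluated at each discrete time, which is precisely the expensive object the thread proposes to swap out.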
jsulam.bsky.social
First, a (very) brief overview of diffusion models (DMs). DMs work by simulating a process that converts samples from a simple distribution (random noise) into samples from a target distribution of interest. This process is modeled mathematically with a stochastic differential equation (SDE) (2/n)
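In the standard formulation (Song et al., 2021), the forward (noising) SDE and its time reversal read:

```latex
\mathrm{d}x \;=\; f(x,t)\,\mathrm{d}t \;+\; g(t)\,\mathrm{d}w,
\qquad
\mathrm{d}x \;=\; \bigl[\, f(x,t) - g(t)^2\,\nabla_x \log p_t(x) \,\bigr]\mathrm{d}t \;+\; g(t)\,\mathrm{d}\bar{w},
```

where the reverse process is what one simulates to sample, and the unknown score ∇ log p_t is the quantity that must be estimated.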
jsulam.bsky.social
Check this out 📢 Score-based diffusion models are powerful—but slow to sample. Could there be something better? Drop the scores, use proximals instead!

We present Proximal Diffusion Models, providing a faster alternative both in theory* and practice. Here’s how it works 🧵(1/n)
jsulam.bsky.social
Awesome to see our cover in @cp-patterns.bsky.social finally out! And kudos go to Zhenzhen Wang for her massive work on biomarker discovery for breast cancer
www.cell.com/patterns/ful...
jsulam.bsky.social
Today, on #WomenInScience day, this paper on biomarker discovery for breast cancer, by my amazing student Zhenzhen, has just appeared in @cp-patterns.bsky.social
🎉 Her work shows how to construct fully interpretable biomarkers employing bi-level graph learning! @jhu.edu @hopkinsdsai.bsky.social
jsulam.bsky.social
Nice write-up by @JHUCompSci about @JacopoTeneggi's work. Punch line: interpretability of opaque ML models can be posed as hypothesis testing, for which online (efficient) testing procedures can be derived! www.cs.jhu.edu/news/wanna-b...
Wanna bet? Testing conceptual importance for more explainable AI
Johns Hopkins researchers used betting strategies to help clarify AI models’ decision-making processes.
jsulam.bsky.social
📣 What should *ML explanations* convey, and how does one report them precisely and rigorously? @neuripsconf.bsky.social
Come check out Jacopo Teneggi's work on Testing for Explanations via betting this afternoon! I *bet* you'll like it :) openreview.net/pdf?id=A0HSm... @hopkinsdsai.bsky.social
Reposted by Jeremias Sulam
baselesspursuit.bsky.social
NeurIPS paper: Excited for our work (with Iuliia Dmitrieva and Sergey Babkin) on

"realSEUDO for real-time calcium imaging analysis"
arxiv.org/abs/2405.15701

to be presented tomorrow (Thu 4:30-7:30 PM). realSEUDO is a fully online method for cell detection and activity estimation that runs at >30 Hz.