Abhishek Madan
@abhishekmadan.bsky.social
Reposted by Abhishek Madan
estheroate.bsky.social
Every lens leaves a blur signature—a hidden fingerprint in every photo.

In our new #TPAMI paper, we show how to learn it fast (5 mins of capture!) with Lens Blur Fields ✨

With it, we can tell apart ‘identical’ phones by their optics, deblur images, and render realistic blurs.
abhishekmadan.bsky.social
Code is now out! Try it for yourself here: github.com/abhimadan/st...
Reposted by Abhishek Madan
selenaling.bsky.social
Our #SGP25 work studies a simple and effective way to uniformly sample implicit surfaces by casting rays. (1/9)

“Uniform Sampling of Surfaces by Casting Rays” w/ @abhishekmadan.bsky.social @nmwsharp.bsky.social and Alec Jacobson
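For a feel of the basic recipe, here is a rough Python sketch (illustrative only, not the paper's implementation; the function name, the bounding-sphere line sampling, and the bisection settings are all my own assumptions): random lines through a bounding region are intersected with the implicit surface by detecting sign changes and bisecting, and every intersection is kept as a surface sample.

```python
import numpy as np

def sample_surface_by_rays(f, center, radius, n_rays=1000, steps=256, rng=None):
    """Rough sketch: collect samples of the implicit surface f(x) = 0 by
    intersecting it with random lines through a bounding sphere.
    Illustrative only -- not the paper's implementation."""
    rng = np.random.default_rng() if rng is None else rng
    samples = []
    for _ in range(n_rays):
        # Random line: uniform direction plus a random offset in the plane
        # perpendicular to it, so the lines sweep the bounding region.
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        u = np.cross(d, [1.0, 0.0, 0.0])
        if np.linalg.norm(u) < 1e-8:
            u = np.cross(d, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(d, u)
        a, b = rng.uniform(-radius, radius, size=2)
        origin = center + a * u + b * v - radius * d
        # March along the line and bisect wherever f changes sign.
        ts = np.linspace(0.0, 2.0 * radius, steps)
        vals = np.array([f(origin + t * d) for t in ts])
        for i in range(steps - 1):
            if vals[i] * vals[i + 1] < 0.0:
                lo, hi = ts[i], ts[i + 1]
                for _ in range(25):  # bisection refinement
                    mid = 0.5 * (lo + hi)
                    if f(origin + lo * d) * f(origin + mid * d) <= 0.0:
                        hi = mid
                    else:
                        lo = mid
                samples.append(origin + 0.5 * (lo + hi) * d)
    return np.array(samples)
```

For instance, with a unit-sphere implicit function `f = lambda p: np.linalg.norm(p) - 1.0`, calling `sample_surface_by_rays(f, np.zeros(3), 1.5)` scatters samples over the sphere.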
abhishekmadan.bsky.social
There’s no free lunch - adding randomization can increase the worst-case error. However, average errors are lower, and we’ve just scratched the surface of variance reduction techniques for this estimator, so the gap in max error can be closed even further.
abhishekmadan.bsky.social
Of course, this is more than just a cool mathematical trick - we show that structuring the computation this way actually reduces thread divergence in parallel execution, which leads to significant speedups on the GPU compared to deterministic Barnes-Hut.
abhishekmadan.bsky.social
However, this is equivalent to following every single path through the tree, and truncating paths that are less important to the sum. From here, we can make the algorithm stochastic (and unbiased!) by randomly selecting paths through the tree, and randomly truncating these paths.
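For intuition, here is one way a random-path estimator of this kind can be kept unbiased, written as a minimal Python sketch. The tree interface (`.children`, `.total_weight`, `.center_of_mass`, `.size`) and the continuation heuristic are my own assumptions for illustration; the paper's actual path and truncation probabilities are not necessarily these.

```python
import math
import random

def continuation_probability(node, query, theta=0.5):
    """Hypothetical heuristic: descend more often when the node is close
    relative to its size (mirrors the Barnes-Hut opening criterion)."""
    d = math.dist(query, node.center_of_mass)
    return min(1.0, node.size / (theta * d + 1e-12))

def stochastic_estimate(node, query, kernel):
    """One-sample random-path estimator of sum_i w_i * kernel(query, y_i).
    Sketch only: assumes leaves expose .points/.weights and internal nodes
    expose .children, .total_weight, .center_of_mass, .size."""
    if node.is_leaf():
        return sum(w * kernel(query, y) for y, w in zip(node.points, node.weights))
    # Far-field (monopole) approximation at this node.
    approx = node.total_weight * kernel(query, node.center_of_mass)
    q = continuation_probability(node, query)
    if random.random() >= q:
        return approx  # randomly truncate the path here
    # Otherwise descend into a single child, chosen proportionally to its
    # weight, and debias both the child selection and the truncation.
    c = random.choices(node.children,
                       weights=[ch.total_weight for ch in node.children])[0]
    p_c = c.total_weight / node.total_weight
    return approx + (stochastic_estimate(c, query, kernel) / p_c - approx) / q
```

Taking the expectation over the child choice and the truncation coin recovers the exact sum at every node, which is what keeps an estimator like this unbiased; the choice of probabilities then only affects variance.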
abhishekmadan.bsky.social
The original, deterministic algorithm works by building an octree over all the source points, and essentially performing a truncated traversal over this tree that covers all the sources.
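As a refresher, that deterministic traversal looks roughly like this (a bare-bones Python sketch; the octree interface is assumed, not taken from the paper's code):

```python
import math

def barnes_hut(node, query, kernel, theta=0.5):
    """Classic deterministic Barnes-Hut: approximate sum_i w_i * kernel(query, y_i)
    by replacing far-away subtrees with a single aggregate term.
    Sketch only -- assumes leaves expose .points/.weights and internal nodes
    expose .children, .total_weight, .center_of_mass, .size (cell width)."""
    if node.is_leaf():
        return sum(w * kernel(query, y) for y, w in zip(node.points, node.weights))
    d = math.dist(query, node.center_of_mass)
    if node.size / max(d, 1e-12) < theta:
        # Far from the query: use the node's aggregate (monopole) term.
        return node.total_weight * kernel(query, node.center_of_mass)
    # Too close: recurse into all children -- the "every path" view above.
    return sum(barnes_hut(c, query, kernel, theta) for c in node.children)
```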
abhishekmadan.bsky.social
Meanwhile, stochastic approaches have been very successful in graphics - can we effectively use them here? We developed a stochastic version of the classic Barnes-Hut approximation that runs blazingly fast on the GPU, taking just 4ms to compute the total effect of over 4 trillion interactions!
abhishekmadan.bsky.social
Fast summation of interactions between a set of sources and a query point is a key ingredient in algorithms across science, engineering, and computer graphics, such as N-body simulation, winding number computation, and boundary integral evaluation.
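Concretely, these are sums of the form f(q) = Σ_i w_i K(q, y_i), evaluated for each query point q over all sources y_i. The brute-force baseline below (a toy Python snippet, with a gravitational/Coulomb-style kernel standing in for the kernels mentioned in the post) is exactly the cost that fast summation methods try to avoid:

```python
import numpy as np

def naive_summation(queries, sources, weights):
    """Brute-force O(#queries x #sources) evaluation of
    f(q) = sum_i w_i / ||q - y_i||.  The 1/r kernel is just a stand-in;
    winding numbers and boundary integrals swap in different kernels."""
    out = np.zeros(len(queries))
    for j, q in enumerate(queries):
        r = np.linalg.norm(sources - q, axis=1)          # distances to all sources
        out[j] = np.sum(weights / np.maximum(r, 1e-12))  # guard against r == 0
    return out
```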
abhishekmadan.bsky.social
At SIGGRAPH 2025, we’ll be presenting the paper “Stochastic Barnes-Hut Approximation for Fast Summation on the GPU”. By injecting a bit of randomization into the classic yet deterministic Barnes-Hut approximation for fast kernel summation, we can achieve nearly 10x speedups on the GPU!