Nicholas Sharp
@nmwsharp.bsky.social
1.2K followers 93 following 26 posts
3D geometry researcher: graphics, vision, 3D ML, etc | Senior Research Scientist @NVIDIA | polyscope.run and geometry-central.net | running, hockey, baking, & cheesy sci fi | opinions my own | he/him personal website: nmwsharp.com
Reposted by Nicholas Sharp
abhishekmadan.bsky.social
Code is now out! Try it for yourself here: github.com/abhimadan/st...
nmwsharp.bsky.social
Also: this paper was recognized with a best paper award at SGP! Huge thanks to the organizers & congrats to the other awardees.

I was super lucky to work with Yousuf on this one, he's truly the mastermind behind it all!
nmwsharp.bsky.social
Logarithmic maps are incredibly useful for algorithms on surfaces--they're local 2D coordinates centered at a given source.

Yousuf Soliman and I found a better way to compute log maps w/ fast short-time heat flow in "The Affine Heat Method" presented @ SGP2025 today! 🧵
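For concreteness (this is the standard definition, not specific to the paper): the log map at a source point p assigns each nearby point q a pair of 2D coordinates in p's tangent plane,

\log_p(q) = r \, (\cos\theta,\ \sin\theta),

where r is the geodesic distance from p to q and \theta is the direction at p of the shortest geodesic that reaches q.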
nmwsharp.bsky.social
Actually, Yousuf did a quick experiment which is related (though a different formulation), using @markgillespie64.bsky.social et al's Discrete Torsion Connection markjgillespie.com/Research/Dis.... You get fun spiraling log maps! (image attached)
nmwsharp.bsky.social
Yeah! That diffused frame is "the most regular frame field in the sense of transport along geodesics from the source", so you get out a log map that is as-regular-as-possible, in the same sense.

You could definitely use another frame field, and you'd get "log maps" warped along that field.
nmwsharp.bsky.social
💻 Website: www.yousufsoliman.com/projects/the...
📗 Paper: www.yousufsoliman.com/projects/dow...
🔬 Code (C++ library): geometry-central.net/surface/algo...
🐍 Code (python bindings): github.com/nmwsharp/pot...

(point cloud code not available yet, let us know if you're interested!)
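For reference, a minimal sketch of calling the existing vector-heat log map from the potpourri3d Python bindings (the affine-heat variant may be exposed under a different class or method name, so check the repo README; the mesh path is a placeholder):

import potpourri3d as pp3d

V, F = pp3d.read_mesh("mesh.obj")          # vertices and faces
solver = pp3d.MeshVectorHeatSolver(V, F)   # heat-flow solver on the mesh
logmap = solver.compute_log_map(0)         # (V, 2) log map about vertex 0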
nmwsharp.bsky.social
We give two variants of the algorithm, and show use cases for many problems like averaging values on surfaces, decaling, and stroke-aligned parameterization. It even works on point clouds!
nmwsharp.bsky.social
Instead of the usual VxV scalar Laplacian, or a 2Vx2V vector Laplacian, we build a 3Vx3V homogeneous "affine" Laplacian! This Laplacian enables simpler and more accurate algorithms for computing the logarithmic map, since it captures rotation and translation at once.
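Not the paper's construction (the per-edge transports and weights are derived carefully there), but a sketch of what a 3Vx3V block "affine" Laplacian looks like structurally, assuming generic edge weights and a made-up rotation+translation transport per edge:

import numpy as np
import scipy.sparse as sp

def affine_block(angle, trans):
    # 3x3 homogeneous block: a 2x2 tangent-plane rotation plus a
    # translation, acting on homogeneous coordinates (x, y, 1)
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, trans[0]],
                     [s,  c, trans[1]],
                     [0., 0., 1.]])

def build_affine_laplacian(n, edges, weights, angles, translations):
    # Assemble a 3n x 3n block Laplacian: weighted identity blocks on the
    # diagonal, off-diagonal blocks that transport homogeneous tangent
    # coordinates across each edge. Block layout only; signs, weights, and
    # the exact transports follow the paper.
    rows, cols, vals = [], [], []
    def add_block(r, c, B):
        for a in range(3):
            for b in range(3):
                rows.append(3 * r + a); cols.append(3 * c + b); vals.append(B[a, b])
    I3 = np.eye(3)
    for (i, j), w, ang, t in zip(edges, weights, angles, translations):
        Tij = affine_block(ang, t)     # carries vertex j's frame into vertex i's
        Tji = np.linalg.inv(Tij)
        add_block(i, i, w * I3)
        add_block(j, j, w * I3)
        add_block(i, j, -w * Tij)
        add_block(j, i, -w * Tji)
    return sp.coo_matrix((vals, (rows, cols)), shape=(3 * n, 3 * n)).tocsr()

# tiny example: a single edge between vertices 0 and 1
L = build_affine_laplacian(2, [(0, 1)], [1.0], [0.1], [np.array([0.2, 0.0])])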
nmwsharp.bsky.social
Previously in "The Vector Heat Method", we computed log maps with short-time heat flow, via a vector-valued Laplace matrix rotating between adjacent vertex tangent spaces.

The big new idea is to rotate **and translate** vectors, by working in homogeneous coordinates.
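To make "rotate and translate via homogeneous coordinates" concrete, a tiny numpy illustration (all numbers made up):

import numpy as np

theta = 0.3                                   # rotation between two vertex frames
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([0.5, -0.2])                     # translation between tangent planes

# Vector heat method: a 2x2 rotation transports tangent *directions*.
v = np.array([1.0, 0.0])
print(R @ v)

# Affine idea: in homogeneous coordinates (x, y, 1), a single 3x3 matrix
# both rotates and translates, i.e. it transports *points* of the tangent plane.
A = np.block([[R, t[:, None]],
              [np.zeros((1, 2)), np.ones((1, 1))]])
p = np.array([1.0, 0.0, 1.0])
print(A @ p)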
Reposted by Nicholas Sharp
diwlevin.bsky.social
Holding SIGGRAPH Asia 2026 in Malaysia is a slap in the face to the rights of LGBTQ+ people. Especially now, when underrepresented people need as much support as we can possibly give them! Angry like me? Sign this open letter to let them know. 🏳️‍⚧️🏳️‍🌈

docs.google.com/document/d/1...
Open Letter to the SIGGRAPH Leadership
RE: Call for SIGGRAPH Asia to relocate from Malaysia and commit to a venue selection process that safeguards LGBTQ+ and other at-risk communities. To the SIGGRAPH Leadership: SIGGRAPH Executive Commit...
docs.google.com
nmwsharp.bsky.social
Sampling points on an implicit surface is surprisingly tricky, but we know how to cast rays against implicit surfaces! There's a classic relationship between line-intersections and surface-sampling, which turns out to be quite useful for geometry processing.
selenaling.bsky.social
Our #SGP25 work studies a simple and effective way to uniformly sample implicit surfaces by casting rays. (1/9)

“Uniform Sampling of Surfaces by Casting Rays” w/ @abhishekmadan.bsky.social @nmwsharp.bsky.social and Alec Jacobson
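Not the paper's estimator, but a self-contained numpy sketch of the classic integral-geometry relationship the post alludes to: cast uniformly random lines through a bounding sphere, find where they cross the level set, and keep the hit points; those hits land on the surface with density proportional to area.

import numpy as np

def sdf(p):
    # implicit surface: unit sphere (placeholder, swap in any implicit function)
    return np.linalg.norm(p, axis=-1) - 1.0

def random_lines(n, radius=2.0, rng=np.random.default_rng(0)):
    # uniform random lines hitting a bounding sphere: uniform direction on S^2
    # plus a uniform offset in the disk perpendicular to that direction
    d = rng.normal(size=(n, 3)); d /= np.linalg.norm(d, axis=1, keepdims=True)
    a = np.where(np.abs(d[:, :1]) < 0.9, [[1.0, 0.0, 0.0]], [[0.0, 1.0, 0.0]])
    u = np.cross(d, a); u /= np.linalg.norm(u, axis=1, keepdims=True)
    v = np.cross(d, u)
    r = radius * np.sqrt(rng.uniform(size=(n, 1)))
    phi = rng.uniform(0.0, 2.0 * np.pi, size=(n, 1))
    o = r * (np.cos(phi) * u + np.sin(phi) * v)   # closest point of each line to the origin
    return o, d

def sample_surface(n_lines=2000, radius=2.0, steps=256):
    o, d = random_lines(n_lines, radius)
    ts = np.linspace(-radius, radius, steps)
    pts = []
    for oi, di in zip(o, d):
        vals = sdf(oi + ts[:, None] * di)
        for k in np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]:
            # refine each sign change by linear interpolation
            # (a real implementation would bisect or use sphere tracing)
            t = ts[k] - vals[k] * (ts[k + 1] - ts[k]) / (vals[k + 1] - vals[k])
            pts.append(oi + t * di)
    return np.array(pts)

points = sample_surface()
print(points.shape, np.abs(np.linalg.norm(points, axis=1) - 1.0).max())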
nmwsharp.bsky.social
Thank you! There's definitely a low-frequency bias when stochastic preconditioning is enabled, but we only use it for the first ~half of training, then train as-usual. The hypothesis is that the bias in the 1st half helps escape bad minima, then we fit high-freqs in the 2nd half. Coarse to fine!
Reposted by Nicholas Sharp
shumash.bsky.social
My child’s doll and tools I captured as 3D Gaussians, turned digital with collisions and dynamics. We are getting closer to bridging the gap between the world we can touch and digital 3D. Experience the bleeding edge at #NVIDIA Kaolin hands-on lab, #CVPR2025! Wed, 8-noon. tinyurl.com/nv-kaolin-cv...
nmwsharp.bsky.social
Check out Abhishek's research!

I was honestly surprised by this result: classic Barnes-Hut already builds a good spatial hierarchy for approximating kernel summations, but you can do even better by adding some stochastic sampling, for significant speedups on the GPU @ matching average error.
abhishekmadan.bsky.social
At SIGGRAPH 2025, we’ll be presenting the paper “Stochastic Barnes-Hut Approximation for Fast Summation on the GPU”. By injecting a bit of randomization into the classic yet deterministic Barnes-Hut approximation for fast kernel summation, we can achieve nearly 10x speedups on the GPU!
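To get a feel for the general flavor (a toy 1D sketch of "randomize the traversal and reweight", not the paper's actual estimator or its GPU mapping):

import numpy as np

rng = np.random.default_rng(0)

class Node:
    # binary tree over 1D points: each node stores total weight, weighted
    # centroid, and spatial extent, like a Barnes-Hut cell
    def __init__(self, idx, pts, w):
        self.idx = idx
        self.w = w[idx].sum()
        self.com = (pts[idx] * w[idx]).sum() / self.w
        self.size = pts[idx].max() - pts[idx].min()
        self.kids = []
        if len(idx) > 8:
            order = idx[np.argsort(pts[idx])]
            half = len(order) // 2
            self.kids = [Node(order[:half], pts, w), Node(order[half:], pts, w)]

def kernel(x, y):
    return 1.0 / (np.abs(x - y) + 1e-3)

def bh_sum(node, x, pts, w, theta=0.5, stochastic=False):
    d = abs(x - node.com)
    if node.size < theta * d:      # far enough: use the cell's aggregate
        return node.w * kernel(x, node.com)
    if not node.kids:              # leaf: exact sum
        return (w[node.idx] * kernel(x, pts[node.idx])).sum()
    if stochastic:                 # randomized: descend into ONE child chosen
        p = np.array([k.w for k in node.kids]); p /= p.sum()
        j = rng.choice(len(node.kids), p=p)   # proportionally to its weight,
        return bh_sum(node.kids[j], x, pts, w, theta, True) / p[j]  # then reweight
    return sum(bh_sum(k, x, pts, w, theta) for k in node.kids)

pts = rng.uniform(0.0, 1.0, 4000)
w = rng.uniform(0.5, 1.5, 4000)
root = Node(np.arange(len(pts)), pts, w)
x = 0.37
exact = (w * kernel(x, pts)).sum()
det = bh_sum(root, x, pts, w)
sto = np.mean([bh_sum(root, x, pts, w, stochastic=True) for _ in range(32)])
print(exact, det, sto)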
nmwsharp.bsky.social
Ah yes absolutely. That's a great example, we totally should have cited it!

When we looked around we found mannnnnny "coarse-to-fine"-like schemes appearing in the context of particular problems or architectures. As you say, what most excited us here is having a simple + general option.
nmwsharp.bsky.social
Thank you for the kind words :) The technique is very much in-the-vein of lots of related ideas in ML, graphics, and elsewhere, but hopefully directly studying it & sharing is useful to the community!
nmwsharp.bsky.social
We did not try it w/ the Gaussians in this project (we really focused on the "query an Eulerian field" setting, which is not quite how Gaussian rendering works).

There are some very cool projects doing related things in that setting:
- ubc-vision.github.io/3dgs-mcmc/
- diglib.eg.org/items/b8ace7...
nmwsharp.bsky.social
Tagging @selenaling.bsky.social and @merlin.ninja, who are both on here it turns out! 😁
nmwsharp.bsky.social
website: research.nvidia.com/labs/toronto...
arxiv: arxiv.org/abs/2505.20473
code: github.com/iszihan/stoc...

Kudos go to Selena Ling who is the lead author of this work, during her internship with us at NVIDIA. Reach out to Selena or myself if you have any questions!
Stochastic Preconditioning for Neural Field Optimization
research.nvidia.com
nmwsharp.bsky.social
Closing thought: In geometry, half our algorithms are "just" Laplacians/smoothness/heat flow under the hood. In ML, half our techniques are "just" adding noise in the right place. Unsurprisingly, these two tools work great together in this project. I think there's a lot more to do in this vein!
nmwsharp.bsky.social
Geometric initialization is a commonly-used technique to accelerate SDF field fitting, yet it often results in disastrous artifacts for non-object-centric scenes. Stochastic preconditioning also helps to avoid floaters, both with and without geometric initialization.
nmwsharp.bsky.social
Neural field training can be sensitive to changes in hyperparameters. Stochastic preconditioning makes training more robust to hyperparameter choices, shown here in a histogram of PSNRs from fitting preconditioned and non-preconditioned fields across a range of hyperparameters.
nmwsharp.bsky.social
We argue that this is a quick and easy form of coarse-to-fine optimization, applicable to nearly any objective or field representation. It matches or outperforms custom-designed policies and staged coarse-to-fine schemes.
nmwsharp.bsky.social
Surprisingly, optimizing this blurred field to fit the objective greatly improves convergence, and in the end we anneal 𝛼 to 0 and are left with an ordinary un-blurred field.
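Concretely, my reading of the setup (standard notation, assuming an isotropic Gaussian blur of width \alpha, which matches the "blurred field" and "anneal \alpha" language here): the blurred field is the Gaussian-convolved field, and Gaussian convolution is exactly an expectation over jittered query points,

\tilde{f}_\alpha(x) \;=\; (f * G_\alpha)(x) \;=\; \mathbb{E}_{\varepsilon \sim \mathcal{N}(0,\ \alpha^2 I)}\big[\, f(x + \varepsilon) \,\big],

so a Monte Carlo estimate of the blur is just "evaluate the field at a noisy location".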
nmwsharp.bsky.social
And implementing our method requires changing just a few lines of code!
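For intuition, a hedged sketch of what those few lines might look like in a typical PyTorch fitting loop (names, constants, and the anneal schedule are mine, not the released code): jitter the query coordinates with Gaussian noise whose scale is annealed to zero over roughly the first half of training.

import torch

def precondition_queries(x, step, num_steps, alpha_max=0.05, anneal_frac=0.5):
    # stochastic preconditioning, roughly: perturb query points with Gaussian
    # noise of scale alpha, annealed to zero over the first part of training
    t = min(step / (anneal_frac * num_steps), 1.0)
    alpha = alpha_max * (1.0 - t)              # linear anneal (illustrative)
    return x + alpha * torch.randn_like(x)

# inside the usual loop (field = any coordinate network, helpers hypothetical):
#   x = sample_query_points(batch)
#   x = precondition_queries(x, step, num_steps)   # <- the added lines
#   loss = objective(field(x), targets)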