Tzu-Mao Li
@tzumaoli.bsky.social
680 followers 280 following 47 posts
https://cseweb.ucsd.edu/~tzli/ computer graphics, programming systems, machine learning, differentiable graphics
tzumaoli.bsky.social
Amazing paper. Can't believe I haven't read it. Thanks a lot for sharing! (And yes I agree that the Nyquist limit is likely too loose and we can do so much better!)
tzumaoli.bsky.social
(This was inspired by the debate over whether the Pixel camera's 100x zoom is hallucination or not, but it seems to apply to everything in the "AI" world right now.)
tzumaoli.bsky.social
My thoughts got stuck at the point above, so I decided to make this a bluesky post. ; )
tzumaoli.bsky.social
To move forward, either we go back to the "old ways" (I actually prefer this), or we need better visualizations that indicate which parts have higher uncertainty and make that clear to the audience. Probably a lot of people are working on this, but uncertainty quantification is a hard problem.
tzumaoli.bsky.social
We used to have a clear relation between sampling rates and reconstruction error. Now that has gone away and anything goes. In some sense, we have traded predictability for reconstruction error (perhaps because predictability is harder to benchmark). It almost feels like a form of no-free-lunch.
tzumaoli.bsky.social
Anything outside of the Nyquist-Shannon limit is "hallucination". It used to have cooler names: aliasing/noise. I think the key difference between the two is that humans are good at catching aliasing/noise (even anti-aliasing), but not good at noticing hallucination. So "hallucination" feels like cheating.
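A minimal NumPy sketch of that limit (my own illustration, not from the thread): a sine sampled below its Nyquist rate is indistinguishable from a lower-frequency alias, so any detail reconstructed past the limit is a guess.

    import numpy as np

    f_signal = 9.0                       # signal frequency (Hz)
    f_sample = 10.0                      # sampling rate (Hz), below Nyquist (18 Hz)
    t = np.arange(0.0, 2.0, 1.0 / f_sample)

    # The 9 Hz sine and its 1 Hz alias agree at every sample point,
    # so no reconstruction can tell them apart from the samples alone.
    original = np.sin(2 * np.pi * f_signal * t)
    alias = np.sin(2 * np.pi * (f_signal - f_sample) * t)
    print(np.allclose(original, alias))  # True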
Reposted by Tzu-Mao Li
wjakob.bsky.social
My lab will be recruiting at all levels. PhD students, postdocs, and a research engineering position (worldwide for PhD/postdoc, EU candidates only for the engineering position). If you're at SIGGRAPH, I'd love to talk to you if you are interested in any of these.
tzumaoli.bsky.social
I've started to ask these questions in talks just so I can collect answers I can use myself in the future. ; )
tzumaoli.bsky.social
Most interesting thread I've read recently! I assume you can use this to build a BSP-tree-like data structure to render a lot of quadratic Bezier strokes?
tzumaoli.bsky.social
Also see the official SIGGRAPH blog post (blog.siggraph.org/2025/06/sigg...) for the best paper announcement and other cool SIGGRAPH papers.
tzumaoli.bsky.social
In the paper (suikasibyl.github.io/vvmc), we show a lot more: MSE analysis, debiasing, application to actual renderers and differentiable renderers, and more.

In short, there is really no reason not to use RCV in your renderer and differentiable renderer. It reduces variance at negligible cost!
tzumaoli.bsky.social
Instead of classical "difference" CVs, which are sensitive to scale, we use "ratio" CVs, which are scale-invariant, i.e., the estimator has zero variance if your RCV is a constant scaling of the integrand. This makes RCVs far more robust than CVs in rendering, since the rendering equation is multiplicative.
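A toy 1D version of the ratio idea (a sketch based on this description, not the paper's actual estimator): dividing by the control variate instead of subtracting it lets any constant scaling cancel exactly.

    import numpy as np

    rng = np.random.default_rng(0)
    g = lambda x: x * x                   # control variate, known integral G = 1/3
    G = 1.0 / 3.0

    def ratio_cv(f, n=1000):
        # I_hat = G * mean(f) / mean(g), with uniform samples on [0, 1].
        # Consistent but biased; debiasing is discussed in the paper.
        x = rng.random(n)
        return G * np.mean(f(x)) / np.mean(g(x))

    # Zero variance whenever f = c * g, for any scale c, since f/g is constant.
    print(ratio_cv(lambda x: x * x))      # exactly 1/3
    print(ratio_cv(lambda x: 2 * x * x))  # 2/3, with no added variance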
tzumaoli.bsky.social
A potential remedy is control variates. You can use a different CV for each component of a vector-valued integral, and a perfect CV gives zero variance. However, CVs are sensitive to the scale of your integrands: the zero-variance property is not preserved even if you simply scale the integrand by 2.
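In a toy 1D setting (my illustration, with a made-up integrand), the scale sensitivity looks like this: a difference CV that matches the integrand is exact, but scaling the integrand by 2 brings the noise back.

    import numpy as np

    rng = np.random.default_rng(0)
    g = lambda x: x * x                   # control variate, known integral G = 1/3
    G = 1.0 / 3.0

    def difference_cv(f, n=1000, c=1.0):
        # I_hat = mean(f - c * g) + c * G, with uniform samples on [0, 1].
        x = rng.random(n)
        return np.mean(f(x) - c * g(x)) + c * G

    print(difference_cv(lambda x: x * x))      # exactly 1/3 (zero variance)
    print(difference_cv(lambda x: 2 * x * x))  # only approximately 2/3 (noisy)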
tzumaoli.bsky.social
While the rendering equation is often presented as a scalar integral, it usually has multiple channels (e.g., RGB). However, importance sampling can only reduce the variance of one channel, or of a weighted average of them. It gets worse in differentiable rendering, since we need to compute many derivatives.
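A tiny illustration of the mismatch (hypothetical example, not from the paper): importance sampling proportional to one channel drives its variance to zero but leaves another channel of the same integral noisy.

    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: np.stack([x * x, x])     # a two-channel integrand on [0, 1]

    # Importance sample proportional to channel 0: p(x) = 3 x^2.
    x = rng.random(100000) ** (1.0 / 3.0)  # inverse-CDF sampling of p
    p = 3 * x * x
    est = f(x) / p                         # per-sample estimates, shape (2, N)
    print(est.std(axis=1))                 # channel 0: ~0, channel 1: clearly nonzero
    print(est.mean(axis=1))                # both near the true values 1/3 and 1/2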
tzumaoli.bsky.social
Rendering nerds! Check out our latest work "Vector-Valued Monte Carlo Integration Using Ratio Control Variates", which just received the Best Paper Award at SIGGRAPH 2025. This paper presents a method that reduces the variance of a wide range of rendering and diff. rendering tasks at negligible cost.
tzumaoli.bsky.social
Ah, I see what you mean. Indeed, dart-throwing-like methods are not progressive, as you'll need to reduce the Poisson disk radius as you add more samples... I mistakenly thought you could trivially add more samples by throwing more darts.
Sampling is hard!
tzumaoli.bsky.social
Kalantari and Sen's "Efficient Computation of Blue Noise Point Sets through Importance Sampling" (people.engr.tamu.edu/nimak/Data/E...) may satisfy what you want: it's a fancy version of dart throwing with a smarter sampling scheme that supports variable density. It also seems pretty easy to implement.
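For contrast, the naive baseline they improve on, plain dart throwing, fits in a few lines (a sketch of the baseline, not Kalantari and Sen's method):

    import numpy as np

    def dart_throwing(radius, n_attempts=20000, seed=0):
        # Naive Poisson-disk sampling in the unit square: accept a candidate
        # only if it keeps at least `radius` distance to all accepted points.
        rng = np.random.default_rng(seed)
        points = []
        for _ in range(n_attempts):
            c = rng.random(2)
            if all(np.linalg.norm(c - p) >= radius for p in points):
                points.append(c)
        return np.array(points)

    pts = dart_throwing(radius=0.05)
    print(len(pts))  # blue-noise point set; the fixed radius is exactly why
                     # these methods are not progressive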
tzumaoli.bsky.social
Ironically I only understand the appeal of 3DGS after this post. I was too rendering driven!
Reposted by Tzu-Mao Li
tzumaoli.bsky.social
This might be my favorite read this year so far! The idea of seeding patches on the surface and walking towards the reference points geodesically makes so much sense. I also like how the paper justifies the approximations and discusses their consequences. Would love to implement this myself at some point.
tzumaoli.bsky.social
Wow. It seems that this would introduce small discontinuities in image space when the Russian roulette decisions diverge, but I guess these are so imperceptible after many bounces that it wouldn't matter. Mind blowing.
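For context, a generic Russian roulette loop (a standard path-tracing sketch with stand-in shading numbers, not the paper's technique); the single random draw below is the decision that can diverge between neighboring pixels:

    import numpy as np

    def roulette_path(rng, q=0.8, min_bounces=3, max_bounces=64):
        # Russian roulette: after a few bounces, continue a path with
        # probability q and divide the throughput by q, which keeps the
        # estimator unbiased in expectation.
        radiance, throughput = 0.0, 1.0
        for bounce in range(max_bounces):
            radiance += throughput * 0.1   # stand-in for shading at this vertex
            if bounce >= min_bounces:
                if rng.random() >= q:      # the random termination decision
                    break
                throughput /= q            # reweight survivors
            throughput *= 0.7              # stand-in for BRDF * cosine / pdf
        return radiance

    rng = np.random.default_rng(0)
    print(np.mean([roulette_path(rng) for _ in range(10000)]))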
tzumaoli.bsky.social
Good point. It's highly related and is one change of variables away. We talked about the relation to Lipman's paper arxiv.org/abs/2106.07689, which solves the viscosity eikonal equation. We found that in practice our parameterization leads to much better numerical behavior.
Phase Transitions, Distance Functions, and Implicit Neural Representations (arxiv.org)
tzumaoli.bsky.social
We use a relation between the screened Poisson equation and distance (the same relation Keenan used in his classic "Geodesics in Heat" paper) to design a loss function. Since our loss excludes jaggy non-SDF solutions, it also makes optimization significantly more stable. Check out the paper!
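The relation in its simplest setting (a 1D finite-difference sketch of the general screened-Poisson-to-distance idea, not the paper's loss): a screened Poisson solve with a point source decays like exp(-d/sqrt(t)), so -sqrt(t) * log(u) recovers the distance d.

    import numpy as np

    # 1D grid on [-1, 1] with a point source at the center.
    n, t = 401, 0.01
    h = 2.0 / (n - 1)
    x = np.linspace(-1.0, 1.0, n)

    # Solve the screened Poisson equation (I - t * Laplacian) u = delta.
    lap = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
           + np.diag(np.ones(n - 1), 1)) / (h * h)
    rhs = np.zeros(n)
    rhs[n // 2] = 1.0 / h                # discrete delta at x = 0
    u = np.linalg.solve(np.eye(n) - t * lap, rhs)

    # u ~ exp(-|x| / sqrt(t)), so -sqrt(t) * log(u) is distance up to a constant.
    d = -np.sqrt(t) * np.log(u)
    d -= d[n // 2]                       # drop the constant offset
    print(np.abs(d - np.abs(x))[np.abs(x) < 0.5].max())  # small: distance recovered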