Wenzel Jakob
@wjakob.bsky.social
1.4K followers 220 following 45 posts
Associate professor leading EPFL's Realistic Graphics Lab. My research involves inverse graphics, material appearance modeling and physically based rendering
Posts Media Videos Starter Packs
wjakob.bsky.social
You can dump the PTX intermediate representation (see the documentation), but figuring out the calling convention of the kernel for your own use will be tricky. The system is not designed to be used in this way.
wjakob.bsky.social
Just write your solver in plain CUDA. How hard can it be? 😛
wjakob.bsky.social
This approach is restricted to software that only needs the CUDA driver. If your project uses cuSolver, you will likely need a dependency on the CUDA Python packages that ship this library on PyPI (similar to PyTorch et al.)
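For reference, a minimal sketch of finding the bundled library after a `pip install nvidia-cusolver-cu12` (the package name is NVIDIA's public PyPI wheel; the exact on-disk layout is an assumption worth verifying against the wheel you install):

```python
# Sketch: locate the libcusolver shared library shipped by the
# `nvidia-cusolver-cu12` PyPI wheel. Assumed layout: the wheel installs a
# `nvidia/cusolver/lib/` directory inside site-packages.
import importlib.util
import pathlib

def find_cusolver_libdir():
    """Return the wheel's lib/ directory, or None if the wheel is not installed."""
    try:
        spec = importlib.util.find_spec("nvidia.cusolver")
    except ModuleNotFoundError:
        return None
    if spec is None or not spec.submodule_search_locations:
        return None
    return pathlib.Path(list(spec.submodule_search_locations)[0]) / "lib"

libdir = find_cusolver_libdir()
print(libdir)  # None when the wheel is absent
```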
Reposted by Wenzel Jakob
mworchel.bsky.social
Differentiable rendering has transformed graphics and 3D vision, but what about other fields? Our SIGGRAPH 2025 paper introduces misuka, the first fully-differentiable path tracer for acoustics.
wjakob.bsky.social
Wasn’t that something.. Flocke (German for “flake”) says hi!
wjakob.bsky.social
If you are fitting a NeRF and you want a surface out at the end, you should probably be using the idea in this paper.
wjakob.bsky.social
Given the focus on performance, I would suggest switching from pybind11 to nanobind. Should just be a tiny change 😇
wjakob.bsky.social
For the paper and data, please check out the project page: mokumeproject.github.io
wjakob.bsky.social
The Mokume project is a massive collaborative effort led by Maria Larsson at the University of Tokyo (w/Hodaka Yamaguchi, Ehsan Pajouheshgar, I-Chao Shen, Kenji Tojo, Chia-Ming Chang, Lars Hansson, Olof Broman, Takashi Ijiri, Ariel Shamir, and Takeo Igarashi).
wjakob.bsky.social
To reconstruct their interior, we: 1️⃣ Localize annual rings on cube faces 2️⃣ Optimize a procedural growth field that assigns an age to every 3D point (when that wood formed during the tree's life) 3️⃣ Synthesize detailed textures via a procedural model or a neural cellular automaton
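As a toy illustration of step 2️⃣ (a sketch under strong simplifying assumptions, not the paper's actual parameterization): if the growth field were purely radial with constant ring width, the pith position and ring width could be recovered from ring positions observed on one face by least squares:

```python
# Toy growth-field fit: age(p) = ||p - pith|| / ring_width, so annual rings
# (integer ages) should match the ring positions seen on a cube face.
# We recover the pith offset and ring width from 1D ring intersections.
import numpy as np

rng = np.random.default_rng(0)
true_pith, true_width = -2.0, 0.35           # hidden parameters (arbitrary units)
ring_ids = np.arange(5, 15)                  # which annual rings cross this face
obs = true_pith + ring_ids * true_width      # observed ring positions + noise
obs += rng.normal(0, 1e-3, obs.shape)

# Linear model: obs ≈ pith + ring_id * width  →  ordinary least squares
A = np.stack([np.ones(len(ring_ids)), ring_ids.astype(float)], axis=1)
(pith, width), *_ = np.linalg.lstsq(A, obs, rcond=None)

def age(p):
    """Age (in years) assigned to a 3D point under this toy radial field."""
    return np.linalg.norm(p - np.array([pith, 0.0, 0.0])) / width

print(round(pith, 3), round(width, 3))
```

The real pipeline optimizes a far richer procedural field over the whole volume; this only shows the inverse-fitting idea in its simplest form.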
wjakob.bsky.social
The Mokume dataset consists of 190 physical wood cubes from 17 species, each documented with:

- High-res photos of all 6 faces
- Annual ring annotations
- Photos of slanted cuts for validation
- CT scans revealing the true interior structure (for future use)
wjakob.bsky.social
Wood textures are everywhere in graphics, but realistic texturing requires knowing what wood looks like throughout its volume, not just on the surfaces.
The patterns depend on tree species, growth conditions, and where and how the wood was cut from the tree.
wjakob.bsky.social
How can one reconstruct the complete 3D interior of a wood block using only photos of its surfaces? 🪵
At SIGGRAPH'25 (Thursday!), Maria Larsson will present *Mokume*: a dataset of 190 diverse wood samples and a pipeline that solves this inverse texturing challenge. 🧵👇
wjakob.bsky.social
My lab will be recruiting at all levels: PhD students, postdocs, and a research engineering position (worldwide for PhD/postdoc, EU candidates only for the engineering position). If you're at SIGGRAPH and interested in any of these, I'd love to talk to you.
wjakob.bsky.social
The reason is that the "volume" in this paper is always rendered as a surface (without alpha blending) during the optimization. Think of it as an end-to-end optimization that accounts for meshing, without actually meshing the object at each step.
wjakob.bsky.social
To get a triangle mesh out at the end, you will still need a meshing step (e.g. marching cubes). The key difference is that NeRF requires additional optimization and heuristics to create a volume that will ultimately produce a high quality surface. With this new method, it just works.
Reposted by Wenzel Jakob
yiningkarlli.bsky.social
Wow, this is such a cool paper! Basically with a surprisingly small modification to existing NeRF optimization, this paper gets a really good direct surface reconstruction technique that doesn't require all of the usual mess that meshing a NeRF requires (raymarching, marching cubes, etc).
wjakob.bsky.social
Methods like NeRF and Gaussian Splats model the world as radioactive fog, rendered using alpha blending. This produces great results.. but are volumes the only way to get there?🤔 Our new SIGGRAPH'25 paper directly reconstructs surfaces without heuristics or regularizers.
wjakob.bsky.social
Check out our paper for more details at rgl.epfl.ch/publications...
wjakob.bsky.social
This is a joint work with @ziyizh.bsky.social, @njroussel.bsky.social, Thomas Müller, @tizian.bsky.social, @merlin.ninja, and Fabrice Rousselle.
wjakob.bsky.social
Our method minimizes the expected loss, whereas NeRF optimizes the loss of the expectation.
It generalizes deterministic surface evolution methods (e.g., NvDiffrec) and elegantly handles discontinuities. Future applications include physically based rendering and tomography.
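A tiny numeric illustration of that distinction for one ray (illustrative only, not the paper's code):

```python
# "Expected loss" vs "loss of the expectation" for a single ray.
# NeRF blends candidate colors first and penalizes the blend; the surface
# formulation penalizes each candidate and then averages.
import numpy as np

ref = np.array([1.0, 0.0, 0.0])                  # reference pixel color (red)
colors = np.array([[1.0, 0.0, 0.0],              # candidate surface colors
                   [0.0, 0.0, 1.0]])             # along the ray
w = np.array([0.5, 0.5])                         # sampling weights (sum to 1)

loss = lambda c: np.sum((c - ref) ** 2)

loss_of_expectation = float(loss(w @ colors))                     # blend, then compare
expected_loss = float(w @ np.array([loss(c) for c in colors]))    # compare, then average

# By Jensen's inequality, the expected loss upper-bounds the loss of the
# expectation for a convex loss. Crucially, it is minimized by making a single
# candidate (a surface) match the reference, not by mixing semi-transparent ones.
print(loss_of_expectation, expected_loss)
```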
wjakob.bsky.social
Instead of blending colors along rays and supervising the resulting images, we project the training images into the scene to supervise the radiance field.
Each point along a ray is treated as a surface candidate, independently optimized to match that ray's reference color.
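A schematic of that supervision setup, assuming a toy pinhole camera (not the actual implementation):

```python
# Project training pixels into the scene: every sample along a camera ray
# inherits that ray's reference pixel color as its own supervision target.
import numpy as np

rng = np.random.default_rng(0)
H, W, S = 2, 2, 4                     # image size and samples per ray
image = rng.random((H, W, 3))         # reference pixel colors

origin = np.zeros(3)                  # toy pinhole camera at the origin
i, j = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
dirs = np.stack([j - W / 2 + 0.5, i - H / 2 + 0.5,
                 np.ones_like(i, dtype=float)], axis=-1)
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

t = np.linspace(0.5, 3.0, S)          # candidate depths along each ray
points = origin + t[None, None, :, None] * dirs[:, :, None, :]   # (H, W, S, 3)
targets = np.broadcast_to(image[:, :, None, :], points.shape)    # (H, W, S, 3)

# Each (point, target) pair is an independent surface candidate: the radiance
# field is optimized so that a surface at `point` would emit `target`.
print(points.shape, targets.shape)
```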
wjakob.bsky.social
By changing just a few lines of code, we can adapt existing NeRF frameworks for surface reconstruction.
This patch shows the necessary changes to Instant NGP, which was originally designed for volume reconstruction.
wjakob.bsky.social
It also adds support for function freezing so that the process of rendering a scene can be captured and cheaply replayed. See mitsuba.readthedocs.io/en/latest/re... for details.