Matthias Niessner
@niessner.bsky.social
2.4K followers 64 following 79 posts
Professor for Visual Computing & Artificial Intelligence @TU Munich Co-Founder @synthesiaIO Co-Founder @SpAItialAI https://niessnerlab.org/publications.html
niessner.bsky.social
Fantastic retreat this weekend by our research groups!

Internal reviews, idea brainstorming, paper reading, and much more! Of course also many social activities -- the highlight being our kayaking trip -- lots of fun :)
niessner.bsky.social
All six of our submissions were accepted to #NeurIPS2025 🎉🥳

Awesome works on Gaussian Splatting Primitives, Lighting Estimation, Texturing, and much more GenAI :)

Great work by Peter Kocsis, Yujin Chen, Zhening Huang, Jiapeng Tang, Nicolas von Lützow, Jonathan Schmidt 🔥🔥🔥
niessner.bsky.social
We generate multiple videos along short, pre-defined trajectories that explore the scene in depth. Our scene memory conditions each video on the most relevant prior views while avoiding collisions.

Great work by Manuel Schneider & @LukasHollein
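
A minimal sketch of how such a scene memory could pick conditioning views, assuming relevance is scored by pose proximity and viewing-direction similarity (the names and scoring are illustrative, not the paper's actual implementation):

import numpy as np

def view_relevance(pose_a, pose_b, w_rot=1.0, w_trans=1.0):
    """Heuristic similarity between two 4x4 camera-to-world poses:
    small translation distance and similar viewing direction -> high score."""
    t_dist = np.linalg.norm(pose_a[:3, 3] - pose_b[:3, 3])
    # viewing directions (negative z-axis of the camera frame, a common convention)
    d_a, d_b = -pose_a[:3, 2], -pose_b[:3, 2]
    angle = np.arccos(np.clip(d_a @ d_b, -1.0, 1.0))
    return -(w_trans * t_dist + w_rot * angle)

def select_memory_views(memory_poses, target_pose, k=4):
    """Pick the k most relevant previously generated views to condition
    the next video chunk on (the 'scene memory' idea)."""
    scores = [view_relevance(p, target_pose) for p in memory_poses]
    order = np.argsort(scores)[::-1][:k]
    return list(order)  # indices into the stored frames / poses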
niessner.bsky.social
Can we use video diffusion to generate 3D scenes?

𝐖𝐨𝐫𝐥𝐝𝐄𝐱𝐩𝐥𝐨𝐫𝐞𝐫 (#SIGGRAPHAsia25) creates fully-navigable scenes via autoregressive video generation.

Text input -> 3DGS scene output & interactive rendering!

🌍http://mschneider456.github.io/world-explorer/
📽️https://youtu.be/N6NJsNyiv6I
niessner.bsky.social
We further propose a color-based densification and progressive training scheme for improved quality and faster convergence.

shivangi-aneja.github.io/projects/sca...
youtu.be/VyWkgsGdbkk

Great work by Shivangi Aneja, Sebastian Weiss, Irene Baeza Rojo, Prashanth Chandran, Gaspard Zoss, Derek Bradley
ScaffoldAvatar: High-Fidelity Gaussian Avatars with Patch Expressions
shivangi-aneja.github.io
niessner.bsky.social
We operate on patch-based local expression features and increase the representation capacity by dynamically synthesizing 3D Gaussians with tiny scaffold MLPs conditioned on localized expressions.
ScaffoldAvatar: High-Fidelity Gaussian Avatars with Patch Expressions
shivangi-aneja.github.io
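
A rough sketch of the scaffold-MLP idea described above, assuming each tiny MLP maps a localized expression feature to the parameters of a handful of Gaussians around a scaffold anchor (dimensions and parameterization are illustrative guesses, not the paper's code):

import torch
import torch.nn as nn

class ScaffoldPatchMLP(nn.Module):
    """Tiny per-patch MLP: localized expression feature in -> parameters of
    N Gaussians out (offsets, scales, rotations, colors, opacities)."""
    def __init__(self, expr_dim=32, n_gaussians=16, hidden=64):
        super().__init__()
        self.n = n_gaussians
        # 3 offset + 3 scale + 4 rotation + 3 color + 1 opacity = 14 per Gaussian
        self.net = nn.Sequential(
            nn.Linear(expr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_gaussians * 14),
        )

    def forward(self, expr_feat, anchor_xyz):
        # expr_feat: (B, expr_dim), anchor_xyz: (B, 3) scaffold anchor position
        out = self.net(expr_feat).view(-1, self.n, 14)
        xyz = anchor_xyz[:, None, :] + out[..., 0:3]                  # offsets around anchor
        scale = torch.exp(out[..., 3:6])                              # positive scales
        rot = torch.nn.functional.normalize(out[..., 6:10], dim=-1)   # unit quaternion
        color = torch.sigmoid(out[..., 10:13])
        opacity = torch.sigmoid(out[..., 13:14])
        return xyz, scale, rot, color, opacity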
niessner.bsky.social
ScaffoldAvatar: High-Fidelity Gaussian Avatars with Patch Expressions (#SIGGRAPH)

We reconstruct ultra-high-fidelity, photorealistic 3D avatars capable of generating realistic, high-quality animations, including freckles and other fine facial details.

shivangi-aneja.github.io/projects/sca...
niessner.bsky.social
TL;DR: RGB-D scan as input -> compact CAD scene representation that also includes materials, creating a digital copy that captures the look of the real environment.

Great work by Zhening (Jack) Huang in collaboration with Xiaoyang Wu, Fangcheng Zhong, Hengshuang Zhao, Joan Lasenby
niessner.bsky.social
📢 LiteReality: Graphics-Ready 3D Scene Reconstruction from RGB-D Scans🏠✨

-> converts RGB-D scans into compact, realistic, and interactive 3D scenes — featuring high-quality meshes, PBR materials, and articulated objects.

📷https://youtu.be/ecK9m3LXg2c
🌍https://litereality.github.io
niessner.bsky.social
Seven papers accepted at #ICCV2025!

Exciting topics: lots of generative AI using transformers, diffusion, 3DGS, etc., focusing on image synthesis, geometry generation, avatars, and much more - check it out!

So proud of everyone involved - let's go🚀🚀🚀

niessnerlab.org/publications...
niessner.bsky.social
Want to work on cutting-edge #AI?

We have several fully-funded 𝐏𝐡𝐃 & 𝐏𝐨𝐬𝐭𝐃𝐨𝐜 𝐨𝐩𝐞𝐧𝐢𝐧𝐠𝐬 in our Visual Computing & AI Lab in Munich!

Apply here: application.vc.in.tum.de

Topics have a strong focus on Generative AI, 3DGS, NeRFs, Diffusion, LLMs, etc.
niessner.bsky.social
#CVPR submissions per year have significantly increased.

Now over 11k / year, with an expectation to grow even further. This comes with a lot of implications for how to handle reviews, presentations, etc. Kudos to the organizers for all the effort that went into it.
niessner.bsky.social
Super excited to be in Nashville for #CVPR2025!

Looking forward to catching up with everyone -- feel free to reach out if you want to chat!

Everything is a Honky Tonk :)
niessner.bsky.social
In addition, we introduce a new OLAT dataset of human heads that features high-resolution and high frame rate multi-view recordings of diverse subjects in a calibrated light stage setting.

Great work by Jonathan Schmidt and Simon Giebenhain.
niessner.bsky.social
📢BecomingLit: Relightable Gaussian Avatars with Hybrid Neural Shading📢

We propose a hybrid neural shading scheme for creating intrinsically decomposed 3DGS head avatars that allow real-time relighting and animation.

🌍https://lnkd.in/evNt8bV2
📷https://lnkd.in/ekB5QeEK
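
A minimal sketch of what a hybrid neural shading scheme can look like, assuming an analytic Lambertian diffuse term plus a small MLP for the remaining view/light-dependent appearance (an illustration of the general idea, not the paper's implementation):

import torch
import torch.nn as nn

class HybridShader(nn.Module):
    """Illustrative hybrid shading: analytic Lambertian diffuse term plus a small
    neural network that predicts a view/light-dependent residual (e.g. specular)."""
    def __init__(self, feat_dim=16, hidden=32):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Linear(feat_dim + 6, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, albedo, normal, light_dir, view_dir, feat, light_rgb):
        # albedo, normal, light_dir, view_dir: (N, 3); feat: (N, feat_dim); light_rgb: (3,)
        ndotl = (normal * light_dir).sum(-1, keepdim=True).clamp(min=0.0)
        diffuse = albedo * ndotl * light_rgb          # physically motivated part
        x = torch.cat([feat, light_dir, view_dir], dim=-1)
        residual = torch.relu(self.residual(x))       # learned part (specular etc.)
        return diffuse + residual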
niessner.bsky.social
📢Code Release of Pixel3DMM 📢
Looking for a robust and accurate face tracker?

We handle challenging in-the-wild settings, such as extreme lighting conditions, fast movements, and occlusions.

👨‍💻https://lnkd.in/e3dX23WV
🌍https://lnkd.in/eQ3Zpn3J

Pixel3DMM can be run on videos and single images.
niessner.bsky.social
📢PBR-SR: Mesh PBR Texture Super Resolution from 2D Image Priors📢

We propose a new optimization to up-sample textures of 3D assets (albedo, roughness, metallic, and normal maps) by leveraging 2D super-resolution models.

📝http://arxiv.org/abs/2506.02846
📽️https://youtu.be/eaM5S3Mt1RM
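
To make the optimization idea concrete, a minimal sketch that fits a high-resolution texture against a precomputed output of a 2D super-resolution model while staying consistent with the low-resolution input (the loss weights and the sr_prior input are assumptions, not the paper's setup):

import torch
import torch.nn.functional as F

def upsample_texture(lr_tex, sr_prior, scale=4, iters=500, lr=1e-2):
    """Illustrative optimization: recover a high-res texture that (a) downsamples
    back to the input low-res map and (b) stays close to a 2D super-resolution
    prior. lr_tex: (1, C, H, W); sr_prior: (1, C, H*scale, W*scale)."""
    hr_tex = F.interpolate(lr_tex, scale_factor=scale, mode="bilinear",
                           align_corners=False).clone().requires_grad_(True)
    opt = torch.optim.Adam([hr_tex], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        down = F.interpolate(hr_tex, scale_factor=1.0 / scale, mode="bilinear",
                             align_corners=False)
        loss = F.l1_loss(down, lr_tex) + 0.1 * F.l1_loss(hr_tex, sr_prior)
        loss.backward()
        opt.step()
    return hr_tex.detach().clamp(0.0, 1.0)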
niessner.bsky.social
🚀🚀🚀Announcing our $13M funding round to build the next generation of AI: 𝐒𝐩𝐚𝐭𝐢𝐚𝐥 𝐅𝐨𝐮𝐧𝐝𝐚𝐭𝐢𝐨𝐧 𝐌𝐨𝐝𝐞𝐥𝐬 that can generate entire 3D environments anchored in space & time. 🚀🚀🚀

Interested? Join our world-class team:
🌍 spaitial.ai

youtu.be/FiGX82RUz8U
SpAItial AI: Building Spatial Foundation Models
niessner.bsky.social
of 3D hair strand reconstructions from real-world scans of 400 different people, featuring complicated hairstyles, such as ponytails and buns.

🌍 seva100.github.io/GeomHair
📷 youtu.be/h9vqTiFo9As

Great work by Rachmadio L., Artem Sevastopolsky, Egor Zakharov, Vanessa Sklyarova
GeomHair: Reconstruction of Hair Strands from Colorless 3D Scans
seva100.github.io
niessner.bsky.social
We enhance the reconstruction with a diffusion prior trained on synthetic hair data and adapted to each scan using a tailored text prompt, allowing us to recover both simple and complex hairstyles without relying on color input.

We also introduce Strands400, the largest publicly available dataset
GeomHair: Reconstruction of Hair Strands from Colorless 3D Scans
seva100.github.io
niessner.bsky.social
📢GeomHair: Reconstruction of Hair Strands from Colorless 3D Scans📢

We reconstruct hair strands from colorless 3D scans by extracting orientation cues directly from the mesh surface geometry (via local characteristic lines) and from shaded renderings via a neural 2D line detector.
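
A small sketch of how two such orientation cues could be fused into a single strand direction field, assuming both are given as per-point unit vectors (the weighting and sign alignment are illustrative, not the paper's method):

import numpy as np

def fuse_orientations(geom_dirs, image_dirs, w_geom=0.5):
    """Illustrative fusion of two per-point orientation cues into one strand
    direction field: cues from surface geometry (local characteristic lines)
    and cues lifted from a 2D line detector on shaded renderings.
    Both inputs: (N, 3) unit vectors; orientations are sign-ambiguous, so we
    align signs before averaging."""
    sign = np.sign((geom_dirs * image_dirs).sum(-1, keepdims=True))
    sign[sign == 0] = 1.0
    fused = w_geom * geom_dirs + (1.0 - w_geom) * sign * image_dirs
    norm = np.linalg.norm(fused, axis=-1, keepdims=True)
    return fused / np.clip(norm, 1e-8, None)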
niessner.bsky.social
Works for both single images and videos!

We also introduce a new 3D face reconstruction benchmark that evaluates both neutral and posed face geometry.

🌍 simongiebenhain.github.io/pixel3dmm
📷 youtu.be/BwxwEXJwUDc

Great work by Simon Giebenhain, Tobias Kirschstein, Martin Rünz, Lourdes Agapito
Pixel3DMM: Versatile Screen-Space Priors for Single-Image 3D Face Reconstruction
simongiebenhain.github.io
niessner.bsky.social
📢Pixel3DMM: Versatile Screen-Space Priors for Single-Image 3D Face Reconstruction📢

-> highly accurate face reconstruction by training powerful ViTs for surface normal & UV-coordinate estimation.

These cues from our 2D foundation model constrain the 3DMM parameters, achieving great accuracy.
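
A minimal sketch of such a fitting loop, assuming a differentiable morphable_model that renders per-pixel UVs and normals for the current parameters (that interface is a hypothetical placeholder, not the released code):

import torch

def fit_3dmm(pred_uv, pred_normals, morphable_model, iters=200, lr=5e-3):
    """Illustrative fitting: ViT-predicted per-pixel UV coordinates and surface
    normals act as screen-space constraints on 3DMM parameters.
    morphable_model(params) is assumed to differentiably render per-pixel UV and
    normal maps for the current identity/expression/pose parameters."""
    params = torch.zeros(morphable_model.num_params, requires_grad=True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        uv, normals = morphable_model(params)          # rendered screen-space maps
        loss = torch.nn.functional.l1_loss(uv, pred_uv) \
             + torch.nn.functional.l1_loss(normals, pred_normals)
        loss.backward()
        opt.step()
    return params.detach()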
niessner.bsky.social
We show how it can be used to reconstruct photorealistic scenes, and introduce a corresponding differentiable CUDA rasterizer that enables real-time rendering.

LinPrim achieves comparable image quality with fewer primitives, adding a practical polyhedral option.

🎥 youtu.be/NRRlmFZj5KQ
LinPrim: Linear Primitives for Differentiable Volumetric Rendering
nicolasvonluetzow.github.io
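
For context, a minimal sketch of the core compositing step any such differentiable rasterizer performs: sorting primitive contributions along a ray and alpha-blending them front to back (a generic illustration, not LinPrim's CUDA implementation):

import torch

def composite_primitives(colors, opacities, depths):
    """Front-to-back alpha compositing of primitive contributions along one ray,
    the standard operation a differentiable rasterizer for volumetric primitives
    (Gaussians, or polyhedra as in LinPrim) has to implement.
    colors: (N, 3), opacities: (N,), depths: (N,) per-primitive values for one ray."""
    order = torch.argsort(depths)                      # sort primitives front to back
    c, a = colors[order], opacities[order].clamp(0.0, 0.999)
    transmittance = torch.cumprod(
        torch.cat([torch.ones_like(a[:1]), 1.0 - a[:-1]]), dim=0)
    weights = transmittance * a                        # contribution of each primitive
    return (weights[:, None] * c).sum(dim=0)           # final ray color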