Florian Hahlbohm
@fhahlbohm.bsky.social
76 followers 77 following 19 posts
PhD student, Computer Graphics Lab, TU Braunschweig. Radiance Fields and Point Rendering. Webpage: https://fhahlbohm.github.io/
Reposted by Florian Hahlbohm
wimmerthomas.bsky.social
Had the honor to present "Gaussians-to-Life" at #3DV2025 yesterday. In this work, we used video diffusion models to animate arbitrary 3D Gaussian Splatting scenes.
This work was a great collaboration with @moechsle.bsky.social, @miniemeyer.bsky.social, and Federico Tombari.

🧵⬇️
Reposted by Florian Hahlbohm
andreead-a.bsky.social
Had a great experience presenting our work on 3D scene reconstruction from a single image with @visionbernie.bsky.social at #3DV2025 🇸🇬

andreeadogaru.github.io/Gen3DSR

Reach out if you're interested in discussing our research or exploring international postdoc opportunities @fau.de
Reposted by Florian Hahlbohm
m-schuetz.bsky.social
Here is our gaussian splat editor: github.com/m-schuetz/Sp...

Eventually I want it to be able to take scans of ugly streets and beautify them, like a Photoshop for Gaussians. :)
fhahlbohm.bsky.social
"DaD's a pretty good keypoint detector, probably the best." Nice one 😂
fhahlbohm.bsky.social
We also provide a multitude of data loaders and camera model implementations, as well as various utilities for optimization and visualization.
fhahlbohm.bsky.social
Each method has a Trainer, Model, and Renderer class that extend the respective base classes. Many of the current methods also define custom CUDA extensions or a designated loss class.
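For illustration, a rough sketch of that structure. The base classes below are stand-ins invented for this example; NeRFICG's real class names, module paths, and signatures may differ.

```python
# Illustrative sketch only: these base classes are stand-ins, not
# NeRFICG's actual API, whose names and signatures may differ.
import torch


class BaseModel(torch.nn.Module):
    """Stand-in for the framework's model base class."""


class BaseRenderer:
    """Stand-in for the framework's renderer base class."""
    def render(self, model, camera):
        raise NotImplementedError


class BaseTrainer:
    """Stand-in for the framework's trainer base class."""
    def training_step(self, model, batch):
        raise NotImplementedError


class MyModel(BaseModel):
    """Holds the optimizable scene representation, e.g. Gaussian parameters."""
    def __init__(self, num_points=10_000):
        super().__init__()
        self.positions = torch.nn.Parameter(torch.randn(num_points, 3))


class MyRenderer(BaseRenderer):
    """Maps model + camera to an image; heavy methods typically back this
    with a custom CUDA extension."""
    def render(self, model, camera):
        return torch.zeros(3, 64, 64)  # placeholder for a real rasterizer


class MyTrainer(BaseTrainer):
    """Owns the optimization loop and, optionally, a designated loss class."""
    def training_step(self, model, batch):
        rendered = MyRenderer().render(model, batch["camera"])
        return torch.nn.functional.mse_loss(rendered, batch["target"])
```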
fhahlbohm.bsky.social
NeRFICG is a research-focused framework for developing novel view synthesis methods. Shoutout to my colleague Moritz Kappel, who is responsible for most of the underlying architecture! We think NeRFICG is a decent starting point for any PyTorch-based graphics/vision project.
fhahlbohm.bsky.social
Further discussion and ideas for where things could be improved can be found in our paper and the "Additional Notes" in our GitHub repository.

The remainder of this thread is about our framework NeRFICG: github.com/nerficg-proj...
NeRFICG
A flexible PyTorch framework for simple and efficient implementation of neural radiance fields and rasterization-based view synthesis methods.
github.com
fhahlbohm.bsky.social
Bluesky did not let me have two videos in the same post. So here's the OIT video.
fhahlbohm.bsky.social
One interesting observation: OIT (enabled by setting "Blend Mode" to 3 in the config) seems to help background reconstruction and overall densification. The videos show the first 3K training iterations using hybrid vs. order-independent transparency.
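For intuition, one classic order-independent formulation: a weighted-average color plus an order-independent coverage term. This is just a sketch and not necessarily the exact OIT kernel behind "Blend Mode" 3.

```python
# Classic weighted-average OIT, shown for intuition; the exact kernel
# behind HTGS's "Blend Mode" 3 may differ.
import numpy as np

def oit_blend(colors, alphas, background):
    """colors: (N, 3), alphas: (N,). The result ignores fragment order."""
    colors = np.asarray(colors, dtype=np.float64)
    alphas = np.asarray(alphas, dtype=np.float64)
    coverage = 1.0 - np.prod(1.0 - alphas)  # order-independent total opacity
    avg = (alphas[:, None] * colors).sum(axis=0) / max(alphas.sum(), 1e-8)
    return coverage * avg + (1.0 - coverage) * np.asarray(background)
```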
fhahlbohm.bsky.social
Note that the GUI has a non-negligible impact on frame rate, as it is Python-based, so you won't see maximum performance even after turning off v-sync. It is also Linux-only, but my colleague Timon Scholz recently started working on a C++ version that also supports Windows.
fhahlbohm.bsky.social
Btw, all visualizations in this thread use our perspective-correct approach for rendering 3D Gaussians. It is based on ray casting and can be implemented efficiently. However, the high frame rates reported in our paper are due to the hybrid transparency approach.
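For intuition, the closed form behind such a ray-based evaluation looks roughly like this (a schematic sketch, not the actual CUDA kernel):

```python
# Schematic of evaluating a 3D Gaussian along a ray: the maximum response
# and its depth have closed forms. The real CUDA kernel is far more
# optimized, but the underlying math is this.
import numpy as np

def peak_along_ray(o, d, mu, cov):
    """o: ray origin, d: ray direction, mu: mean, cov: 3x3 covariance."""
    A = np.linalg.inv(cov)
    t_star = ((mu - o) @ A @ d) / (d @ A @ d)  # depth of maximum response
    delta = o + t_star * d - mu
    return t_star, np.exp(-0.5 * delta @ A @ delta)
```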
fhahlbohm.bsky.social
Here are examples using (0) hybrid transparency with K=16, (1) alpha blending of the first 4 fragments per pixel, (2) alpha blending in "global" depth ordering, and (3) order-independent transparency. The model was trained using the same settings as in (0).
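A simplified CPU sketch of the hybrid transparency idea in (0): the K nearest fragments form an exactly blended core, and everything behind them is merged order-independently. The real implementation keeps per-pixel fragment buffers rather than doing a full sort; details differ from the actual GPU kernels.

```python
# Simplified hybrid transparency: exact alpha blending for the K nearest
# fragments (the "core"), an order-independent merge for the rest (the
# "tail"). Details differ from the actual GPU implementation.
import numpy as np

def hybrid_transparency(depths, colors, alphas, K, background):
    depths, alphas = np.asarray(depths), np.asarray(alphas)
    colors = np.asarray(colors, dtype=np.float64)
    order = np.argsort(depths)          # front-to-back (sketch only; the
    core, tail = order[:K], order[K:]   # real kernel keeps a k-buffer)

    color, T = np.zeros(3), 1.0
    for i in core:                      # exact front-to-back blending
        color += T * alphas[i] * colors[i]
        T *= 1.0 - alphas[i]

    if tail.size:                       # order-independent tail merge
        a, c = alphas[tail], colors[tail]
        tail_color = (a[:, None] * c).sum(axis=0) / max(a.sum(), 1e-8)
        tail_cov = 1.0 - np.prod(1.0 - a)
        color += T * tail_cov * tail_color
        T *= 1.0 - tail_cov

    return color + T * np.asarray(background)
```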
fhahlbohm.bsky.social
You can also modify the "Blend Mode" (see the README on GitHub) and the core size K for blending modes where this is applicable. To reduce compile times, we only compile kernels for K in [1, 2, 4, 8, 16, 32] and "round down" for other values (e.g., 12 -> 8).
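The rounding presumably amounts to something like this (a sketch of the behavior described above, not the actual code):

```python
# Sketch of the described "round down" behavior, not the actual code.
COMPILED_KS = (1, 2, 4, 8, 16, 32)

def effective_k(requested_k: int) -> int:
    return max(k for k in COMPILED_KS if k <= requested_k)

assert effective_k(12) == 8  # the example from above
```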
fhahlbohm.bsky.social
Via the "Viewer Config" (F3), you can switch to rendering depth maps; expanding the advanced renderer config lets you switch between expected (shown here) and median depth.
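For reference, common definitions of the two modes (the exact formulas in HTGS may differ in detail): expected depth is the blending-weight-averaged depth, and median depth is where accumulated opacity first crosses 0.5.

```python
# Common definitions of the two depth modes; HTGS's exact formulas may
# differ in detail.
import numpy as np

def expected_and_median_depth(depths, alphas):
    order = np.argsort(depths)  # front-to-back
    d, a = np.asarray(depths)[order], np.asarray(alphas)[order]
    T = np.concatenate(([1.0], np.cumprod(1.0 - a)[:-1]))  # transmittance
    w = T * a                                     # per-fragment blend weights
    expected = (w * d).sum() / max(w.sum(), 1e-8)
    opacity = np.cumsum(w)
    i = np.searchsorted(opacity, 0.5)             # first crossing of 0.5
    median = d[i] if i < len(d) else d[-1]
    return expected, median
```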
fhahlbohm.bsky.social
Don't get confused by the "Time" stuff, which is for dynamic scenes reconstructed by methods such as our recent D-NPC: github.com/MoritzKappel...

HTGS also does not currently support changing the background color or using camera models other than "Perspective" without distortion.
GitHub - MoritzKappel/D-NPC: Official code release for "D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video".
github.com
fhahlbohm.bsky.social
By modifying the "Principal Point" and/or "Focal Length", you can create fun images like the one below. You can even do this while watching your Gaussians train if you set TRAINING.GUI.ACTIVATE to true in the config file.

And yes, you could in theory train on images like this.
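In pinhole terms, those two settings are just the camera intrinsics; a minimal sketch of what changing them does to the projection:

```python
# Minimal pinhole sketch of what the two GUI settings control.
import numpy as np

def project(p_cam, fx, fy, cx, cy):
    """Project a camera-space point to pixels; (cx, cy) is the principal
    point and (fx, fy) the focal length in pixels."""
    x, y, z = p_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])

# Shrinking fx/fy widens the field of view; moving (cx, cy) off-center
# shifts the projection like a tilt/shift lens.
print(project(np.array([0.2, -0.1, 1.0]), fx=500.0, fy=500.0, cx=320.0, cy=240.0))
```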
fhahlbohm.bsky.social
Let's start with the GUI features you might want to try with HTGS. If you open the "Camera Config" panel (F4), you can switch between "Orbital" and "Walking" controls. You can also modify the near/far plane.
fhahlbohm.bsky.social
Many thanks to my co-authors Fabian Friederichs, @timweyrich.bsky.social, @linusfranke.bsky.social, Moritz Kappel, Susana Castillo, @mcstammi.bsky.social, Martin Eisemann, and Marcus Magnor!

Thoughts and things to try in the thread below:
fhahlbohm.bsky.social
We recently released the code for "Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency"

Project Page: fhahlbohm.github.io/htgs/
Code: github.com/nerficg-proj...
Reposted by Florian Hahlbohm
arthurperpixel.bsky.social
I’ve released a new version of my 3D reconstruction tool, Brush 🖌️ It's a big step forward - the quality & speed now match gsplat, and there are a lot of other new features! See the release notes: github.com/ArthurBrusse...

Some of the new features:
[Image: an autumnal stump covered in mushrooms, a still from the interactive 3D reconstruction.]
Reposted by Florian Hahlbohm
fhahlbohm.bsky.social
Merry Christmas :) I tried this as well but with Brush by @arthurperpixel.bsky.social. How many pictures did you take? For me, COLMAP only ended up using about 25 of 50 images and it didn't work that well. Tbf, lighting was pretty bad.
Reposted by Florian Hahlbohm
ericzzj.bsky.social
Volumetrically Consistent 3D Gaussian Rasterization

Chinmay Talegaonkar, Yash Belhe, Ravi Ramamoorthi, Nicholas Antipa

tl;dr: volumetrically integrate 3D Gaussians directly to compute the transmittance across them analytically -> physically accurate alpha values

arxiv.org/abs/2412.03378
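For intuition, a sketch of the closed form this hinges on (an interpretation of the tl;dr, not the authors' code): restricted to a ray, a 3D Gaussian is a 1D Gaussian in the ray parameter, so its line integral, and therefore the transmittance across it, is analytic.

```python
# Sketch of one reading of the tl;dr, not the authors' code: the density
# of a 3D Gaussian along a ray is a 1D Gaussian in t, so the transmittance
# across it has a closed form.
import numpy as np

def analytic_alpha(o, d, mu, cov, density_scale):
    """Alpha from integrating Gaussian density along the unit-direction ray."""
    A = np.linalg.inv(cov)
    a = d @ A @ d
    t_star = ((mu - o) @ A @ d) / a                  # peak along the ray
    delta = o + t_star * d - mu
    peak = density_scale * np.exp(-0.5 * delta @ A @ delta)
    optical_depth = peak * np.sqrt(2.0 * np.pi / a)  # 1D Gaussian integral
    return 1.0 - np.exp(-optical_depth)              # Beer-Lambert
```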
fhahlbohm.bsky.social
I really enjoyed watching the videos the last time you did this. Thanks for making them available to everyone :)