Nikita Lisitsa
@lisyarus.bsky.social
He/him

I teach C++ & computer graphics and make videogames

Working on a medieval village building game: https://youtube.com/playlist?list=PLSGI94QoFYJwGaieAkqw5_qfoupdppxHN&cbrd=1

Check out my cozy road building traffic sim: https://t.ly/FfOwR
Pinned
A new devlog about my medieval village building game! About 4 months of progress in this one!

#indiedev #gamedev #indiegames #devlog

www.youtube.com/watch?v=fymx...
Stockpiles, Roofs, and Much More! - Village Builder Devlog #10
YouTube video by Nikita Lisitsa
Finally have some more time to work on this. The benefit of a noise-based scene is that 1) I can easily adjust its density, and 2) I can generate it in the shader on the fly. This will help in profiling the raytracer in a variety of cases and checking whether it's compute- or memory-bound.
November 25, 2025 at 2:10 PM
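A density-tunable procedural scene like this can be sketched as a hashed threshold test; a minimal stand-in for the shader-side generation, assuming a made-up integer hash (`hash3`) rather than the dot noise actually used:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch: per-voxel hash thresholded by a density knob. The
// hash constants and finalizer are illustrative, not the actual noise.
std::uint32_t hash3(std::uint32_t x, std::uint32_t y, std::uint32_t z) {
    std::uint32_t h = x * 0x8da6b343u ^ y * 0xd8163841u ^ z * 0xcb1ab31fu;
    h ^= h >> 13; h *= 0x7feb352du; h ^= h >> 15;
    return h;
}

// density in [0,1] is the expected fraction of solid voxels, so the same
// scene can be regenerated denser or sparser by changing one uniform.
bool voxelSolid(int x, int y, int z, float density) {
    std::uint32_t h = hash3(std::uint32_t(x), std::uint32_t(y), std::uint32_t(z));
    float u = (h & 0xffffffu) / float(0x1000000); // uniform-ish in [0, 1)
    return u < density;
}
```

Because the result depends only on the voxel coordinates, the same test can run on the fly inside the traversal shader with no voxel storage at all.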
Once again visualizing the number of steps the raytracer takes through the voxel map (more yellow = more steps, red = hit a voxel). Looks quite surreal
November 23, 2025 at 7:59 PM
Trying a new voxel scene to test memory vs compute GPU performance in my raytracer, using @xordev.com's dot noise: mini.gmshaders.com/p/dot-noise
November 23, 2025 at 9:10 AM
Soo chunk size profiling results are kinda funny: the best chunk size is 4x4x4 (64 voxels), just like the 64-wide octree nodes I unsuccessfully tried earlier! Maybe if I store octree node data in a 3D texture, I can get the best of both approaches...
November 22, 2025 at 8:06 PM
Rewrote the voxel raytracer to use a two-level chunk system (a 3D texture atlas for 16³-sized chunks, another 3D world-space texture referencing the atlas). Without any optimizations, for primary (camera) rays it's already 25% faster; not so much for random bounce rays.
November 22, 2025 at 5:23 PM
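The two-level lookup can be sketched on the CPU side: `chunkRef` plays the role of the world-space indirection texture and `atlas` the 3D chunk atlas. The names, the empty-chunk sentinel, and the linear layout are illustrative assumptions, not the actual GPU textures:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

constexpr int CHUNK = 16;
constexpr std::uint32_t EMPTY_CHUNK = 0xffffffffu;

struct VoxelWorld {
    int wx, wy, wz;                      // world size in chunks
    std::vector<std::uint32_t> chunkRef; // per-chunk: atlas slot or EMPTY
    std::vector<std::uint8_t> atlas;     // packed 16^3 voxel blocks

    std::uint8_t voxelAt(int x, int y, int z) const {
        int cx = x / CHUNK, cy = y / CHUNK, cz = z / CHUNK;
        std::uint32_t slot = chunkRef[(cz * wy + cy) * wx + cx];
        if (slot == EMPTY_CHUNK) return 0; // skip a whole 16^3 region at once
        int lx = x % CHUNK, ly = y % CHUNK, lz = z % CHUNK;
        return atlas[slot * CHUNK * CHUNK * CHUNK
                     + (lz * CHUNK + ly) * CHUNK + lx];
    }
};
```

The win for primary rays comes from the coarse level: one fetch rules out an entire empty chunk, while incoherent bounce rays scatter across atlas slots and lose cache locality, which may be why they benefit less.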
Rewriting my voxel storage once again, this time using a two-level tree, aka chunking. Here I'm raymarching the chunk storage, where individual unrelated nonempty 16³-sized chunks are packed in some order. Quite a surreal view of the scene :)
November 22, 2025 at 4:34 PM
Given that my primary perf sink is tracing highly incoherent rays (for Monte-Carlo) and not primary camera rays (I can really just rasterize voxels, I don't need enormous scenes), maybe octrees were a bad idea from the start?
November 21, 2025 at 10:59 PM
Right now in the 1 sample per frame with just 1 bounce scenario, octree traversal ends up being 30% slower than raw 3D texture traversal. With 0 bounces (just the camera ray, which are very coherent) it seems to be about 50% slower. I think I messed something up really bad 😅
November 21, 2025 at 7:30 PM
I noticed that a lot of my octree traversal perf problems are due to rays not hitting anything when they should've (and thus looping until the max step count is reached). Here I'm visualizing whole warps that had such rays (for better readability), using subgroupAny()
November 21, 2025 at 6:52 PM
Visualizing the number of steps it takes for the octree traversal algorithm to find an intersection (yellow = higher)
November 21, 2025 at 2:55 PM
First iteration of sparse octree traversal works! It's already about 2x faster than direct 3D texture traversal for primary camera rays, but much slower for incoherent random monte-carlo rays. Need to optimize the hell out of it now
November 21, 2025 at 1:33 PM
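For reference, the "direct 3D texture traversal" baseline the octree is being compared against is typically a grid DDA in the style of Amanatides & Woo. A minimal CPU sketch, with `solid` standing in for the 3D texture fetch and `maxSteps` for the traversal cap:

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <limits>

struct Hit { bool found; int x, y, z, steps; };

// Walk cell to cell along the ray, always stepping across the nearest
// grid plane (Amanatides & Woo). One fetch per visited voxel.
Hit traceVoxelDDA(float ox, float oy, float oz,
                  float dx, float dy, float dz,
                  const std::function<bool(int, int, int)>& solid,
                  int maxSteps) {
    const float INF = std::numeric_limits<float>::infinity();
    int x = int(std::floor(ox)), y = int(std::floor(oy)), z = int(std::floor(oz));
    int sx = dx >= 0 ? 1 : -1, sy = dy >= 0 ? 1 : -1, sz = dz >= 0 ? 1 : -1;
    // Ray-parameter distance between consecutive grid planes, per axis.
    float tdx = dx != 0 ? 1.0f / std::fabs(dx) : INF;
    float tdy = dy != 0 ? 1.0f / std::fabs(dy) : INF;
    float tdz = dz != 0 ? 1.0f / std::fabs(dz) : INF;
    // Ray-parameter distance to the first plane crossing, per axis.
    float tx = dx != 0 ? (sx > 0 ? x + 1 - ox : ox - x) * tdx : INF;
    float ty = dy != 0 ? (sy > 0 ? y + 1 - oy : oy - y) * tdy : INF;
    float tz = dz != 0 ? (sz > 0 ? z + 1 - oz : oz - z) * tdz : INF;
    for (int step = 0; step < maxSteps; ++step) {
        if (solid(x, y, z)) return {true, x, y, z, step};
        if (tx <= ty && tx <= tz) { x += sx; tx += tdx; }
        else if (ty <= tz)        { y += sy; ty += tdy; }
        else                      { z += sz; tz += tdz; }
    }
    return {false, 0, 0, 0, maxSteps}; // ray escaped or hit the step cap
}
```

An octree traversal wins when it can skip many of these unit steps at once through empty space, which is exactly what the step-count visualizations above are measuring.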
I'm rewriting my voxel thing to use wide sparse octrees and it's going exactly as expected :)
November 20, 2025 at 9:23 PM
Reposted by Nikita Lisitsa
I think "fake it til' you make it" is genuinely a great bit of advice but people (understandably!) think it means "lie until you've duped everyone" when in reality it's much more "act like you belong here because you actually do and you need to get over yourself"
November 20, 2025 at 4:27 PM
I just wanted to make games, exhibit 4562:

(this is from here: agraphicsguynotes.com/posts/understanding_the_math_behind_restir_gi)
November 19, 2025 at 12:50 PM
Still no luck integrating ReSTIR into full GI (using a simpler way than what the ReSTIR GI paper does). Left is ground truth, right is my attempt. It runs faster, but is clearly darker...
November 19, 2025 at 12:48 PM
Clamped some weights too conservatively and my lighting got funny
November 19, 2025 at 12:47 PM
Messed up scene generation a bit and got some nice
v i b e s
November 19, 2025 at 9:57 AM
I think I've messed up the weights again! (It's quite easy in ReSTIR, lol.) Here's a more correct image (I hope?). Anyway it doesn't matter much until I combine it using MIS into full GI computation
November 19, 2025 at 9:51 AM
First ReSTIR test using 16 reservoir proposal samples (right image) compared to uniform direction sampling and ignoring indirect light (left image) with many spread out lights (~3k light-emitting voxel faces here). Quite insane noise reduction going on in here!
November 19, 2025 at 9:40 AM
First test with ReSTIR, just for direct lighting for now. Figuring out the weights was a bit finicky, ngl. This scene is a bit too easy since all light sources are in the same spot, gonna try placing a bunch of light sources tomorrow.
November 18, 2025 at 9:50 PM
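The reservoir at the core of this can be sketched as streaming weighted reservoir sampling: each of the M proposal samples is fed in with weight w_i = p_hat(x_i) / p(x_i), and one survivor is kept with probability proportional to its weight. This is the standard RIS bookkeeping, not the actual shader code; all names here are illustrative:

```cpp
#include <cassert>
#include <cmath>

// One reservoir: keeps a single candidate out of a stream, with probability
// proportional to its resampling weight. u01 is a uniform random number in
// [0, 1) (passed in so the update itself stays deterministic).
struct Reservoir {
    int sample = -1;    // index of the currently chosen candidate
    float wSum = 0.0f;  // running sum of resampling weights
    int count = 0;      // number of candidates seen so far (M)

    void update(int candidate, float w, float u01) {
        wSum += w;
        ++count;
        if (u01 * wSum < w) sample = candidate; // keep with prob w / wSum
    }
};
```

The finicky part is after selection: the chosen sample is shaded with the unbiased contribution weight W = wSum / (count * p_hat(chosen)), and getting that denominator wrong is exactly the kind of weight bug described in the posts above.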
Just for fun: here's p=0.99 and p=1.0. The last image clearly shows why you can't use direct light sampling alone and need to combine it with something else (uniform/BRDF sampling/etc.) via MIS:
November 18, 2025 at 4:17 PM
Fixed the MIS implementation, now it converges to the same image as basic uniform direction sampling. Time for tests: 4 renders with different values of light sampling probability. p=0.25 seems to be the best, and reduces noise severely:
November 18, 2025 at 4:11 PM
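A minimal sketch of the mixing being swept here, assuming p is the probability of taking a light sample instead of a uniform hemisphere direction. In a one-sample MIS estimator the contribution is divided by the mixture density below regardless of which strategy actually produced the direction, which is why the light-sampling pdf must be evaluable for uniformly sampled directions too (the "one go" computation mentioned above):

```cpp
#include <cassert>
#include <cmath>

// Combined density of a direction under the two-strategy mixture:
// with probability p, sample a light; otherwise, sample uniformly.
// Dividing by this keeps the estimator unbiased for any p in (0, 1].
float mixturePdf(float p, float pdfLight, float pdfUniform) {
    return p * pdfLight + (1.0f - p) * pdfUniform;
}
```

At p=1 the uniform term vanishes, so directions that no light sample can produce are never generated, which is the darkening visible in the p=1.0 render.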
Once again trying to implement basic MIS (to compare it to ReSTIR later). Optimized it to do raytracing & light sampling probability computation in one go so it's not that slow anymore, but I still seem to have some bugs...
November 18, 2025 at 3:49 PM
Gotta admit I love how this thing looks both in noisy-pixelated style and in the liminal smooth fully-converged style. Both are really cool aesthetics imo
November 18, 2025 at 1:26 PM
Averaging gaussian lobes for nearby light probes helps a lot, but is still equally far from real-time. After the same ~100k optimizer iterations, far away lobes mostly point in the right direction, but still have to learn the correct sharpness value.
November 18, 2025 at 8:39 AM
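A minimal sketch of one such probe lobe, assuming the usual spherical Gaussian parameterization (amplitude, unit axis, sharpness) is what the optimizer is fitting; all names and values are illustrative:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Spherical Gaussian lobe: peaks at dir == axis, and larger sharpness gives
// a narrower lobe. A lobe can point the right way yet still look wrong
// until its sharpness converges, matching the observation above.
float evalLobe(float amplitude, Vec3 axis, float sharpness, Vec3 dir) {
    return amplitude * std::exp(sharpness * (dot(dir, axis) - 1.0f));
}
```

One reason spherical Gaussians are popular for probes is that this form stays differentiable in all three parameters, so a gradient-based optimizer can fit axis and sharpness directly.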