redblobgames.com.web.brid.gy
@redblobgames.com.web.brid.gy
Explaining algorithms with interactive diagrams
RotMG map seeds
I had just wrapped up a project and wanted to do something small and fun before I started the next one. The Flash runtime is being reimplemented in Rust, and I've enjoyed re-enabling the Flash on my site. That had me thinking about my Flash projects again, so I decided to revisit an unsolved mystery from one of them.

Back in 2010, I wrote an article about procedurally generating wilderness maps using Voronoi polygons, and I made a demo in Flash. The authors of Realm of the Mad God used it to generate thirteen maps that shipped with the game and were used from 2010 to 2024. The authors don't remember the map seeds they put into the map generator, and I've been curious what those seeds were.

I told myself at the time that it would be a pain to recover the seeds. There are 2^32 = 4 billion map seeds, and it took 5 seconds to generate each map of 2^22 = 4 million tiles. That'd take 20 billion seconds, or around 630 years, to generate all the maps. But with some optimizations and shortcuts … I found a match!

World 5 from RotMG compared to Mapgen2 seed 30997-1

After some more work, I found more matches:

world   seed (mapgen2)
1       (not matched)
2       perlin 20927-1
3       perlin 84542-1
4       (not matched)
5       perlin 30997-1
6       (not matched)
7       perlin 65166-1
8       perlin 7785-1
9       perlin 43671-1
10      (not matched)
11      perlin 59103-1
12      blob, many
13      radial, many

(The original table also shows each world's map from RealmEye next to the matching mapgen2 output from my page.)

They don't match exactly, but they are close. I wrote up the process and results. I had allocated 4 hours for this project but ended up spending 9.
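The shortcut can be sketched like this: instead of generating all 4 million tiles for each candidate seed, compare a small fingerprint of sampled tiles and reject a seed on the first mismatch. This is a toy version of that idea; generateTile() is a made-up stand-in, not the real mapgen2:

```javascript
// Toy brute-force seed search: reject a candidate seed as soon as one
// sampled tile differs, instead of generating the full 4-million-tile map.
// generateTile() is a hypothetical stand-in for the real map generator.
function generateTile(seed, x, y) {
    let h = (seed ^ Math.imul(x, 374761393) ^ Math.imul(y, 668265263)) >>> 0;
    h = Math.imul(h ^ (h >>> 13), 1274126177) >>> 0;
    return (h ^ (h >>> 16)) % 16;   // pretend there are 16 tile types
}

const SAMPLES = [[0, 0], [100, 7], [512, 512], [1023, 31],
                 [17, 900], [300, 300], [64, 128], [999, 2]];

function fingerprint(seed) {
    return SAMPLES.map(([x, y]) => generateTile(seed, x, y));
}

function findSeed(target, maxSeed) {
    for (let seed = 0; seed < maxSeed; seed++) {
        // "every" short-circuits: most seeds are rejected on the first tile
        if (SAMPLES.every(([x, y], i) => generateTile(seed, x, y) === target[i])) {
            return seed;
        }
    }
    return -1;
}
```

With 8 sampled tiles of 16 types, a wrong seed survives all samples with probability roughly 16^-8, so only the rare candidates that pass the fingerprint need a full map render and comparison.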
www.redblobgames.com
November 8, 2025 at 8:35 AM
Mapgen4's use of WebGL2
I try to avoid big software rewrites. But sometimes the rewrites are just an excuse to re-familiarize myself with the code. I rationalized rewriting Mapgen4's renderer by saying I wanted to use WebGL2. And I did use WebGL2, but the improvements turned out to be minor:

- Vertex Array Objects to simplify and speed up code — but OES_vertex_array_object makes this available in WebGL1 (good support on all platforms)
- gl_VertexID to simplify and speed up code — only available in WebGL2
- R16F texture format for elevation and depth — needing EXT_color_buffer_half_float in WebGL1 (poor support on Android) or EXT_color_buffer_float in WebGL2 (good support on all platforms)
- Linear filtering of elevation and depth — needing the R16F format first
- SRGB8 texture format for linear rgb color handling — supported in WebGL2, but could be emulated in WebGL1

I did not need to switch to WebGL2 for its features, but implementing those features on WebGL1 would make the code more complex and error prone. I decided to switch to WebGL2 so that I could simplify my code. The Vertex Array Objects and gl_VertexID didn't change the rendering output, but the texture format did, both for good and for bad, and I wanted to share some screenshots.

Let's start with a screenshot comparison I showed last time:

GL_NEAREST vs GL_LINEAR filtering

The differences are slight — there are blue blotches on the ground when I use GL_LINEAR filtering. But why? Elevation is slightly too big for an 8-bit color channel, so I encode the elevation in the red and green channels (8 bits each), as [fract(elevation), floor(elevation)/256.0]. For example, an elevation of 32.5 would be encoded as [32.5 % 1.0, 32.0/256.0], or [0.5, 0.125]. In the fragment shader I decode it with green*256.0 + red, in this case 0.125*256.0 + 0.5 = 32.5.
GL_LINEAR texture filtering will blend each channel independently, then combine the channels into one value:

(a.hi + b.hi) / 2.0 + ((a.lo + b.lo) / 2.0) / 256.0

But that produces a different value than correctly combining the channels first and then blending:

((a.hi + a.lo / 256.0) + (b.hi + b.lo / 256.0)) / 2.0

(In exact arithmetic the two expressions are equal; the visible difference comes from the filtered result being rounded back to 8 bits per channel, an error that the 256.0 multiplier in the decode amplifies.) Near sea level, some of the below-sea-level elevation must be getting blended incorrectly into the above-sea-level ground. So I can't use GL_LINEAR filtering with this encoding.

If I could store a single 16-bit value instead of two 8-bit values, GL_LINEAR would work correctly. And I wouldn't have to split the elevation into two channels, and I wouldn't have to combine them together again. It'd be simpler code and it would look better. Or so I thought. Here's the comparison:

RGBA8 vs R16F texture, with GL_NEAREST filtering

It looks the same, right? Well, no, it's slightly different, especially in the mountains:

RGBA8 vs R16F texture, zoomed in on mountains

The R16F format makes the mountains smoother! It also makes the valleys smoother, but that difference is harder to see. The outputs should have been the same. Why aren't they?

It turns out mapgen4 had a bug in it. I didn't actually encode the elevation the way I described above. I encoded with vec4(fract(256.0*e), e, 0, 1) and decoded with dot(color.xy, vec2(1.0/256.0, 1.0)). That doesn't preserve the value. I should have used vec4(fract(256.0*e), floor(256.0*e)/256.0, 0, 1).

I used ObservableHQ Plot to plot the difference in these two calculations. The black dots are the incorrect encoding, and we can see that half of them are a little bit off:

Correct vs incorrect encoding

This bug adds a "texture" to the sides of mountains. I like the old look better, but I don't want to keep the bug. I would prefer adding a texture intentionally. After I got R16F working, I decided to compare GL_NEAREST to GL_LINEAR.
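As an aside, the encoding bug can be demonstrated in a few lines of plain JavaScript (a sketch; the function names are mine, and 8-bit quantization of each channel is omitted for clarity):

```javascript
// Compare the correct two-channel elevation encoding with the buggy one
// that shipped in mapgen4. Values are in [0, 1); 8-bit quantization of
// each channel is omitted for clarity.
const fract = (x) => x - Math.floor(x);

// correct: vec4(fract(256.0*e), floor(256.0*e)/256.0, 0, 1)
const encodeCorrect = (e) => [fract(256.0 * e), Math.floor(256.0 * e) / 256.0];

// what mapgen4 actually did: vec4(fract(256.0*e), e, 0, 1)
const encodeBuggy = (e) => [fract(256.0 * e), e];

// decode: dot(color.xy, vec2(1.0/256.0, 1.0))
const decode = ([lo, hi]) => lo / 256.0 + hi;

decode(encodeCorrect(0.501));   // round-trips to 0.501
decode(encodeBuggy(0.501));     // 0.502 — off by fract(256*e)/256
```

Values that happen to be exact multiples of 1/256 survive the buggy encoding unchanged, which is why only about half the black dots in the plot are off.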
I couldn't use GL_LINEAR before because it interpolated each channel separately, but I can use it now that I have only one channel:

GL_NEAREST and GL_LINEAR look almost the same

It looks almost the same. Looking closer, the coastlines look slightly smoother:

GL_NEAREST vs GL_LINEAR, coastlines are smoother

So that wasn't as much of a win as I thought it would be. To compensate for the mountains looking too smooth, I added a slider mountain_folds that adds geometry to make the sides of mountains look more interesting:

mountain_folds slider to add geometric interestingness

I enjoyed the graphics rewrite. I hadn't looked at the graphics code in a long time, and I found lots of places for improvement. I found and understood a bug that I had long suspected. All of this could have been done without a rewrite, but the rewrite got me to actually do it.
October 25, 2025 at 8:31 AM
Mapgen4 river shader
Back in 2018 I had the idea to prerender short segments of river bends and confluences into a texture, and then use that texture to draw the rivers to the screen. I was trying to draw exactly one triangle per Delaunay triangle, so that I could generate the geometry ahead of time and only change the texture coordinates. I planned to implement this as a 2D table:

Debug visualization of river rendering texture

To simplify, I switched to using Bézier curves:

Debug visualization of curved river rendering texture

I was planning to use the input and output width as the indices into the table, but ended up using only one width. What I didn't realize at the time was that I no longer needed a 2D table; I could've used a 1D table. And even better, I could've drawn the curve in a shader, like I did in these shader experiments. But I was focused on shipping, not on doing things the cleanest or most efficient way.

After I rewrote the main renderer, I decided to rewrite the river renderer too. I adapted one of the earlier shader experiments to make a river shader, and compared before and after:

Old and new river renderer

It's a bit hard to tell at that zoom level, but if you open the two images in separate tabs you can flip back and forth to compare. Here's a closer view:

Old and new rendering, close up

The rivers look sharper. But some of the flaws that were hidden before by the blurriness are now visible. To fix those flaws, I wanted to use a shader to draw wide Bézier curves inside a triangle. I didn't have a good intuition for Bézier curves with barycentric coordinates, so I had an LLM generate a quick & dirty tool to help me build my intuition. It's like Shadertoy but for one triangle. This was an attempt at learning how to "vibe code", and I have mixed feelings about it, but that's a story for another time. Playing with the toy made me see that there should be a way to do what I want, but it's not via Bézier curves like I originally thought.
Instead, I can curve fit two degree-5 polynomials, one for each side of the river. Using the LLM-generated code helped me figure this out, but it wasn't able to directly give me a solution. I'll have to work out the details myself at some point. I added "better river shader" to my list, but at a low priority, so I'm going to work on some other things next and come back to it.
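For context, the curve math underlying the prerendered-table approach is small; a quadratic Bézier evaluation (an illustrative sketch of mine, not mapgen4's actual shader code) looks like this:

```javascript
// Evaluate a quadratic Bezier curve at parameter t in [0, 1].
// p0 and p2 are endpoints, p1 is the control point.
function bezier2(p0, p1, p2, t) {
    const u = 1.0 - t;
    return [u*u*p0[0] + 2*u*t*p1[0] + t*t*p2[0],
            u*u*p0[1] + 2*u*t*p1[1] + t*t*p2[1]];
}

// A river segment entering one triangle edge and exiting another can be
// prerendered into a texture by sampling this curve at many t values.
bezier2([0, 0], [1, 0], [1, 1], 0.5);   // → [0.75, 0.25]
```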
October 14, 2025 at 2:55 PM
Mapgen4 renderer
It's been a while since I've worked on mapgen4 features. In 2018 I declared it finished, put it away, and worked on other projects. Over the years I have updated the code (es6 modules, typescript, pointer events, boundary points, pnpm) without adding any features. I had been leaving all feature updates for a future map generator.

Last year I decided it was time to start thinking about new features, and I started experimenting with some prototypes. That made me realize there are many improvements I'd like to make to mapgen4 before getting into big features.

Mapgen4 debug view

I made a list. One of the projects on the list is rewriting the renderer. But why?

- Migrate to WebGL2. I started mapgen4 in 2017, and WebGL2 wasn't widely available until 2021. I'd like to use it both for features and for performance. I'm using a WebGL1 library, Regl.js, which does not support WebGL2.
- Reduce load time. Regl.js almost doubles the JS size of mapgen4. I would be able to shrink mapgen4 significantly by switching away from it.

Rewriting the renderer was not something I was looking forward to. It's work that has to be done in "one shot", where the intermediate steps aren't testable. I decided to use an LLM to help me. I rarely use LLMs for coding and learned a lot. In this project I used the LLM for translating existing working code from Regl.js to WebGL1. This felt different from having it write new code.

The resulting code had some subtle bugs. I was able to track some of them down by comparing the output of the original mapgen4 renderer with the rewritten renderer. Having a previous working version was important.

It got me out of my analysis paralysis. Having something that runs, rather than a blank editor window, was a huge win. I didn't like the structure of the code it generated; it triggered XKCD 386 for me. I ended up replacing all of its code, bit by bit, testing after each change. Along the way I re-evaluated some of how the renderer worked.
For example, I'm using nearest neighbor filtering in many places, even though linear might be better.

GL_NEAREST vs GL_LINEAR filtering

Look at how much smoother the ocean colors are with linear filtering! But there are blue splotches near rivers. I decided to keep nearest neighbor filtering for now.

I also discovered some bugs, like this one where one edge of the map is jagged, but only noticeable when the camera is rotated and zoomed in:

Bug: jagged edge

Overall, I don't think the LLM saved me any time. I didn't end up keeping the code it gave me. But it got me unstuck, and that meant I actually made progress. The main benefit of the rewrite was output size:

File         Size before   Size after
_worker.js         18543        18579
_bundle.js        150672        69469
(total)           169215        88048

But the source code did get bigger, from 600 lines to 750 lines (to avoid 9500 lines of Regl.js). The secondary benefits are that I got to revisit some of the decisions I made, I found some bugs, and I got unstuck.

I didn't stop at the main renderer. I also rewrote the river renderer, which saved another 8666 bytes. That's for the next blog post.
October 3, 2025 at 2:19 PM
Harnessing ChatGPT hallucinations
The web server I use (nginx) logs errors to a file. I check this occasionally to make sure nothing's going wrong with my server. I typically get these kinds of 404 requests:

- typos: the url /pathfinding/a-star/inroduction.html was meant to be /pathfinding/a-star/introduction.html
- parsing errors: the url /articles/visibility/)and was some text /articles/visibility/) and that went through a url regexp
- misconfigured crawlers: the url /region-graph-path.png was originally from html but the crawler tried fetching it from the top level instead of the subfolder
- attackers: the url /FCKeditor/editor/filemanager/connectors/asp/connector.asp?Command=FileUpload&Type=File&CurrentFolder=%2F is some crawler looking for weaknesses in my web site

Last week I noticed a request for /articles/autotiles/ with the HTTP Referer chatgpt.com. This is an unusual request. It's not syntactic like the above; it's a semantic error. Probably an AI confabulation/hallucination. The referrer means that the user was on chatgpt.com and clicked a link to https://www.redblobgames.com/articles/autotiles/, which is a 404 error. I decided to investigate more.

If I ask "what does red blob games have to say about autotiles", ChatGPT will tell me:

    I couldn't find any evidence that Red Blob Games (Amit Patel) has written anything specifically about autotiles (in the sense of tilesets that automatically pick tile variants based on neighbors). Their site covers many things like grids, map generation, procedural terrain, pathfinding, etc., but I didn't locate a page dedicated to autotiling.

But if I ask "tell me about redblobgames tutorial on wave function collapse", ChatGPT will tell me:

    The Wave Function Collapse (WFC) tutorial by Red Blob Games is an in-depth guide to understanding and implementing the WFC algorithm for procedural content generation. It's highly regarded for breaking down complex concepts in a very approachable way.
I don't actually have a tutorial on Wave Function Collapse, but ChatGPT will give me the url https://www.redblobgames.com/articles/wfc/.

This is a reminder that you shouldn't believe everything an LLM tells you! But lots of people have already written about that. I want to explore something different. Instead of treating the hallucinations as something to discard, I want to treat them as something to harness.

I looked through my error logs to find other non-existent URLs that ChatGPT gave out. I also looked at what OpenAI's crawlers found, but they weren't as useful. I looked at the top ten:

- /sliding-puzzle/: this is a classical AI topic that could fit into game-ai
- /articles/minimax/: this is also a classical AI topic that could fit into game-ai
- /grids/isometric/: this is a topic I'd like to write about
- /maps/noise/introduction.html: this is the wrong folder for /articles/noise/introduction.html
- /pathfinding/a-star/visualization.html: this is an existing folder but a non-existing page
- /grids/hex-grids/: this is the wrong version of /grids/hexagons/
- /articles/procgen/: this is a page I could've written, to point to other procgen topics
- /grids/mazes/: this is a reasonable guess of what I might have written about
- /articles/mazes/: same
- /maps/noise/: wrong folder

A subset of these are interesting to me: topics that I could write about.
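Pulling these out of the logs can be automated. Here's a sketch that assumes nginx's default "combined" access-log format (the regexp and function name are mine; adjust if your log format differs):

```javascript
// Count 404'd paths whose Referer includes chatgpt.com, given nginx
// "combined"-format access log lines. The log format is an assumption.
const LOG_LINE = /"(?:GET|HEAD|POST) (\S+) [^"]*" (\d{3}) \d+ "([^"]*)"/;

function hallucinated404s(lines) {
    const counts = new Map();
    for (const line of lines) {
        const m = LOG_LINE.exec(line);
        if (m && m[2] === "404" && m[3].includes("chatgpt.com")) {
            counts.set(m[1], (counts.get(m[1]) ?? 0) + 1);
        }
    }
    return counts;
}
```

Sorting the resulting map's entries by count descending gives the "top ten" list above.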
I went deeper into the list of 404s and collected these hallucinated URLs: /x/2048/hilbert.html /pathfinding/a-star/maze.html /pathfinding/a-star/visualization.html /x/1725-aabb-rectangle-overlap/ /x/1842-agent-based-model/ /x/1845-ai-car/ /grids/algorithms/ /x/1813-alpha-beta/ /pathfinding/ants/ /x/1844-ants/ /x/1841-atmosphere-simulation/ /articles/autotiles/ /x/1614-balance/ /x/1840-barycentric-coordinates/ /x/1842-barycentric-coordinates/ /x/1819-battleship-probability/ /x/1723-battleship-strategy/ /x/1846-battleship/ /x/1911-bits/ /x/1910-bitwise/ /x/1743-boids/ /x/1847-boids/ /x/2020-boids/ /x/1729-bresenham/ /maps/catan/ /x/1830-catan/ /maps/cave-generation/ /x/1845-cell-automata/ /x/2020-cells/ /articles/cellular-automata/ /grids/cellular-automata/ /x/1840-cellular-automata/ /x/1841-cellular-automata/ /x/1843-cellular-automata/ /x/2020-cellular-automata/ /x/2023-cellular-automata/ /x/2302-cellular-automata/ /grids/circle-circle-collision/ /x/1728-circle-drawing/ /x/1721-circle-packing/ /x/1830-circle-packing/ /x/1844-circle-packing/ /x/1945-circle-packing/ /articles/cities/ /x/1948-citizenship/ /maps/city-building/ /x/1840-city-generator/ /x/1841-city-generator/ /x/2023-city-generator/ /x/1842-citygen/ /x/1843-citygen/ /articles/collision-detection/ /x/1841-combat-model/ /x/compression/brotli.html /x/compression/string.html /articles/connected-tiles/ /articles/convex-hull/ /articles/conways-game-of-life/ /x/1721-conways-game-of-life/ /x/2020/cookie-clicker-save-editor/ /x/2020-cookie-clicker-save-viewer/ /grids/coordinates/ /maps/coordinates/ /x/2020-creature-trainer/ /x/1728-curve-editor/ /x/1729-curve-editor/ /articles/2d-geometry/ /articles/3d-octrees/ /pathfinding/d-star/introduction.html /articles/3d-voxel-traversal/ /grids/3d/ /articles/data-oriented-design/ /x/1724-delaunay-triangulation/ /x/1722-delaunay/delaunay/ /x/1845-dialogue-trees/ /articles/dialogue/ /x/2018-dice-calculator/ /x/1846-digging/ /x/dither/ /x/1844-dots-and-crosses/ 
/articles/dungeon-generation/ /maps/dungeon-generation/ /articles/procedural-generation/dungeon/ /x/dungeon/dungeon.html /x/2020/easing-graphs/ /x/1842-economy-sim/ /x/1843-ecosystem-simulation/ /x/1843-emacs-movement/ /x/1843-erosion/ /x/1843-evolution-creatures/ /x/1724-evolution-simulation/ /x/1841-evolution-simulation/ /x/1843-evolution-simulation/ /x/1844-evolution-simulation/ /x/1845-evolution-simulation/ /x/1811-evolution-simulator/ /x/1849-evolution-simulator/ /x/1810-evolution/ /x/1840-evolution/ /x/1842-evolution/ /x/1844-evolution/ /x/1846-evolution/ /x/1904-evolution/ /x/1942-evolutionary-algorithms/ /articles/finite-state-machines/ /x/1842-fire/ /maps/flag/ /x/flag/ /x/2023-flatbuffers/ /x/2020-float-to-bits/ /pathfinding/flowfield/ /fluid-simulation/ /x/1843-fluid-simulation/ /articles/fog-of-war/ /articles/formation-control/ /x/1843-fractal-tree/ /x/1843-frame-timing/ /pathfinding/funnel/ /maps/galaxy/ /x/1728-game-architecture/ /x/1842-game-loop/ /x/2023-game-programming-links/ /x/1618-genetic-algorithm/ /x/1842-genetic-algorithm/ /x/1844-genetic-algorithm/ /articles/genetic-algorithms/ /x/1717-genetic-algorithms/ /x/1724-genetic-algorithms/ /x/1816-genetic-algorithms/ /x/1830-genetic-algorithms/ /x/1845-genetic-algorithms/ /x/1951-genetic-algorithms/ /x/2020-genetic-algorithms/ /x/1844-genetic-drift/ /articles/geometry/ /x/1845-gradient-descent/ /x/1908-gradient-descent/ /x/1811-graph-maps/mazes.html /x/1843-graphics-pipeline/ /articles/graphs/ /x/1844-gravity-simulator/ /grids/circle-rect-intersection.html /grids/floodfill.html /grids/wordsearch.html /articles/hash-table/ /maps/heightmaps/ /x/1840-hex-map-editor/ /x/1843-hex-map-editor/ /x/hex-map-maker/ /x/1840-hex-range/ /x/2342-hex-roads/ /x/1843-hex-sphere/ /x/1843-hex-tile-mesh/ /x/1842-hex-viewer/ /x/1844-hexagon-tilings-spherical/ /x/hexagonal-grid-generator/ /x/1843-hexagons-on-sphere/ /x/1840-hexagons-vs-squares/ /maps/hexagons/ /x/honeycomb/ /x/1729-hydraulic-erosion/ 
/x/1843-icosphere-hexagons/ /x/1812-ik/ /x/1816-ik/ /x/1843-ik/ /x/1844-ik/ /x/1846-ik/ /x/2020-infection-model/ /x/1923-intro-aabb/ /x/1722-intro-spatial-partitioning/ /x/1724-inverse-kinematics/ /x/1840-inverse-kinematics/ /x/1842-inverse-kinematics/ /x/1849-inverse-kinematics/ /x/1908-inverse-kinematics/ /maps/islands/ /x/1842-isometric-depth-sorting/ /x/1911-isometric-projection/ /x/1847-isometric-tiles/ /x/1911-isometric-voxels/ /x/1911-isometric-voxels/ /articles/isometric/ /grids/isometric/ /isometric/ /x/1930-jigsaw-puzzle/ /x/1806-jumping/ /x/1721-kdtree/ /articles/kinematics/ /x/1728-langstons-ant/ /x/1840-langtons-ant/ /x/1844-langtons-ant/ /x/1943-langtons-ant/ /x/1844-life-simulation/ /x/2048-life/ /x/2022-lighting/ /x/1842-line-drawing-algorithms/ /x/2020-lorenz-attractor/ /x/1847-lowpoly/ /x/2023-lsystem/ /x/1722-lsystems/ /x/1843-lsystems/ /articles/map-generation/ /catan/map-generator/ /maps/mapgen/ /x/1844-mapgen/ /x/1843-mapgen4/ /maps/mapmaker/ /x/1843-mapping-spherical-worlds/ /x/1910-marching-squares/ /x/1728-markov-decision-processes/ /x/1843-matrix-transformations/ /matrix-transforms/ /grids/maze-generation/ /x/1847-maze-generation/ /articles/maze/ /pathfinding/maze/ /x/2020-mcts/ /maps/mesh/ /articles/minimax/ /pathfinding/minimax/ /x/1941-minimax/ /x/1840-model-view-controller/ /articles/monads/ /articles/monte-carlo-tree-search/ /x/monty-hall/ /x/1830-morphogenesis/ /pathfinding/multi/ /pathfinding/navmesh/ /x/1841-neural-networks/ /x/2217-nodal/ /x/1847-noise-viewer/ /x/1729-noise/ /x/1840-noise/ /x/1724-np-completeness/ /x/1846-octrees/ /x/1726-offset-paths/ /x/1845-optical-flow/ /articles/organisms/ /x/1729-p-vs-np/ /x/1924-packing-boxes/ /pathfinding/pacman/ /x/2020-pandemic-simulation/ /x/1843-particle-life/ /x/2020-particle-life/ /x/1846-particles/ /articles/partitioning/ /pathfinding/jump-point-search.html /x/1843-pedestrian-simulation/ /maps/perilous-shores/ /maps/perlin-noise/ /x/1729-perlin-noise/ /x/1842-physics-simulations/ 
/articles/physics/ /x/1723-physics/ /x/1841-physics/ /x/pixel-count/ /x/1613-pixel-grid/ /x/1841-planet-generation/ /x/1844-planet-generation/ /x/1843-planet-generator/ /x/planet/ /articles/noise/poisson-disc/ /articles/poisson-disk-sampling/ /x/1830-poisson-disk/ /x/1848-polygon-boolean/ /x/1843-polygon-editor/ /articles/polygon-fill/ /articles/polygon-map-generation/ /x/1923-polygon-offsets/ /articles/polygon-partitioning/ /x/1721-polygons/ /articles/procedural-cave/ /x/1843-procedural-city/ /x/1843-procedural-creatures/ /articles/procedural-generation/ /x/1843-procedural-paths/ /x/1843-procedural-planet-gen/ /x/1723-procedural-rivers/ /articles/procedural-road-generation/ /articles/procgen-cave/ /maps/procgen/ /pathfinding/puzzle/ /x/1728-quadtree-visualization/ /x/1840-quadtree-visualization/ /x/1848-quadtree/ /articles/quadtrees/ /grids/quadtrees/ /x/1729-quadtrees/ /x/1843-ray-optics/ /x/1642-reaction-diffusion/ /x/1724-reaction-diffusion/ /x/1726-reaction-diffusion/ /x/1846-reaction-diffusion/ /x/1849-reaction-diffusion/ /x/2022-reaction-diffusion/ /x/reaction-diffusion/ /grids/rectangles/ /x/1844-robot-arm/ /x/1726-rock-paper-scissors/ /x/roguelike-java/ /articles/rooms-and-corridors/ /grids/rotation/ /x/1724-rotations/ /x/1723-rps-simulation/ /x/1720-sand/ /x/2216-sdf-physics/ /x/1922-self-driving-car/ /articles/shunting-yard/ /sliding-puzzle/ /x/2020-smooth-curve/ /pathfinding/smoothing/ /x/1849-soft-body-creatures/ /x/2001-solar-system-generator/ /x/2044-solar-system-generator/ /x/2142-solar-system/ /articles/spatial-indexing/ /x/1724-spherical-voronoi/ /grids/spiral/ /x/1843-spring-damper/ /x/1845-spring-mass/ /x/1843-star-map-generator/ /x/2020-starscape/ /x/1843-stat-scaling/ /articles/steering-behaviors/ /x/1844-straight-skeleton/ /x/1840-svg-vs-png/ /pathfinding/tangent-bug/ /maps/tectonics/ /x/1847-tectonics/ /x/2311-tensor-cores/ /x/1842-terrain-blending/ /x/1845-terrain-generation/ /maps/terrain/ /x/terrain/ /x/1841-tetris-ai/ /x/2312-tetris-art/ 
/x/1725-threading/ /articles/tile-map-logic/ /articles/tilemaps/ /grids/tiles/ /maps/tiles/ /x/1845-town/ /x/1840-triangle-intrusion/ /grids/triangle/ /x/1843-trig/ /x/1846-tsp-genetic/ /pathfinding/tsp/ /x/1723-turing-patterns/ /x/1729-turing-patterns/ /x/1844-turing-patterns/ /x/1847-turing-patterns/ /articles/turn-based-ai/ /x/1846-turn-based-loop/ /x/turtle/ /x/1841-vanishing-points/ /articles/vectors/ /x/1840-vectors/ /x/2022/06/virtual-machines-in-rust/ /x/1707-visibility-2d/ /x/1723-visibility/ /articles/visible-area/ /articles/visualizing-parallax/ /x/1846-volcano/ /x/1847-volumetric-light/ /x/1908-voronoi-3d/ /x/1836-voronoi-circle-growth/ /x/1830-voronoi-diagram/ /x/1841-voronoi-diagram/ /x/1843-voronoi-diagram/ /x/1846-voronoi-diagram/ /x/1847-voronoi-diagram/ /x/1848-voronoi-diagram/ /x/1840-voronoi-map-generator/ /x/1849-voronoi-map-generator/ /x/1721-voronoi-map/ /x/1723-voronoi-map/ /x/1730-voronoi-maps/ /x/1843-voronoi-noise/ /x/1912-voronoi-noise/ /x/1842-voronoi-relaxation/ /x/1843-voronoi-tile/ /x/1846-voronoi-tsp/ /x/1722-voronoi/ /x/1729-voronoi/ /x/1847-voronoi/ /x/2023-voronoi/ /x/voronoi/ /x/1728-voxel-meshes/ /articles/voxel-raycasting/ /x/1720-voxel-terrain/ /articles/voxels/ /grids/voxels/ /x/1840-wang-tiles/ /x/1847-water-simulation/ /x/1844-water/ /x/1843-waterflow/ /x/1844-waterflow/ /x/1728-wave-function-collapse/ /x/1729-wave-function-collapse/ /x/2023-wave-function-collapse/ /articles/wavefunction-collapse/ /x/wavefunction-collapse/ /x/2020-wavefunctioncollapse/ /x/2022-weather/ /articles/weighted-random/ /x/1836-weighted-voronoi/ /x/2023-wildfire-ca/ /maps/worldgen/ /x/1844-wraparound/ /x/1840-quadtrees /x/1843-solar-system-generator /x/1847-sine-wave-art /x/1849-town /x/1941-minimax Most of the URLs it's giving out are plausibly something I could have written. I think it's also interesting to see what it has "learned" about my URL structure. For my x/ pages, I use the first two digits for the year. 
ChatGPT is making up lots of URLs starting with x/18, which means it thinks 2018 was my most productive year. And … maybe it's right. Hm.

This was a fun diversion. My server's error logs tell me that ChatGPT thinks I've written pages I haven't. Maybe I should actually write those pages!
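The year-prefix observation is easy to check mechanically over the collected 404 paths (a small sketch of mine; feed it whatever path list you scraped from the logs):

```javascript
// Tally hallucinated /x/ URLs by the two-digit year prefix of their slug,
// since /x/ page names start with the two-digit year.
function countByYearPrefix(urls) {
    const byYear = new Map();
    for (const url of urls) {
        const m = url.match(/^\/x\/(\d\d)/);
        if (m) byYear.set(m[1], (byYear.get(m[1]) ?? 0) + 1);
    }
    return byYear;
}
```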
October 3, 2025 at 2:19 PM
Let's write a search engine, part 2 of 2
Last time we implemented a search feature that found all matching documents, sorted in some arbitrary order. This time I want to sort them in a more reasonable order. Here are some ideas, some of which depend on the query and others which don't:

- popular pages first
- newer (or older) pages first
- promote (or demote) certain folders manually
- promote (or demote) based on the type of content in the page
- pages that have more occurrences of the query string
- pages where the query matches are spread out (or concentrated)
- pages that have the query string in titles or headings

I'm going to try building this:

What we're trying to build

To keep the scope small, I'm going to stick to substring match and not other query matching algorithms:

- word match (graph pathfinding matches graph for pathfinding and pathfinding graph)
- ordered word match (big dog matches big red dog but not dog was big)
- regexp match (r.*dog matches red dog but not dog was red)
- wildcard match (r* dog matches red dog but not dog was red)
- plurals / stemming (hexagon matches hexagons and hexagonal)
- stop words (the algorithm matches an algorithm)
- spelling (dikjstra matches dijkstra)
- synonym match (mapgen matches map generator)
- case match (OPEN set matches OPEN set but not open set)
- punctuation match (can't matches can't but not cant)
- parent category match (#mapgen matches #mapgen4)

Note that sometimes matching should be asymmetric, because the way we write things in queries is not the same way we write things in documents:

- open set should match both OPEN set and open set, but OPEN set should match only OPEN set
- cant should match both can't and cant, but can't should match only can't
- mapgen should match both #mapgen and #mapgen4, but #mapgen4 should match only #mapgen4

There's a whole lot to explore with matching algorithms, but I want to finish this project in a weekend, so I will stick to case-insensitive substring match. For some experiments I need access to headings, so I added headings to the search data.
Some of my headings are title="heading" attributes and some are <h2>heading</h2> tags, so I extracted both:

```shell
fd --extension bxml | while read -r file; do
    echo "%%%% $file %%%%"
    perl -ne 'print "HEADER=$1\n" if /title="(.*?)"/' "$file"
    perl -ne 'print "HEADER=$2\n" if /<h(\d)>(.*?)<\/h\1>/' "$file"
    cat "$file" \
        | pandoc --read html --write plain --wrap none \
        | grep '[a-zA-Z0-9]'
done >all-pages.txt
```

I'm using regexps instead of xpath or xmllint to extract from my xhtml files, so it misses some things and improperly formats others. That's ok for now. The main goal is to try out ranking, and then I can go back and improve the extraction if I want to pursue this project.

I then refactored the code into smaller functions to make experimentation easier. I also improved the formatting and made the search results clickable (opening in a new window).

```javascript
function escapeHTML(text) { // wish this was built in
    const map = {'&': '&amp;', '<': '&lt;', '>': '&gt;',
                 '"': '&quot;', "'": '&#39;'};
    return text.replace(/[&<>"']/g, m => map[m]);
}

function parseIntoDocuments(text) {
    // results[0] will be "", then [1] is a filename, [2] is text, alternating
    const results = text.split(/%%%% (.*?) %%%%$/gm);
    if (results[0] !== "") throw `expected first split to be empty ${results[0]}`;
    if (results[1].length > 100) throw "expected second split to be filename";
    let documents = [];
    for (let i = 1; i+1 < results.length; i += 2) {
        let filename = results[i];
        let lines = results[i+1].split("\n");
        lines = lines.map(line => line.trim()); // remove extra whitespace
        lines = lines.filter(line => line); // remove blank lines
        let title = (lines.find((line) => line.startsWith("HEADER=")) ?? "")
            .slice(7) || filename;
        let url = filename.startsWith("redblobgames/")
            ? "https://www.redblobgames.com/" + filename.slice(13)
            : "http://www-cs-students.stanford.edu/~amitp/" + filename;
        url = url.replace(".bxml", ".html");
        url = url.replace("/index.html", "/");
        documents.push({url, title, lines});
    }
    return documents;
}

const words = await (await fetch("./all-pages.txt")).text();
const documents = parseIntoDocuments(words);

function search(query) {
    let results = []; // list of {document, snippet}
    // where document is {url, title, lines}
    // and snippet is a list of {line, matches, isHeader}
    if (query) {
        query = new RegExp(RegExp.escape(query), 'gi');
        for (let document of documents) {
            let snippet = [];
            for (let line of document.lines) {
                let isHeader = false;
                if (line.startsWith("HEADER=")) {
                    line = line.slice(7);
                    isHeader = true;
                }
                let matches = line.matchAll(query).toArray();
                if (matches.length > 0) {
                    snippet.push({line, matches, isHeader});
                }
            }
            if (snippet.length > 0) results.push({document, snippet});
        }
    }
    return results;
}

function formatSnippet(snippet) {
    // TODO: be able to sort snippets before picking the top matches
    const MAX_SNIPPETS = 8;
    let result = "… ";
    for (let {line, matches} of snippet.slice(0, MAX_SNIPPETS)) {
        let pos = 0;
        for (let m of matches) {
            result += escapeHTML(line.slice(pos, m.index))
                + `<b>${escapeHTML(m[0])}</b>`;
            pos = m.index + m[0].length;
        }
        result += `${escapeHTML(line.slice(pos))} … `;
    }
    return result;
}

function formatSearchResults(query, results) {
    return results
        .map(({document, snippet}) =>
            `<a href="${document.url}" target="_blank">${escapeHTML(document.title)}
               <span class="url">${document.url}</span>
               ${formatSnippet(snippet)}</a>`)
        .join("\n");
}

function setUpSearch(id, sortResults) {
    const MAX_DOCUMENTS = 10;
    const el = document.getElementById(id);
    const update = () => {
        let query = el.querySelector("input").value;
        let results = sortResults(search(query));
        results = results.slice(0, MAX_DOCUMENTS);
        el.querySelector(".results").innerHTML = formatSearchResults(query, results);
    };
    el.querySelector("input").addEventListener('input', update);
    update();
}

setUpSearch('search-4', (results) => results);
```

I wanted to test the refactored code before implementing ranking. This shows documents in some arbitrary order, like before:

Search engine 4:

Let's try ranking by the number of matches in the document:

```javascript
setUpSearch('search-5', (results) => {
    function score({document, snippet}) {
        let numberOfMatches = 0;
        for (let {matches} of snippet) {
            numberOfMatches += matches.length;
        }
        return numberOfMatches;
    }
    return results.toSorted((a, b) => score(b) - score(a));
});
```

Search engine 5:

This is certainly better. The top 5 results before were four hexagon articles and then one about colors. The top 5 results now all have lots of hexagon content. Let's try ranking header matches higher than other matches:

```javascript
setUpSearch('search-6', (results) => {
    function score({document, snippet}) {
        let numberOfMatches = 0;
        for (let {matches, isHeader} of snippet) {
            numberOfMatches += matches.length;
            if (isHeader) numberOfMatches += 10; // needs tuning!
        }
        return numberOfMatches;
    }
    return results.toSorted((a, b) => score(b) - score(a));
});
```

Search engine 6:

I think this is even better than before. But it's still mediocre. Should I prioritize matches that occur in a cluster (max matches per block) or matches that are spread out (number of matching blocks)? I don't know! It will take a lot of testing. Ranking takes a lot of work. There's so much more that can be done to make search better.
Some ideas:

- try out the other ranking ideas I listed at the top of the page
- extract documents correctly; there are going to be lots of little details to get right (data scientists know that "data cleaning" is one of the most time consuming parts of a project)
- identify pages that are just linking to other pages (for example, my blog home page), and prefer returning search results from the linked-to pages
- trim snippets at word boundaries centered at the query matches instead of picking the beginning of the line
- split documents into blocks better (for example, and should go together, not separate blocks), or maybe get rid of the block idea entirely
- try the query matching rules I mentioned earlier, to handle words, phrases, ordering, synonyms, plurals, etc.
- set up a test suite of queries and hand-picked best results, so that I can evaluate the different ranking algorithms and tuning parameters
- change ranking philosophy from the set of best pages to the best set of pages
- use word2vec or a vector db (from the LLM world) to implement fuzzy matching or group related pages together
- generate a good snippet instead of the first N matches

Tangent: I think the description of search results is underrated. Early web search engines used a static description provided by the author of the page ("meta tag" description) or by a curator (Yahoo / DMoz directories), or generated a description per page. Google generated a description by scanning the document to find the query words, so that you could see the keywords in context of the document. Being able to see the matches of your query instead of a generic description made people feel that Google was actually paying attention to the query words. This is a surprisingly hard problem that I don't see much written about. (For many queries, I think this was more important than PageRank™)

Another question is how to scale this up. I decided not to here.
All the words on all my web pages put together are only a few megabytes, smaller than the javascript alone on the NYTimes home page. I can load the entire web site and search over it. But if I had many more pages, I might want an inverted index, which can tell us which documents contain a word. See Let's Build a Full Text Search Engine. If I wanted to index pages written by other people, I might want ranking functions that were more resistant to spamming, such as using links between documents and the anchor text used to describe those links. I'm not planning to implement these right now. I wanted to see how far I could get in a weekend. The search results I'm getting here are decent. If you want to play with it some more, I made a standalone version that uses the full page to show search results. This was a fun project!
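The inverted index idea can be made concrete with a small sketch. This is illustrative only, not code from the site: `tokenize` here is a simplistic stand-in (lowercase, split on non-word characters), and a real index would also store match positions for ranking and snippets.

```javascript
// Sketch of an inverted index: maps each word to the set of document ids
// that contain it, so a query word can skip documents entirely.
function tokenize(text) {
    return text.toLowerCase().split(/\W+/).filter(w => w);
}

function buildInvertedIndex(documents) {
    // documents: array of strings; the index maps word -> Set of doc indexes
    const index = new Map();
    documents.forEach((text, docId) => {
        for (const word of tokenize(text)) {
            if (!index.has(word)) index.set(word, new Set());
            index.get(word).add(docId);
        }
    });
    return index;
}

function lookup(index, word) {
    return [...(index.get(word.toLowerCase()) ?? [])];
}

// Tiny made-up corpus for illustration
const docs = [
    "Hexagon grids are fun",
    "A* finds the shortest path",
    "Hexagon tiles and square tiles",
];
const index = buildInvertedIndex(docs);
// lookup(index, "hexagon") -> [0, 2]
```

With a few megabytes of text the linear scan is fine; the index only starts paying for itself when scanning every document per keystroke becomes too slow.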
www.redblobgames.com
September 2, 2025 at 1:30 PM
Let's write a search engine, part 1 of 2
I have a search box on my web site that uses Google to search my site only using the site: operator. Try it — type in hexagon tiles and you'll see that it shows my hexagon tile page. Search my site using Google: I wondered though: what would it take to make my own search feature that worked as you typed? Prototype of search feature I also wondered if there are other things I might want to build with this data, like Simon Willison's "related pages" feature. Or browsing by category. Or browsing by date. The first thing I need to do is extract the text of the pages I want to search, without formatting or images or html. I write my pages in xhtml. A page might contain this xhtml tree: _Graph search_ algorithms let us find the shortest path on a map represented as a graph. One popular algorithm is A* . Let's start with Breadth First Search and work our way up from there. For searching, I'd like to simplify this into a set of lines, each representing one "block" of text: Graph search algorithms let us find the shortest path on a map represented as a graph. One popular algorithm is A*. Let's start with Breadth First Search and work our way up from there. I can then run a tool like grep on these lines. Will this produce good search results? I don't know! I need to try it and see. I started thinking about the extraction algorithm from xhtml to text. Which tags should I consider "block"? I started looking at MDN for a list. I also wondered what to do about situations like this: This is a block of text inside another block of text Should it be This is a block of text inside another block of text or This is a block of text inside another block of text But I realized that trying to figure this out is a distraction. Maybe it'll matter. Maybe it won't. But I don't know yet, and I shouldn't prematurely try to solve this problem. Instead, the most important thing is to investigate the biggest unknown. The biggest unknown in this case: is searching over lines even useful? 
So I instead tried this (using the xhtml input file, which is the page without the header, nav bar, footer, copyright, etc.):

cat pathfinding/a-star/introduction.bxml \
    | pandoc --read html --write plain --wrap=none \
    | less

I looked at the output and decided this is a reasonable starting point! It gives me the text of the page. I can refine it later. The next step is to collect the text from all my pages:

fd --extension bxml | while read -r file; do
    cat "$file" | pandoc --read html --write plain --wrap none
done >all-pages.txt

That's only 1.6MB. That could easily be loaded into a web browser. There are many web pages bigger than that! I made a few changes to my shell script:

- Added a separator with the filename
- Skipped 'noindex' files, as these are often unfinished pages I don't want in a search result
- Skipped lines that don't have text

fd --extension bxml | while read -r file; do
    fgrep '' "$file" \
        || (echo "%%%% $file %%%%"; cat "$file" \
            | pandoc --read html --write plain --wrap none \
            | grep '[a-zA-Z0-9]')
done >all-pages.txt

Great! What's next? Let me load this into a web page:

const words = await (await fetch("./all-pages.txt")).text();
const lines = words.split("\n");

Now I can search over these lines.

function search1(event) {
    let results = [];
    let query = event.currentTarget.value;
    if (query) {
        for (let line of lines) {
            if (line.indexOf(query) >= 0) results.push(line);
            if (results.length > 10) break;
        }
    }
    document.querySelector("#search-results-1").textContent = results.join("\n");
}
document.querySelector("#search-query-1").addEventListener('input', search1);

Type hexagon into this search box:

Search engine 1:

Ok, great, this works and is fast! I don't have to worry about building an inverted index or other optimizations. But it's hard to see the matches.
Let's add:

- highlight the matches
- trim extra whitespace from the document
- case insensitive match

function escapeHTML(text) { // wish this was built in
    const map = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' };
    return text.replace(/[&<>"']/g, m => map[m]);
}

function search2(event) {
    let results = [];
    let query = event.currentTarget.value;
    if (query) {
        query = new RegExp(RegExp.escape(query), 'gi');
        for (let line of lines.map(line => line.trim())) {
            let matches = line.matchAll(query).toArray();
            if (matches.length > 0) {
                let result = "";
                let pos = 0;
                for (let m of matches) {
                    result += escapeHTML(line.slice(pos, m.index)) + "<b>" + m[0] + "</b>";
                    pos = m.index + m[0].length;
                }
                result += escapeHTML(line.slice(pos));
                results.push(result);
            }
            if (results.length > 10) break;
        }
    }
    document.querySelector("#search-results-2").innerHTML = results.join("\n");
}
document.querySelector("#search-query-2").addEventListener('input', search2);

Type hexagon into this search box:

Search engine 2:

Hey, that's looking pretty good. We're matching lines. But I want to match documents. I had inserted document separators %%%% into the text file. Let's use that now.

function parseIntoDocuments(text) {
    // results[0] will be "", then [1] is a url, [2] is text, with url/text alternating
    const results = text.split(/%%%% (.*?)
%%%%$/gm);
    if (results[0] !== "") throw `expected first split to be empty ${results[0]}`;
    if (results[1].length > 100) throw "expected second split to be url";
    let documents = [];
    for (let i = 1; i+1 < results.length; i += 2) {
        let url = results[i];
        let lines = results[i+1].split("\n");
        lines = lines.map(line => line.trim()); // remove extra whitespace
        lines = lines.filter(line => line); // remove blank lines
        documents.push({url, lines});
    }
    return documents;
}
const documents = parseIntoDocuments(words);

function search3(event) {
    const MAX_DOCUMENTS = 5;
    const SNIPPET_LENGTH = 3;
    let results = []; // list of documents
    let query = event.currentTarget.value;
    if (query) {
        query = new RegExp(RegExp.escape(query), 'gi');
        for (let document of documents) {
            let snippet = []; // list of lines
            for (let line of document.lines) {
                let matches = line.matchAll(query).toArray();
                if (matches.length > 0) {
                    let result = " … ";
                    let pos = 0;
                    for (let m of matches) {
                        result += escapeHTML(line.slice(pos, m.index)) + `<b>${m[0]}</b>`;
                        pos = m.index + m[0].length;
                    }
                    result += escapeHTML(line.slice(pos));
                    snippet.push(result);
                }
                if (snippet.length > SNIPPET_LENGTH) break;
            }
            if (snippet.length > 0) results.push({document, snippet});
            if (results.length > MAX_DOCUMENTS) break;
        }
    }
    let html = results
        .map(({document, snippet}) => `<i><a href="${document.url}">${document.url}</a></i>\n${snippet} …`)
        .join("\n");
    document.querySelector("#search-results-3").innerHTML = html;
}
document.querySelector("#search-query-3").addEventListener('input', search3);

Type hexagon into this search box:

Search engine 3:

Great! We're now grouping matches into documents, and limiting how many show up. But we're showing the first N matching documents. This is like Ctrl + F over all documents. A "search engine" should show the best documents instead of the first in alphabetical order. Let's attempt that in the next post.
www.redblobgames.com
September 1, 2025 at 1:27 PM
Hexagon conversions
My hexagon guide has many conversion routines — axial to cube, cube to offset, hex to pixel, etc. Sometimes these steps can be combined into larger steps or separated into smaller steps. There's a balancing act between:

- give the reader the final answer
- give the reader the ingredients so they can adapt them as needed

My original goal was to provide code for these 22 conversions: However, I didn't know all the conversion formulas, and ended up providing only these 16: So that means if you want to go in reverse from pixel to odd_r, you build the chain pixel → axial → cube → odd_r. Not great. Things got more complicated when I added doubled_h, doubled_v. And I want to add several spiral systems: Last week I wanted to add conversions for non-uniform pixel sizes. That was adding far more edges to this graph. I simplified by splitting the pixel conversion into multiple steps: The problem is that … I think most readers want a formula that solves their problem. Breaking things into steps makes it easier for me, and I think it's better for understanding what's going on, but it's less convenient for the reader. I took the single step conversion from axial to pixel:
Before: single step axial to pixel
And split it into axial to cartesian and then scaling the cartesian:
After: separate scaling step
Why? Because it allows scaling non-uniformly, to match a desired pixel art size:
Non-uniform scaling
If we inline the calculations we end up with this:
Non-uniform scaling
It's nice. There's no more sqrt(3) there! By keeping the steps separate, it allows for adapting to more situations:

- translate transform to get a non-zero origin
- scale transform to get non-regular hexagon sizes
- rotation transform to get angles other than "pointy" and "flat"
- isometric view, combining a shear and a rotation

I think it's easier to understand inverting the process from hex-to-pixel to pixel-to-hex when the steps are separate.
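The "separate the cartesian step from the scaling step" idea can be sketched like this. This is a hedged illustration, not the guide's exact code: it assumes pointy-top axial coordinates, and `tileWidth`/`tileHeight` are hypothetical names for the art asset size (for regular hexagons, tileWidth = sqrt(3)/2 × tileHeight).

```javascript
// Step 1: axial -> unit cartesian, with no sqrt(3) anywhere.
// One column to the right is +1 in x; one row down is +1/2 in x, +3/4 in y.
function axialToCartesian(q, r) {
    return {x: q + r / 2, y: 0.75 * r};
}

// Step 2: scale by the (possibly non-uniform) pixel art size.
function scaleToPixel(pt, tileWidth, tileHeight) {
    return {x: pt.x * tileWidth, y: pt.y * tileHeight};
}

// Combined conversion, built from the two steps.
function axialToPixel(q, r, tileWidth, tileHeight) {
    return scaleToPixel(axialToCartesian(q, r), tileWidth, tileHeight);
}
// axialToPixel(1, 2, 32, 32) -> {x: 64, y: 48}
```

Because the scaling is its own step, swapping it for a translate, rotate, or shear transform changes only that one function.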
I have mixed feelings about this change but I made it in part because I wanted to show how to adjust the conversions to match the size of art assets. You can see the new code in the hex-to-pixel and pixel-to-hex sections of the hexagons guide. I've added a section where you can enter the pixel art asset size, and I output the conversion routine. Maybe I can extend that interactive code generator to work for all the coordinate systems. Let me know what you think. I might change it back based on feedback. While working on this section, I realized I want to add more direct support for doubled coordinates. It probably makes more sense to go from offset coordinates to doubled to pixel than offset to cube to axial to pixel. But that will wait for another day. I also realized that the main page has pixel-to-hex that returns a rounded value, but the implementation guide has a pixel-to-hex that returns a fractional value, and you have to round it yourself. That's because the fractional value is sometimes useful. I updated the implementation guide to provide both.
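The fractional vs rounded distinction can be sketched in code. This is a hedged reconstruction, not the guide's exact implementation: it assumes pointy-top hexes, uses the same hypothetical `tileWidth`/`tileHeight` asset-size parameters as the scaling discussion, and the rounding is the standard cube-rounding trick.

```javascript
// Pixel -> fractional axial coordinates (inverse of the axial-to-pixel scaling).
function pixelToAxialFractional(x, y, tileWidth, tileHeight) {
    const r = y / (0.75 * tileHeight);
    const q = x / tileWidth - r / 2;
    return {q, r};
}

// Standard cube rounding: round all three cube coordinates, then fix the
// one with the largest rounding error so that q + r + s = 0 again.
function axialRound({q, r}) {
    const s = -q - r;
    let qi = Math.round(q), ri = Math.round(r), si = Math.round(s);
    const qDiff = Math.abs(qi - q), rDiff = Math.abs(ri - r), sDiff = Math.abs(si - s);
    if (qDiff > rDiff && qDiff > sDiff) qi = -ri - si;
    else if (rDiff > sDiff) ri = -qi - si;
    return {q: qi, r: ri};
}

// The rounded version most readers want, built from the fractional one.
function pixelToAxial(x, y, tileWidth, tileHeight) {
    return axialRound(pixelToAxialFractional(x, y, tileWidth, tileHeight));
}
```

Keeping the fractional function separate is what makes it reusable, for example for line drawing, where you interpolate fractional hexes and round each sample.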
www.redblobgames.com
June 11, 2025 at 1:00 PM
Mapgen4 trade routes
Last time I talked about de-optimizing mapgen4 so that I could more easily create experiments. I wanted to share one of those experiments. I recently assembled some of my existing ingredients into something that was just plain fun to watch, and I wanted to share this simulation of traders moving around a map:
Trader simulation
Conceptually, each dot on this simulation represents a trader following a path between two random points. The first ingredient is a pathfinding algorithm such as A*. Pathfinding works on a graph. So the second ingredient is a graph:
Graph of regions
I'm using a Delaunay + Voronoi dual graph. Here's a random path on this graph:
Path on the region graph
Each trader follows a path. But I decided it'd be more interesting to animate the points on a graph of edges instead of the regions. Here's a graph between the edges of the original map. Each node in this pathfinding graph is an edge in the original graph, and each edge in this pathfinding graph is between two edges in the original graph:
Graph of edges
That's quite a bit more dense! Here's the path on the new graph — it looks smoother:
Path on the edge graph
The choice of graph is something to keep in mind. There may be many choices of pathfinding graph for a given map. I usually start by making the pathfinding graph the same as the underlying game map, but there are many more choices, and lots of tradeoffs to consider. In this case I chose the graph between edges because it let me create multiple "lanes" of traffic. I separated the traffic going one direction from the traffic going the other direction. That made the visuals nicer. I can run pathfinding on a graph, but I actually want to create lots of paths. A* finds one path at a time. There are algorithms that can find multiple paths at once. This is another thing to keep in mind. Sometimes we're so focused on a small part of the problem that we miss out on simpler or faster solutions that solve the larger problem.
There are several classes of pathfinding algorithms:

source → destination   algorithm class                algorithms
one → one              single pair shortest path      A*
one → all, all → one   single source shortest paths   Dijkstra's, Bellman-Ford
all → all              all pairs shortest path        Floyd-Warshall, Johnson's

It could be useful to precompute all the paths here, using Floyd-Warshall. Then all the animated points would move along the already-computed paths. In my previous experiments I found that Johnson's Algorithm was faster than Floyd-Warshall. Johnson's Algorithm runs Dijkstra's Algorithm for each source location and then stores all the outputs.

for each location:
    run dijkstra's from source location
create lots of traders:
    follow a path between a random source and a random destination

Because the map is changing as the user paints on it, I wanted to avoid an expensive precomputation step. I decided to implement something slightly different. Instead of running Dijkstra's from all locations, then creating all the traders, I ran Dijkstra to one location, created the traders from that location, then ran Dijkstra to another location, created the traders there, etc.:

for each location:
    run dijkstra's from source location
    create lots of traders:
        follow a path between the known source and a random destination

I run Dijkstra's Algorithm to build flow fields for pathfinding:
Flow field
It was fun putting together ingredients into something new:

- flow field pathfinding - Dijkstra's Algorithm
- Voronoi/Delaunay graphs
- Mapgen2's renderer
- Mapgen4's generator + painting

Try out experiment 2515 — simulation of traders in an editable procedurally generated map.
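The per-destination Dijkstra pass can be sketched as a flow field. This is not mapgen4's code, just a minimal illustration on a tiny weighted graph; it assumes symmetric edge costs, and uses a sorted array where a real version would use a priority queue.

```javascript
// Build a flow field: run Dijkstra's algorithm outward from one destination,
// recording for each node the neighbor that steps toward that destination.
function flowFieldTo(destination, neighbors) {
    // neighbors(node) -> array of {node, cost}; costs assumed symmetric
    const dist = new Map([[destination, 0]]);
    const next = new Map(); // next step from each node toward the destination
    const frontier = [{node: destination, cost: 0}];
    while (frontier.length > 0) {
        frontier.sort((a, b) => a.cost - b.cost); // a real version uses a heap
        const current = frontier.shift();
        if (current.cost > dist.get(current.node)) continue; // stale entry
        for (const {node, cost} of neighbors(current.node)) {
            const newCost = current.cost + cost;
            if (!dist.has(node) || newCost < dist.get(node)) {
                dist.set(node, newCost);
                next.set(node, current.node); // step toward the destination
                frontier.push({node, cost: newCost});
            }
        }
    }
    return {dist, next};
}

// Tiny example graph: A-B-C in a line, plus an expensive A-C shortcut
const graph = {
    A: [{node: 'B', cost: 1}, {node: 'C', cost: 5}],
    B: [{node: 'A', cost: 1}, {node: 'C', cost: 1}],
    C: [{node: 'B', cost: 1}, {node: 'A', cost: 5}],
};
const {dist, next} = flowFieldTo('C', n => graph[n]);
// A trader at A follows next.get('A') -> 'B', then next.get('B') -> 'C'
```

Once the field is built, every trader headed to that destination just chases `next` pointers, which is why one Dijkstra run per destination can serve many traders.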
www.redblobgames.com
May 10, 2025 at 1:05 PM
De-optimizing mapgen4
Lately I've been experimenting with map algorithms. I have three starting points:

Generator   Elevation        Biomes           Rivers   Editable?
mapgen1     noise            noise            down     no
mapgen2     distance field   distance field   down     no
mapgen4     noise            simulation       up       yes

(What happened to mapgen3? It was a failure. I had tried to change the programming language, data structures, and algorithms at the same time. I had better luck changing one thing at a time.)

I'd prefer to use mapgen4 for my experiments. But it's hard. Mapgen1 and mapgen2 were non-interactive. They had to run at a reasonable speed, but just once. My goal was to have it take less than 1 second. In mapgen4 I wanted every edit to re-run the entire simulation from scratch. I re-calculated the elevation, evaporation, wind, rainfall, biomes, and river flow, and then rendered the whole thing, ideally in 30ms. But the simulation was far more expensive than what I was using in mapgen1 and mapgen2. It took 30 seconds. I needed to speed it up by a factor of 1000. It took a few months in 2018, but I did it. I'm proud of that. To make the code run faster, I:

- Precomputed as much as I could (triangle mesh, polygon vertices, simplex noise, mountain positions, …)
- Made assumptions about what algorithms ran, in what order, and what data would not change.
- Rendered everything on the GPU instead of on the CPU.
- Precomputed textures encoding data.
- Used multiple threads with message passing (no shared memory or locking back then).
- Reduced the resolution of some of the data so that I would have less data to process.
- Cached results of some algorithms.
- Used fixed-sized arrays of numbers instead of variable-sized arrays of objects.

But all of this made the code hard to work with. I had optimized it to do only the things I needed in mapgen4. It's hard to change anything, especially at run time, with a slider. And I grew to hate working with that code.
Back in 2019 I posted to twitter: When I'm working with simple but inefficient code, I can move from design A to B to C easily. When I optimize the code, I'm often pushing myself into a local minima. It's not only harder to change designs, I often can't even see the other designs anymore. Mapgen4 is stuck. This also happens to me with abstractions. When I use abstractions early, I push myself into local minima. Harder to change, also harder to see other possibilities. Last year I had to de-abstract the code on my A* page before I could make progress https://simblob.blogspot.com/2020/04/graph-search-diagrams-and-reusable-code.html The next time I'm working on a map generation project, it will be hard to start from mapgen4 because it's so optimized into its niche. I hate the code. I'm going to have to de-optimize it first before I can move around in generator design space. Then re-optimize it at the end. It's been many years now, and I'm ready to revisit the code. I wanted to experiment with some map algorithms, and tried using mapgen4 as the base. But it was a pain. Drawing a bezier curve or text in webgl is so much more work than drawing it in 2D Canvas. So I thought: what if I drew to a 2D Canvas, and then "draped" that over the mapgen4 map? I tried that back in January. It worked. Mostly. I don't have the map data in the main thread; it's only in the worker thread. So I drew to the 2D Canvas in the worker thread, using OffscreenCanvas, a feature that wasn't available back in 2017 when I started mapgen4, but is available as of 2023. It was a little awkward. And sometimes glitchy. And inconsistent across browsers. It's good enough for my own experiments but not as good for things I want to share. So when I wanted to try an actual experiment, I went back to mapgen2. It's reliable. I tried an experiment to build towns and roads on the map. It was so much easier to work with mapgen2 code than mapgen4 code. But it doesn't let me paint on the map. 
And the generation algorithm is designed for Realm of the Mad God, not for other projects of mine. So I decided to start de-optimizing mapgen4. Mapgen2 and mapgen4 share some "DNA". They're both built on the same underlying data structures and libraries, but they use different map generation and rendering code. So I tried gluing the mapgen2 renderer onto the mapgen4 generator, and … it worked! For now, I left in the multithreaded part of mapgen4, but I removed all the webgl rendering code and replaced it with mapgen2's software renderer. There's plenty more de-optimization I could do, but that'll be another day. So try it out! It's a mapgen2 + mapgen4 hybrid. You can draw on the map like mapgen4 but it will draw in the style of mapgen2. It's not optimized to the extent mapgen4 was, but I think it'll be a good place for me to try some map experiments.
www.redblobgames.com
April 23, 2025 at 12:59 PM
Hexagon spiral coordinates
My guide to hexagonal grids is one of the most popular pages on my site. I keep a list of things I want to add to that page. One of them has been spiral coordinate systems. I had thought I would wait until I actually use them in a real project, so that I would have real world experience with the thing I'm writing about. I'm afraid of writing about things I'm unsure about, or information that's incomplete. But I haven't used them yet. I decided to stop waiting. Spiral coordinates on a hexagonal grid I decided to make some diagrams showing the parts I do understand. This also led me to try to understand more. I gave some unoptimized sample code. Part of me wanted to wait until I have the best sample code to present. But … I've already been waiting for so many years. I decided to publish the unoptimized code for now. There are also lots of variants I could have covered: 0-based vs 1-based, outside-in vs inside-out, ring-based vs path-based, uniform direction vs alternating direction, single spiral vs recursive spirals, and probably more. But if I wait until I have figured out all of these, it will take even longer. So I published just one variant for now. Take a look at the new section. Please let me know what you'd like to see changed or improved! While working on this, I also added an accidental discovery: running atan2(r,q) on axial coordinates instead of atan2(y,x) on cartesian coordinates produces something angle-like that can be used for sorting by angle, while being slightly cheaper than actually calculating the angles: Something angle-like that can be used for sorting by angle That's not a topic I wanted for the main page, so I put it on a separate page.
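The published variant (0-based, inside-out, ring-based) can be sketched like this. This is a hedged reconstruction of the standard ring/spiral algorithm in axial coordinates, not the page's exact code; the direction order and starting corner are arbitrary conventions, one of the many variants mentioned above.

```javascript
// The six axial direction vectors, in one common ordering.
const DIRECTIONS = [
    [1, 0], [1, -1], [0, -1], [-1, 0], [-1, 1], [0, 1],
];

// All hexes at exactly `radius` steps from `center`, walking the ring:
// start at one corner, then take `radius` steps along each of the 6 sides.
function ring(center, radius) {
    if (radius === 0) return [center];
    const results = [];
    let q = center[0] + DIRECTIONS[4][0] * radius;
    let r = center[1] + DIRECTIONS[4][1] * radius;
    for (let side = 0; side < 6; side++) {
        for (let step = 0; step < radius; step++) {
            results.push([q, r]);
            q += DIRECTIONS[side][0];
            r += DIRECTIONS[side][1];
        }
    }
    return results;
}

// Spiral = concatenated rings, inside out; index 0 is the center.
function spiral(center, maxRadius) {
    const results = [];
    for (let radius = 0; radius <= maxRadius; radius++) {
        results.push(...ring(center, radius));
    }
    return results;
}
// spiral([0, 0], 2) visits 1 + 6 + 12 = 19 hexes
```

The spiral index of a hex is then its position in this array, which is what makes the "ring-based" numbering easy to define even before optimizing the index↔hex conversions.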
www.redblobgames.com
March 13, 2025 at 4:47 PM
Thoughts on Flash
I know a lot of people hated the Flash Player web plugin, but I found it to be quite useful for my experiments. It gave me vector graphics in the browser so that I could share demos without asking people to download an executable from me. And it ran long before SVG / HTML5 was widely available in browsers. I had been porting some of my old Flash code to Javascript, but that takes time that I could be instead spending on new projects. So I'm glad to see that the Ruffle Flash emulator has made so much progress on ActionScript 3:
Ruffle support for ActionScript 3 from their site
Some of the things I resurrected:

- My "polygon map generator" demo that accompanied the article of the same name
- Realm of the Mad God experiments - mapgen1, mapgen2, dungeons
- Blinking faces using the Oryx tileset
- An old level editor for Realm of the Mad God, which I used in a reddit AMA five years later
- "There is no secret sheep level", an experiment with isometric graphics and field of view, which led me to this experiment seven years later
- Two road drawing experiments, one for intersections and one for bezier vs circular curves, which led me to write this article nine years later
- Corridor map generation for when you're inside the belly of a giant beast
- Radial space station map generation
- Spaceship editor that figures out how to fly from your spaceship designs

Back when I initially got interested in making interactive tutorials (2007), HTML5 wasn't around. Java applets and Flash applets were the best choices to run in a web browser, and I found Java was the respectable but slow/clunky choice, whereas Flash was the fast/lightweight choice, but didn't get any respect. ActionScript 3 was a decent programming language. Think of it like TypeScript + JSX but ten years ahead of its time, and based on the ECMAscript standard. It had type checking, classes, modules, etc.
The Flash graphics system offered 2D vector graphics, 2D bitmap graphics, and 3D graphics, and ways to combine all three in a fine-grained way. That's something I can't easily do in HTML5. Many of the interactive parts of my pages, including the ones about pathfinding, hexagons, and procedural map generation, have their origins in experiments I did in Flash. I was quite glad that Ruffle made some of these work again. But while looking at the Polygon Map Generator article, I realized I haven't updated it since 2010. It has lots of references to the ActionScript 3 source code. I think ActionScript was nice — but Flash is dead, so nobody's using ActionScript anymore. I decided to remove specific references to ActionScript code, and instead point to either descriptions of the algorithms or JavaScript/TypeScript code. I also took the opportunity to update some of the text based on what I've learned since then. A big one is that the article was meant to describe what I did and not what you should do, but I didn't convey that well. I made specific decisions based on the game design, and those decisions may not be right for another project. In each section where I made such a decision, I added alternative decisions that I've used or seen in other projects. I used to think of my pages as something I wrote once and then published. I'm trying instead to think of them as living documents that I update as I find better ways of explaining things. Updating the Flash parts of my site led me to revisit and update some of my older pages.
www.redblobgames.com
February 4, 2025 at 4:41 PM
What I did in 2024
It's time for my annual self review. In last year's review I said I wanted to improve my site:

- fix broken links
- organize with tags
- improve search
- post to my site instead of to social media
- move project tracking to my own site

I didn't have any specific goals for writing articles or topics to learn. So what did I do? The biggest thing is that I'm blogging more than in recent years:
Number of blog posts per year (plot from Observable Plot)

Site management

Changes to Twitter and Reddit in recent years have made me think about how I share knowledge. When I share something, I want it to be readable by everyone, forever. I don't want it to be readable only to "members" or "subscribers", like Quora or Medium. I had posted to some of these sites because they were open. But they're sometimes closed now, requiring a login to view what I posted. My web site has been up for 30 years. The Lindy Effect suggests that what I post to my own site will last longer than what I post to Google+, FriendFeed, MySpace, Reddit, or Twitter. I don't expect Mastodon, Threads, or Bluesky to be up forever either. The article Don't Build Your Castle in Other People's Kingdoms recommends I focus on my own site. But while my own site is easy to post to, my blog hosted by Blogger is not. I want to make blogging easier for me. I looked at my options for blogging software, and concluded that my web site already supports many of the things I need for a blog. So I decided to write my own blogging software. How hard could it be? Famous last words, right? It's foolish in the same way as "write a game, not a game engine". But it actually went pretty well! I only had to support the features needed for my own blog, not for everyone's blogs. I didn't need it to scale. I could reuse the existing features I have built for my web site. There are still some features I want to add, but I think I got 80% of what I wanted in <200 lines of Python.
I made it easier to post to my blog, and I posted a lot more this year than in the previous few years. I'm happy about this.

New pages

I sometimes pair a "theory" page with an "implementation" page. The A* theory page describes the algorithms and the A* implementation page describes how to implement them. The Hexagons theory page describes the math and algorithms and the Hexagons implementation page describes how to implement them. Last year, I studied mouse+touch drag events in the browser and then wrote up a theory page with my recommendations for how to handle the browser events. I claimed that the way I structured the code led to a lot of flexibility in how to handle UI state. This year I made an implementation page with lots of runnable examples showing that flexibility. I show basic dragging, constraints, snapping, svg vs div vs canvas, handles, scrubbable numbers, drawing strokes, painting areas, sharing state, resizing, and Vue components. I show the code for each example, and also link to a runnable CodePen and JSFiddle.
Concepts and implementation pages
I'm very happy with that page, and I wrote a blog post about it. I also wanted to write a reference page about Bresenham's Line Drawing Algorithm. This page failed. I had started in 2023 with an interactive page that lets you run different implementations of the algorithm, to see how they don't match up. But I realized this year that my motivation for writing that page was anger, not curiosity. My goal was to show that all the implementations were a mess. But anger isn't a good motivator for me. I don't end up with a good result. I put the project on hold to let my anger dissipate. Then I started over, wanting to learn it out of curiosity. I re-read the original paper. I read lots of implementations. I took out my interactive visualizations of brokenness. I changed my focus to the properties I might want in a line drawing algorithm. But I lost motivation again. I asked myself: why am I doing this?
and I didn't have a good answer. There are so many things I want to explore, and this topic doesn't feel like it's that interesting in the grand scheme of things. So I put it on hold again.

Updates to pages

I treat my main site like a personal wiki. I publish new pages and also improve old pages. I treat my blog differently. I post new pages, but almost never update the existing posts. This year on the main site I made many small updates:

- Wrote up what I currently understand about "flow field" pathfinding
- Rewrote parts of a page about differential heuristics, but I'm still quite unhappy and thinking about more rewrites
- Simplified the implementation of animations in the hexagon guide, when switching from pointy-top to flat-top and back
- Added more animation modes to my animated mapgen4. This is a fun page you can just stare at for a while.
- Fixed a long-standing bug in A* diagrams: a reader alerted me to mouse positions not quite lining up with tiles, and I discovered that functions like getBoundingClientRect() include the border and padding of an element
- Added a demo of combining distance fields to my page about multiple start points for pathfinding
- Updated my two tutorials on how to make interactive tutorials (1 and 2) to be more consistent, point to each other, and say why you might want one or the other
- Updated my "hello world" opengl+emscripten code with font rendering and other fixes
- Continued working on version 3 of my dual-mesh library. I don't plan to make it a standalone project on GitHub until I have used it in a new project, but you can browse the copy of the library inside mapgen4.
- Made my hexagon guide printable and also savable for offline use using the browser's "Save As" feature
- Improved typography across my site, including some features that Safari and Firefox support but Chrome still doesn't
- Reduced my use of CDNs after the polyfill.io supply chain attack. I continue to use CDNs for example code that I expect readers to copy/paste.
- Switched from yarn to pnpm. I liked yarn 1 but never followed it to yarn 2 or yarn 3, and decided it was time to move away from it.
- Made my pages internally linkable, so you can link to a specific section instead of the whole page.
- Used Ruffle's Flash emulator to restore some of the Flash diagrams and demos on my site. When I tried it a few years ago, it couldn't handle most of my swf files, but now it does, hooray!

I didn't remember all of these. I looked through my blog, my notes, and version control history. Here's the git command to go through all my project folders and print out commits from 2024:

for git in $(find . -name .git)
do
    dir=$(dirname "$git")
    cd "$dir"
    echo ___ "$dir"
    git --no-pager log --since=2024-01-01 --pretty=format:"%as %s%d%n"
    cd - >/dev/null
done

Learning

Curved and stretched map labels

I decided that I should be focusing more on learning new things for myself, instead of learning things to write a tutorial. The main theme this year was maps:

- I made a list of topics related to labels on maps. These were all potential projects. I ended up spending a lot of time on basic font rendering. What a rabbit hole! Most of the blog posts in 2024 are about font rendering.
- I did some small projects using square, triangle, and hexagon tiles.
- I experimented with generating map features and integrating them into an existing map. For example, instead of generating a map and detecting peninsulas, I might want to say "there will be a peninsula here" so that I can guarantee that one exists, and what size it is.
- I tried my hand at gradient descent for solving the parameter dragging problem. In my interactive diagrams, I might have some internal state s that maps into a draggable "handle" on the diagram. We can represent this as a function pos(s₁) returning position p₁. When the handle is moved to a new location p₂, I want to figure out what state s₂ will have pos(s₂) closest to p₂. Gradient descent seems like a reasonable approach to this problem.
However, trying to learn it made me realize it's more complicated than it seems, and my math skills are weak.
- I wanted to create a starter project for rot.js with Kenney tiles. I was hoping to use this for something, but then never did.
- While learning about font rendering, I also got to learn about graphics, antialiasing, sRGB vs linear RGB, gamma correction, and WebGL2. This was a rabbit hole in a rabbit hole in a rabbit hole in a rabbit hole…

But secondarily, I got interested in programming language implementation:

- I'm reading Crafting Interpreters, Bob Nystrom's book about how to write interpreters and compilers. It's been great so far. I haven't done the exercises yet.
- I'm learning more about Web Assembly (wasm). I first got interested in Emscripten in 2011, before wasm or even asm.js. I want to try out some of the new features that became available this year, like garbage collection and tail calls.
- I followed part of Tomas Petricek's programming language course, and did the exercises after learning some F#.
- I watched some of Ian Piumarta's talks (1, 2) and read some of the papers (Open, extensible composition models from 2011, Making COLAs with Pepsi and Coke "a white-paper advocating widespread, unreasonable behaviour" from 2005)

At the beginning of the year I was following my one-week timeboxing strategy. I've found it's good to prevent me from falling into rabbit holes. But my non-work life took priority, and I ended up relaxing my one-week limits for the rest of the year. I also fell into lots of rabbit holes. I am planning to resume timeboxing next year.

Next year I want to continue learning lots of new things for myself instead of learning them for writing tutorials. The main theme for 2025 will probably be text:

- name generators
- large language models
- programming languages
- procedurally generating code

I also want to continue working on maps. It has been six years since I finished mapgen4, and I am starting to collect ideas for new map projects.
I won't do all of these but I have lots to choose from:

- towns, nations, cultures, factions, languages
- roads, trading routes
- farms, oil, gold, ore
- valleys, mountain ranges, lakes, peninsulas, plateaus
- rivers, coral reefs, caves, chasms, fjords, lagoons
- forests, trees, snow, waterfalls, swamps, marshes
- soil and rock types
- groundwater
- atmospheric circulation
- ocean currents
- tectonic plates
- animal and plant types
- named areas
- icons, stylized drawing
- update the graphics code in mapgen4

I don't plan to make a full map generator (but who knows!). Instead, I want to learn techniques and write quick&dirty prototype code. I also plan to continue enhancing my web site structure and build process, including navigation, link checking, project management, bookmarks, more blog features, and maybe sidenotes. Although text and maps are the main themes, I have many more project ideas that I might work on. Happy 2025 everyone!
www.redblobgames.com
January 1, 2025 at 4:40 PM
Hexagon page animations
When I first wrote my hexagon guide in 2013 I used d3.js, which has a nice animation system. I had some trouble with CSS transitions in SVG back then, so I was using Javascript transitions on SVG attributes instead of CSS. This looked something like:

d3.select(".grid")
    .transition()
    .attr("transform", "rotate(30)");

That would rotate the grid from flat-topped to pointy-topped, but the text would be rotated too:

Text rotates with the diagram

The solution was to undo the rotation on the nodes:

d3.select(".grid text")
    .transition()
    .attr("transform", "rotate(-30)");

Text re-rotated to undo the diagram rotation

When I rewrote the hexagon guide in Vue in 2018, I wrote my own animations that set a Javascript value between 0 and 1, and turned that into an angle between 0° and 30°. I then applied that angle to both the grid and the text.

In both of these implementations, I used a Javascript value to change SVG attributes. As an optimization, I limited the animation to diagrams that were visible. Diagrams that weren't visible would "snap" to the final position.

During the rewrite I was hoping to simplify the implementation a little bit. SVG2 added a feature in 2016 specifically to solve the problem of rotated text: vector-effect: non-rotation. Both Caniuse and MDN show it as supported. However, when I tested it back in 2018, and again in 2024, it didn't work anywhere. There's a Chrome bug saying the non-rotation value has been unimplemented since 2017, and a Firefox bug saying they won't implement it unless Chrome does (probably because developers won't use the features Firefox has implemented that Chrome didn't). So I can't use this feature.

I wanted to use CSS transitions back then, but browser features and bugs back then limited what I could do. Since 2018, browsers have adopted lots of new features, and many bugs have been fixed. I decided to try using CSS transitions again. But why CSS transitions?
Partly because it's easier to program, and partly because it should be faster. With Javascript transitions, I start from something like this: … and turn it into this: … During the transition, I have to change all of those numbers every animation frame. As of today, that's 2145 elements ✕ 60 frames/sec ✕ 0.5 sec = 64,350 html updates to run the full animation. With CSS transitions, I can do this instead: … and then change it to … just once, not every frame. And I only have to change the class at the top, not all the individual elements. That means I have 39 html updates to run the full animation … down from 64,350!

First, I needed to make sure CSS transitions would work. I designed a CSS easing function and tested it with an online playground. It worked across the browsers I test on. I implemented this change in small steps and tested each step across browsers:

- Switch from the transform attribute to the transform style, still controlled via Javascript.
- Refactor styles to have the diagram rotate and the text unrotate.
- Switch from inline style to a class, moving the style rules to the global CSS.
- Add the CSS transition rule, transition: transform 0.5s; transition-timing-function: cubic-bezier(0.5, -0.2, 0.5, 1.2);.
- Some elements couldn't use CSS transitions, so I wrote a Javascript approximation to them, along with a test to make sure the CSS and Javascript versions stay in sync.

I did run into one bug, with Safari this time. The animations should all play when I change the CSS class. In Safari, for elements not currently on the screen, it would start the animation when the element is scrolled into view. That's too late. It should have finished by then. To work around this bug, I used IntersectionObserver to trigger the animation only on visible elements.

I'm glad I was able to simplify the implementation. I hope the animation runs smoother on more devices, but on my computers I wasn't able to tell.
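A Javascript approximation of that easing function can be sketched like this. It's a minimal sketch, assuming the standard CSS cubic-bezier definition; the real code also has to apply the eased value to the transform each frame, and the 50-iteration bisection count is arbitrary.

```javascript
// Approximate CSS cubic-bezier(x1, y1, x2, y2) in Javascript:
// a cubic Bezier from (0,0) to (1,1), where x is time and y is progress.
// Invert x(u) = t by bisection (x(u) is monotone when 0 <= x1, x2 <= 1),
// then evaluate y at that parameter.
function cubicBezier(x1, y1, x2, y2) {
  const coord = (a, b) => (u) =>
    3 * a * u * (1 - u) ** 2 + 3 * b * u * u * (1 - u) + u ** 3;
  const bx = coord(x1, x2);
  const by = coord(y1, y2);
  return function ease(t) {
    if (t <= 0) return 0;
    if (t >= 1) return 1;
    let lo = 0, hi = 1;
    for (let i = 0; i < 50; i++) {   // bisection: find u with x(u) = t
      const mid = (lo + hi) / 2;
      if (bx(mid) < t) lo = mid; else hi = mid;
    }
    return by((lo + hi) / 2);
  };
}

// The curve from the transition rule above; y1 = -0.2 dips slightly
// below the start, and y2 = 1.2 overshoots the end before settling.
const ease = cubicBezier(0.5, -0.2, 0.5, 1.2);
```

A sync test like the one mentioned above could then compare ease(t) at sampled times against the positions the CSS transition produces.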
SDF Halos
I had previously posted about drawing outlines around fonts. The goal is to make labels easier to read in situations like these:

Small text on a map

Especially where the text and background are the same lightness, an outline with the inverse lightness can make it more readable:

Small outlined text on a map

These outlines are called "halos" in the cartography world. There are many styles one can use. A solid outline is the simplest thing to implement, so I started with that. Apple Maps and Google Maps use a solid outline too. However, a solid color outline can be distracting, and cartographer Tom Patterson instead uses background blurring with feathering (variable strength) to produce nicer-looking halos. That article made me want to try other styles.

With signed distance fields, the shader can map signed distance to a color. I made a visualization of this mapping. For plain text, everything below 0 is filled and everything above 0 is transparent:

Distance to color for plain text

I implemented outlines by setting the color at distances between 0.0 and 0.1:

Distance to color for outlined text

I also tried adding a separate soft outline (halo) that fades out over some distance:

Distance to color for soft outlines

Let's look a little closer at the output:

Three styles of text rendering: plain, hard outline, hard+soft outline

We don't need outlines in regular UIs because the text is readable by default. Outlines are distracting. But in maps, the variety of backgrounds means the safest thing is to use an outline. Let's look at different areas of the output:

Areas to evaluate in rendered text

In area A the dark lines for the mountaintops make the text harder to read, especially at the bottom of the T and G. In the original image at the top of the page, the beginning of the word PEAKS is especially hard to read. In area B the text is already readable without an outline. The same is true in area F. Area E is readable although it could be better.
In area C the black text is hard to read against a dark ocean background. Area D also has low contrast, especially the letter R. What can we do to improve these? Regular hard outlines are the obvious answer. But I think area B looks better without the outline. And area B may be a common case in some map styles. So I wanted to try some other options for outline styles.

Tom Patterson suggests blurring the background. I tried implementing an "outline" that uses the blurred background color:

Text rendered with a blur around it

I think this helps with area A. We can think of the label as being printed on frosted glass. The downside is that the map detail is lost. And it doesn't help in areas C, D, E. Let's combine this with hard and soft outlines:

Outlined text on blurred background

I still prefer area B without the outline. What would happen if I selectively applied the outlines? The shader can calculate the contrast at each pixel and suppress the halo over areas where contrast is good: B and F. Let's see the halo only:

The halo rendering

Suppressing the halo in some areas gives us this (and it could be tuned up or down):

The halo rendering only when contrast is poor

Here's the result:

Selectively outlined text on blurred background

However, the downside of this style is that a label over different types of areas will look inconsistent. Another thing I wanted to try is improving the outline in area C. It looks much stronger than the outline in area D. I tried tinting it the same color as the background:

Outlining text to match the background color

My experimental code only worked for the light outlines and not for dark outlines. I think I'll have to revisit it if I want to use it in a real project. In terms of implementation, selective outlining, outline tint, and background blur all require me to read the background pixels from the map image. I converted my code to use linear RGB to make these calculations correct.
Of these three features, background blur is the trickiest, because I'm also writing the blurred value out near the labels. If there are overlapping labels, what happens is that I draw the blurred background, draw the first label, draw the blurred background again, and draw the second label. This looks bad. It would be even worse if I didn't combine distance fields as I described in a previous blog post. I didn't try to fix any of this for these experiments. My goal was to see whether selective outlining, outline tint, and background blur help, not to make a production-quality text renderer.

I don't know which of these techniques will look best in a real project, but I now have some options:

- Hard outline
- Soft outline (halo)
- Blurred background
- Selective outline
- Tinted outline

Applying all of them to the example from the top of the page, I think it's an improvement over the regular hard outlining:

Before and after comparison

In games like Magic: The Gathering you spend some of your time collecting cards for your deck and some of your time playing those cards in a game. These font experiments are my "deck building". I'm learning a lot, and I now have more options available. But at some point I want to use this in a real project.
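The distance-to-color mappings described in this post can be sketched as a single function from signed distance to color. This is an illustrative sketch, not the actual shader: the sign convention (negative inside the glyph) and the 0.3 halo limit are assumptions I'm making for the example; the 0.0–0.1 outline band matches the values in the text.

```javascript
// Map a signed distance (negative = inside the glyph, an assumed
// convention here) to an RGBA color. Outline band 0..0.1 as in the
// text; halo fade limit 0.3 is an illustrative value.
const mix = (a, b, t) => a + (b - a) * t;

function shade(d, { fill = [0, 0, 0], outline = [1, 1, 1], halo = [1, 1, 1] } = {}) {
  if (d < 0.0) return [...fill, 1.0];        // interior: plain filled text
  if (d < 0.1) return [...outline, 1.0];     // hard outline band
  if (d < 0.3) {                             // soft halo fades to transparent
    const t = (d - 0.1) / 0.2;
    return [...halo, mix(1.0, 0.0, t)];
  }
  return [0, 0, 0, 0.0];                     // far outside: fully transparent
}
```

Selective outlining then amounts to scaling the outline/halo alpha by a per-pixel contrast estimate of the background, and tinting replaces the outline color with a blurred background sample.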
SDF headless tests, part 2
In the last blog post I wrote about how I wanted to test the parameters of msdfgen, which generates multi-channel signed distance fields for fonts and other shapes. While testing the emrange parameter, I found lots of bugs in my renderer. I also wanted to try out msdfgen's new asymmetric range parameter, aemrange. The distance field goes both inside and outside the font edge, but if I want outlines, I want it to go pretty far outside the font. I made a visualization to show the range:

Contour lines showing the distance field

From this we can see that the interior has far fewer contour lines than the exterior. That's because fonts are relatively thin. If it were a large object like a solid circle, it would have many contour lines on the interior.

The encoding of the distance field maps distances to a value 0.0–1.0, which gets stored in the texture as 0–255. By default, the encoding sets distance=0 to value=0.5, which will be 127.5. Let's look at how much of the font space is used by each encoded value. Blue means I'm using that value for the interior of the font. Green means I'm using it for the exterior of the font. Gray means I'm not using that value.

Symmetric use of 0-255 values within the font atlas image

This histogram tells us two things:

- Of all the pixels in the distance field encoding, most of them are used (blue or green). There's not a lot of waste representing values that I don't need (gray). This is good.
- Of all the 0–255 values in the encoding, around half of them are used. There's a lot of waste on the right side where no pixels use that value. This is bad. Most of the values I care about are on the left half.

MSDFgen version 1.3 adds support for an asymmetric encoding. I switched from -emrange 0.8 (which corresponds to -0.4 to +0.4) to -aemrange -0.7 0.1, keeping the full range at 0.8. Does this improve the quality?

Symmetric vs asymmetric use of range

It did not improve quality! If anything it's slightly worse. What's going on?
Keeping the full range at 0.8 was the problem. I had shifted the range, which extended the contour lines out farther, but didn't make them more accurate:

Asymmetric shifted use of 0-255 values

- Of all the pixels in the distance field encoding, fewer of them than before are being used (blue or green), and a lot more are wasted (gray). This is bad.
- Of all the 0–255 values in the encoding, around half of them are used. There's less waste on the right side but more waste on the left side. This is bad.

I instead want to stretch the range, with -aemrange -0.4 0.1:

Asymmetric stretched use of 0-255 values

Symmetric vs asymmetric use of range

- Of all the pixels in the distance field encoding, most of them are being used (blue or green). This is good.
- Of all the 0–255 values in the encoding, most of them are being used. This is good.

The output does look better. The contour lines don't go out farther; instead they are spaced more closely together, which means more precision. The curves are smoother. Asymmetry helps by throwing away the range that is unused.

I also wanted to check for the opposite problem: that some value is needed but doesn't fit in the 0–255 range. I added code to the shader to detect this:

if (distances.g >= 1.0) gl_FragColor = vec4(1, 0.5, 0, 1);

and here's what it looks like if I cut off too much of the range:

If the range is too asymmetric, the interior distances get clipped

This is an example of a shader "assert" that helps me figure out when something is wrong. What is the optimal aemrange? It took me way too long to realize that I should set the low and high endpoints independently:

- The low endpoint is based on the maximum range I want for the outline, halo, and other special effects.
- The high endpoint is based on the maximum distance value inside the font, as calculated by msdfgen.

Setting the range this way should maximize font rendering quality.
In both cases the range needs to be extended slightly beyond the limit so that the texture interpolation can find an accurate value. But I wasn't so smart. I tweaked repeatedly until I found values that were good enough. Writing this blog post helped me understand what is going on, and I'll be able to choose better next time.
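The symmetric vs stretched encodings can be compared with a small numeric sketch. This is a simplified model of the encoding, not msdfgen's actual code, and it assumes (following the discussion above) that the low endpoint is the exterior/halo limit and the high endpoint the interior limit.

```javascript
// Map distances in [lo, hi] linearly to encoded values in [0, 1],
// which the texture then quantizes to 0..255 levels.
function makeEncoder(lo, hi) {
  return {
    encode: (d) => (d - lo) / (hi - lo),   // distance -> 0..1
    decode: (v) => lo + v * (hi - lo),     // 0..1 -> distance
    step: (hi - lo) / 256,                 // distance covered per 0..255 level
  };
}

const symmetric = makeEncoder(-0.4, 0.4);  // like -emrange 0.8 (distance 0 at 0.5)
const stretched = makeEncoder(-0.4, 0.1);  // like -aemrange -0.4 0.1
```

The stretched range keeps the same exterior reach but spends the 0–255 levels over a 0.5 span instead of 0.8, so each level represents a smaller distance: more precision from the same texture.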
SDF headless tests, part 1
In the last few posts I have shown some of the experiments I did with font rendering. Those experiments were all in the renderer. I'm using msdf-atlas-gen to generate the textures used by the renderer, and I wanted to experiment with msdfgen's parameters. Instead of generating new font data and then reloading the browser, I decided to try "headless" rendering controlled by a shell script.

Testing with different parameters

In my blog post about antialiasing I found that instead of changing a slider, I wanted to see the entire range of outputs at once. I did that here as well. For each parameter value I ran msdfgen and generated an output file. Then I compared all the output files against each other to learn about the effect of each parameter.

Getting my code running in stackgl/headless-gl took a lot of work. My code wasn't designed to work headlessly. But I also had trouble with the installation process (which needs python?!) and poor interactions with twgl.js not liking headless-gl. I'm not sure what to think of this. Is there another headless gl library I should use? Or maybe I can use gl in the browser, with the File API to save the output to files? Or maybe Electron/Tauri? I was frustrated the whole time, and didn't take good notes on what went wrong. I should have stopped, taken a break, and evaluated the situation with a level head. But instead I kept pushing through with tweaks and fixes until I got something running. [Update 2024-11-18: maybe I should look at Puppeteer.]

In any case, I did get something running, and here are some findings. The first thing I wanted to test was the effect of the emrange / pxrange parameter.
for test in 1 2 3 4 5 6 7 8 9
do
    ~/Projects/src/msdf-atlas-gen/bin/msdf-atlas-gen \
        -type mtsdf -emrange 0.$test -dimensions 511 511 \
        -font ~/Library/Fonts/FiraSans-Regular.otf \
        -imageout assets/FiraSans-Regular.png \
        -json assets/FiraSans-Regular.json
    node msdfgen-parameters.js
    cp _output.png output/test-emrange-$test.png
done

emrange from 0.1 to 0.9

There's a lot going on here! Columns 1 and 2 show that the emrange affects how far out I can draw a halo. The other columns show that when I extend emrange to be higher, the font quality degrades. Columns 3 through 6 should look the same in every row (except for font quality). But they don't. The outline thickness changes (see column 4), the font thickness changes (see column 5), and the drop shadow distance changes (see column 6). That means there are bugs in my rendering code. If I hadn't run this test, I might not have found them. I thought about these bugs for a while and realized:

emrange affects the slope

The emrange affects the slope of the mapping from distance to the value in the texture. My shader was using the y value, the encoded sdf. But I want to be using the x value, the distance value. After I understood what I did wrong, I was able to fix the bugs. The outline thickness, font thickness, and drop shadow distance are now consistent:

fix outline and font thickness to handle emrange

While going through these tests, I found and fixed several other bugs, including how I was using msdf+sdf for outlines and how my drop shadow shader worked. I'm glad I worked on this headless rendering test, but I had enough trouble making my code work headlessly that I want to look for an easier approach next time. In the next post I explored the asymmetric version of emrange.
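The fix can be illustrated with a tiny numeric sketch. The linear mapping with distance 0 at value 0.5 is my simplified model of the symmetric encoding, not msdfgen's exact code; the point is that a threshold like "outline ends at distance 0.1" must be applied to decoded distances, because the raw texture value for a given distance depends on emrange.

```javascript
// Symmetric encoding model: value 0.5 is distance 0, and values
// 0..1 span -emrange/2 .. +emrange/2.
const encode = (d, emrange) => d / emrange + 0.5;
const decode = (v, emrange) => (v - 0.5) * emrange;

// The same distance 0.1 encodes to different raw values under
// different ranges...
const vA = encode(0.1, 0.4);   // with emrange 0.4
const vB = encode(0.1, 0.8);   // with emrange 0.8

// ...so a shader that thresholds raw values draws different outline
// thicknesses as emrange changes; decoding first keeps it consistent.
```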
SDF curved text
Over the last few posts I wrote about things I did to improve font quality, such as antialiasing and combining distance fields to merge outlines and halos. But I want to "pop up the stack" a bit and talk about one of the bigger goals for this project. I want to render text in styles that I've seen in maps, both online and offline, both fantasy and real. In particular, I want to apply spacing, rotation, and curvature to the labels.

Sample map with labels

These are common in cartography, not only in fantasy maps like Tolkien's but also in real-world maps. Eduard Imhof's classic 1975 paper, Positioning Names on Maps, has a ton of great advice on how to position labels, and not only recommends curving text, but also sketches out examples:

Clear areal association often requires bending and spreading a name so that it is stretched as much as possible across the horizontal axis of an area.

Figures 40, 41, 42 from Imhof's paper

I've had this paper sitting on my computer desktop since 2011. And I'm sure I was interested in this topic long before then. I will be re-reading it again before writing a label placement algorithm. Both Apple Maps and Google Maps use curved text, as shown in these examples:

Screenshots: curved labels on Apple and Google maps

Fantasy maps use curved text too. Tolkien's Lord of the Rings maps used curved text for areas and rivers. Scott Turner's inspirational blog includes a post, Labeling the Coast part two, in which he shows how he analyzed the coastline to find a suitable curved arc, and also that he liked curved arcs better than more complex paths. There are also posts on Cartographers Guild about when and how to use curved labels.

So I want to implement curved labels. To figure out how to make text flow along a curve, I first sketched it out on paper, then made a standalone interactive widget. There are three cases to handle:

- Positive curvature: the text should be positioned along the baseline.
- Zero curvature: the text is not curved.
- Negative curvature: the text should be positioned along the ascender line.

Three curvature cases: >0, =0, <0

If I always curve at the baseline, the tops of the characters are too close together. That's why I have to curve at the ascender line instead of the baseline when the curvature is negative:

Upwards curvature at baseline makes letters too close together

After calculating the geometry, there are two ways I know of to render the curved text:

- "Curving" or "Wrapping": Position and rotate each letter along a curved path, then render them onto the intermediate combined distance field.
- "Shaping" or "Warping": Draw the letters first onto the intermediate combined distance field, then distort the entire label into a curved shape.

One advantage of warping over wrapping is that it allows for many more effects, such as these:

More shapes to fit the text to

However, whenever the label is stretched non-uniformly, the distance field gets distorted. In this example the white halo on the left side (The) is much larger vertically than horizontally. I could find a way to fix this, but I decided not to pursue it, since I was primarily interested in curved text and not arbitrary warpings:

Trapezoid shows the distance field is non-uniform

Google's guide says wrapping generally works better than warping, but I ended up trying warping first, and decided I liked it well enough. However, in the last post I said that next time I might choose not to use an intermediate combined distance field. Without that step, I think it would be easier to use wrapping than warping. Using distance fields, I can apply the outline, halo, and antialiasing after the curving step. I'm very happy with the way the curved labels turned out.

Sample map with labels
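The geometry for the "wrapping" approach can be sketched like this, for the positive-curvature case (baseline on the arc); the negative case would anchor at the ascender line instead, as described above. The function and its letter widths are hypothetical; a real implementation would take widths from font metrics.

```javascript
// Place letters along a circular arc. The arc's apex is at (cx, cy)
// and the circle center sits at (cx, cy + radius), so the text curves
// upward. Returns a position and rotation (radians) per letter.
function layoutOnArc(widths, radius, cx, cy) {
  const total = widths.reduce((a, b) => a + b, 0);
  let along = -total / 2;                    // arc length, centered on the apex
  return widths.map((w) => {
    const angle = (along + w / 2) / radius;  // angle at the letter's center
    along += w;
    return {
      x: cx + radius * Math.sin(angle),
      y: cy + radius * (1 - Math.cos(angle)),
      rotate: angle,                         // rotate the letter to follow the arc
    };
  });
}
```

Each letter would then be drawn at (x, y) with the given rotation, either directly or into the intermediate combined distance field.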
SDF combining distance fields
Learning about font rendering, I was looking at text closely last time, and I noticed another issue. The shadows of each letter overlap the previous letter. That's because I'm drawing one letter at a time. For example in the fl, I draw the f's letter, outline, and shadow, then I draw the l's letter, outline, and shadow. So the l's shadow is drawn on top of the f's letter.

Shadows of each letter overlap the previous letter

Closeup showing the shadow drawn over the character to the left

The first thing I considered was to draw all the shadows first and then draw all the letters. But a second problem here, harder to see, is that shadows are drawn on top of the adjacent letters' shadows, and that causes them to be darker than they should be. The distance fields for adjacent letters always overlap:

Sprite distance fields always overlap

It's only when the letters are close enough that you can see artifacts of the double rendering. To solve both problems, I can generate a new distance field which is the min() of the distance fields for each character. The min() of signed distance fields is the union of the shapes. I used the blendEquation() function with MAX_EXT instead of the default FUNC_ADD (it's max instead of min because msdfgen's encoding is inverted). The EXT_blend_minmax extension that provides MAX_EXT seems to have 100% support in WebGL 1, and min/max blending is always included with WebGL 2.

Adding an intermediate distance field texture

The output texture holds a combined distance field. I then use the distance field shader to draw that combined distance field to the map. I could do this once per string I want to draw or once for the entire scene. I decided to do it once per string, because that allows me to use different colors and styles (thickness, outline width, shadow, halo) per string.
I ran into a few bugs with this:

- Because I couldn't figure out how to min() msdf fields, I put an sdf in the combined distance field instead of an msdf, and that meant the corners of fonts got a little rounder; to compensate, I increased the resolution of the combined distance field.
- Having a different resolution for the original and combined distance fields messed up the antialiasing; in the previous post I described a "slope" of distance field vs output pixels, but now there's an intermediary with the slope stretched out.
- The letters got out of alignment when I moved them around for the intermediate texture; in particular, I had to calculate a new baseline position accounting for the change in resolution.

One way I compared parameters was by rendering them in alternating frames. Here's an example showing that the combined distance field resolution matters (slightly):

Comparison of 3X vs 5X resolution on the combined distance field

Here's the final result, showing that the overlapped drawing is fixed, especially at the bottom left of the second g:

Before and after combining distance fields

The combined distance field approach solves the problem, but it means I need to write each string to a separate intermediate texture, which leads to a rabbit hole of having to allocate texture space during rendering, possibly reusing textures, and dealing with gpu pipeline stalls. That's an area I don't understand well. Fortunately I don't need it to be optimized for my project. But for a game project, I might choose to do the two pass approach instead, letting the shadows get drawn twice in overlapping areas. Are there better approaches? I don't know.
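The min() combination itself is simple. Here's a CPU-side sketch of the idea; the real version happens on the GPU via blendEquation, and uses max rather than min because of msdfgen's inverted encoding.

```javascript
// Combine several single-channel signed distance fields (same
// dimensions, stored as flat arrays) into one field. The min of
// signed distance fields is the distance field of the union of
// the shapes, which removes the double-rendering artifacts.
function combineFields(fields) {
  const out = Float32Array.from(fields[0]);
  for (const f of fields.slice(1)) {
    for (let i = 0; i < out.length; i++) {
      out[i] = Math.min(out[i], f[i]);
    }
  }
  return out;
}
```

The outline, shadow, and halo shader then runs once over the combined field instead of once per letter, so no letter's effects can draw over a neighbor.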
SDF antialiasing
Last time I was looking at letter spacing with my renderer to see how it compared to Google Chrome on Mac. But while doing that I noticed that their antialiasing looked nicer than mine. So I tweaked parameters, including antialias edge width, gamma, and threshold bias.

My renderer (first) vs Google's (second)

I got results that were close to Google Chrome on Mac, but beyond that it became unclear which variant was actually better. I kept tweaking parameters:

Many attempts to tweak parameters

Sometimes it came down to looking really closely at the pixels. But which is better? I don't know. Some of these comparisons will look better in the original blog post than in a feed reader.

My renderer (first) vs Google's (second)

I realized after a while that this is a rabbit hole. I had to force myself out of this endless tweaking cycle. Besides, Chrome on Mac is different from other browser + windowing systems, so I shouldn't spend all my time trying to match it. But two months later, I revisited antialiasing, because I needed to better understand it to implement halos and drop shadows. This happens a lot with my experiments. I'll discover something, then I'll tweak a lot, then I'll put it away for a while, and later I can come back to it and try to understand what I did.

To implement antialiasing, let's look at the mapping from distance to color:

Distance maps to either black or white

Here's the output. Pixels are either black or white:

The letter e rendered with no antialiasing

For antialiasing we want to smoothly transition between colors, so that it looks like this:

The letter e rendered with antialiasing

But how much? If we do too much, it looks blurry:

The letter e rendered with more antialiasing

We want it to scale based on the output pixels, not the input distance field. And that means we need to know something about the number of pixels on screen. On the msdfgen page there's a shader that includes antialiasing.
They're measuring screenPxRange, representing how many pixels in the output correspond to one "distance unit" in the signed distance field input. If we want antialiasing to occur over edge_blur_px pixels in the output, we can divide edge_blur_px ÷ screenPxRange to find out what signed distance range this represents. For example, if we want to antialias over 2 pixels, and screenPxRange is 8 px / distance unit, then 2 px ÷ 8 px / distance unit is ¼ distance units. The msdfgen code will antialias between 0 and +¼. Another option would be to antialias between −⅛ and +⅛. This is what the blending looks like: Color blends between black and white. The slope of that line is screenPxRange / edge_blur_px. I wanted to get a better sense of what edge_blur_px should be. Over how many pixels should I apply antialiasing? I had previously tweaked it, then made the antialiasing width an interactive parameter to tweak faster. When I revisited it a few weeks later, I realized it'd be better to see all the outputs at once instead of using interactivity to see one at a time. For more on one interactive visualization vs multiple non-interactive ones, see Bret Victor's page about "Ladder of Abstraction". Testing a range of antialiasing widths. I had set edge_blur_px to 1.0 in my previous tweaking, and this confirms that 1.0 is a reasonable choice. Lower values from 0.5 down look a little blocky, and higher values from 2.0 up look a little blurry. I decided to zoom into that range: Testing a range of antialiasing widths. These may all look the same at first, but look closely with a magnifying glass and you'll see differences. From eyeballing this, I think maybe 1.2 or 1.3 might be the best choice, at least for large text. I don't have any explanation for this. Could it be 1.25? Could it be sqrt(1.5)? If I looked at other sizes and characters, would I conclude it has to be higher, like sqrt(2) or 1.5? Is there signal processing math to prove the best value? Would it be different with gamma correction? 
Should I use smoothstep instead of linearstep? I don't know. I'll set it to 1.2 for now. I'm quite happy with what I have, but not happy with how long it took. Two months went by between the first and last image on this blog post. I had many fixes and tweaks in those two months. I'll describe those in the next few posts.
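The distance-to-opacity ramp described above is easy to sketch outside a shader. This is a minimal illustration of the math only, not the actual shader; the function name is mine, and I've centered the ramp on the d = 0 contour (the "−⅛ to +⅛" option rather than msdfgen's "0 to +¼"):

```python
def sdf_alpha(signed_distance, screen_px_range, edge_blur_px=1.2):
    """Map a signed distance (negative = inside the glyph) to opacity in [0, 1].

    screen_px_range: output pixels per distance unit, so the linear
    blend spans edge_blur_px pixels on screen, centered on d = 0.
    """
    slope = screen_px_range / edge_blur_px
    return min(1.0, max(0.0, 0.5 - slope * signed_distance))

# The post's example: antialias over 2 px at 8 px per distance unit,
# so the blend covers 1/4 distance units (here, -1/8 .. +1/8).
edge = sdf_alpha(0.0, 8.0, 2.0)     # -> 0.5, halfway at the contour
inside = sdf_alpha(-0.125, 8.0, 2.0)  # -> 1.0, fully opaque
```

Using linearstep here matches the straight-line blend in the diagram; swapping in smoothstep would just change the ramp's shape, not its width.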
www.redblobgames.com
January 1, 2025 at 4:46 PM
SDF letter spacing
My summer project is to work on labels for maps. In the previous post I described how I created outlines, and how I had a bug in the rendering. While looking closely at text to fix that bug, I noticed in one of my tests that the k and s seemed too close together. The h and e seemed a little too far apart. Letter spacing issues. The spacing between characters is normally set in the metrics for the characters. There's also a way to override the spacing between specific pairs of characters, using "kerning tables", named kern and GPOS in TrueType. I don't have kerning implemented. Hypothesis: I need to implement kerning. To test this, I rendered k and s with other letters to see if either one was the issue: Testing spacing of s and k. It does seem like k is generally too close to the letter to the right of it. So that suggests it's the letter k and not a kerning issue. But I was curious about kerning, so I checked the font data, and this font (Fira Sans) doesn't have a kerning table in it. So that means k really is a little too close to its neighbor. I verified this by checking the rendering in Google Chrome on the Google Fonts web site (second image) and comparing it to my rendering (first image): Spacing issues. Sure enough, both the h + e and k + s spacing issues are there. So that's just how the font is. Ok, I guess there's nothing I can do here, at least for this font. Later I will try other fonts, and then I can revisit this issue. I was extremely pleased that my font renderer looked so close to Google's, not only the spacing but also the shapes. Looking closely at the edges led me down another rabbit hole … a tale for the next blog post.
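For what it's worth, here's a rough sketch of how pair spacing would work if the font did have kerning data: each glyph advances the pen by its advance width, plus a per-pair adjustment from the kern/GPOS tables (zero for a font like this copy of Fira Sans, which has no kerning table). The function name and the metric values are made up for illustration:

```python
def pen_advances(text, advance_widths, kern_pairs):
    """Return the x offset of each glyph: the running sum of advance
    widths plus per-pair kerning adjustments (an empty kern_pairs dict
    models a font with no kern/GPOS data)."""
    x, positions = 0.0, []
    for i, ch in enumerate(text):
        positions.append(x)
        x += advance_widths[ch]
        if i + 1 < len(text):
            x += kern_pairs.get((ch, text[i + 1]), 0.0)
    return positions

# Hypothetical metrics in font units, for illustration only:
widths = {"k": 500, "s": 450, "h": 550, "e": 500}
positions = pen_advances("ks", widths, {})             # no kerning
kerned = pen_advances("ks", widths, {("k", "s"): 20})  # widen the pair
```

With no kerning table, the second glyph lands exactly at the first glyph's advance width; a positive kern value would push the s further from the k, which is the adjustment this font doesn't provide.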
www.redblobgames.com
January 1, 2025 at 4:40 PM
SDF font outlines
In the previous post I introduced my summer project, to render labels on maps. As part of this, I want to be able to draw outlines, halos, and drop shadows. Signed distance field fonts are well suited for this. The basic use is to consider the signed distances -1 to 0 to be "inside" (filled) and signed distances 0 to +1 to be "outside" (transparent). Mapping distance to color. To draw an outline, we can add another range, maybe 0.0 to +0.2: Adding outlines to the distance map. While "pixel peeping" the quality of the outlines, I noticed a little bit of black interior color leaking outside the white outline: Noticed a tiny bit of black outside the white outline. What could be causing this? I looked through the code but nothing was obvious. To troubleshoot/debug, I need to make the problem happen reliably. Once I can reproduce it, I can start generating hypotheses and then testing them. So the first step is to make sure this problem happened on the initial run, and not only after changing something interactively. I made sure the d got rendered in that spot and that the black color leak was there. The next step is to come up with a hypothesis. I thought that the rendering of black pixels was extending past the rendering of white pixels. The next step is to come up with a test to prove or disprove the hypothesis. If the black pixels were rendering outside the outline, one way to see this would be to change the colors. I changed the black pixels to yellow, and to my surprise, yellow did not go outside the outline. I should've changed the outline color too, to make it more visible, but in this screenshot we can see that what's outside the outline is still black, not yellow: It's not the interior color that's leaking out. This means my hypothesis was wrong. That's ok. Debugging may take several hypotheses and tests. But what could cause black pixels outside a sprite, when none of the sprite colors were black? 
This is when I remembered articles I'd read from Tom Forsyth, John Carmack, Eric Haines, and others saying that I should be using premultiplied alpha to avoid black fringing around sprites. Do I really need that? I don't have sprite graphics here. But that became my next hypothesis. How can I test it? By changing the blending. This fixed the problem: Noticed and fixed black leaking outside the outline. I think most people would be happy with fixing the bug, but I call this only a partial success. I fixed it but don't understand why my previous code was broken. And sadly, I hadn't checked in the buggy version, so I can't easily go back and study the cause. Without understanding the cause, I may end up making the same mistake again in a future project. If I run into this bug again I will want to study it more closely. This was one of many bugs and mysteries I encountered in this project.
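To illustrate why premultiplied alpha fixes this kind of fringe (a general sketch of the effect, not the actual renderer code): when a texture sample falls between an opaque white texel and a fully transparent texel whose hidden color happens to be black (common when a texture is cleared to zero), interpolating straight-alpha values produces a gray half-transparent sample, while the same interpolation in premultiplied form composites back to pure white:

```python
def lerp(a, b, t):
    """Componentwise linear interpolation of two RGBA tuples."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(4))

white = (1.0, 1.0, 1.0, 1.0)  # opaque white texel
clear = (0.0, 0.0, 0.0, 0.0)  # transparent texel, hidden color black

# Filtering halfway between them yields a half-transparent *gray*:
straight = lerp(white, clear, 0.5)  # (0.5, 0.5, 0.5, 0.5)

def composite_straight(src, dst):
    # classic blend: out = a * src.rgb + (1 - a) * dst
    a = src[3]
    return tuple(a * src[i] + (1.0 - a) * dst[i] for i in range(3))

def composite_premultiplied(src, dst):
    # premultiplied blend: out = src.rgb + (1 - a) * dst
    return tuple(src[i] + (1.0 - src[3]) * dst[i] for i in range(3))

white_bg = (1.0, 1.0, 1.0)
fringe = composite_straight(straight, white_bg)        # gray: (0.75, 0.75, 0.75)
no_fringe = composite_premultiplied(straight, white_bg)  # white: (1.0, 1.0, 1.0)
```

The same interpolated sample gives a visible gray fringe with straight-alpha blending, but stays white with premultiplied blending, because the transparent texel's hidden black no longer contributes any color.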
www.redblobgames.com
January 1, 2025 at 4:40 PM
Labels on Maps
My friend L recently mentioned that he hadn't seen any blog posts from me. It's true, I haven't posted for a while. Earlier this year I had explored signed distance field fonts, in particular using Viktor Chlumský's multi-channel distance fields. I really enjoyed the many experiments I did, and I learned a lot. I had intended to use it in a real game project but the timing wasn't right. So I put it away. Then before I got started on a new project, life happened. I had to take a break and attend to other things. I haven't been blogging because I haven't had projects to blog about. Then in July I started thinking about other places I could apply the font code. I thought back to my 2019 blog post about map annotations, where I observed that even small amounts of text added to a procedurally generated map could make the map much more interesting. I started dreaming up a new map generator. Broadly, the tasks fall into these four categories:

1. Procedurally generate a map that has interesting "point" features including chasms, volcanos, waterfalls, and towns.
2. Identify large scale "area" features on a map such as peninsulas, bays, and mountain ranges.
3. Generate names for both point and area features, tied into geography, history, cultures, etc.
4. Place labels on the map corresponding to these features.

Normally I'd approach these as separate steps, but I wanted to consider the possibility that the steps influence each other. For example, what if I identify two adjacent mountain ranges that are each too small for a label? I could modify the map to turn them into one larger mountain range. Maybe I modify a label to match the size of the mountain range, or modify the mountain range to match the length of the label. Maybe I favor east-west oriented geographic features because it's easier to read the labels for them. Before I got too far, I wanted to get a sense of what the final maps would look like. 
Seeing the end result can help me decide if this is going to be inspiring or mundane. For now, I can manually identify features and generate names to get started. I wanted to quickly get something running, the equivalent of rendering a triangle in a 3d world. I used a static map image and then I drew text on top of it using my distance field font renderer from earlier this year: Initial label test. That's the "hello world" level for this project. The next step was to try text that I might actually want to render: Outlined labels on a map. I decided yes, this would be a nice summer project. The next step was to spend a lot of time reading about map labeling. I made notes for myself. I don't normally share these notes because interpreting them is dependent on what's in my head, but I'm going to share them here so you can get a sense of the kinds of notes I take while reading:

- https://en.wikipedia.org/wiki/Automatic_label_placement and also a paper evaluating algorithms; suggests simulated annealing is best
- for areas - polylabel
- Positioning Names on Maps - paper by Edward Imhof, 1975, paywall
- https://observablehq.com/@veltman/centerline-labeling - @veltman's page used by Azgaar's project
- heredragonsabound has many posts on this subject
- mewo2's post http://mewo2.com/notes/terrain/ → code
  - city labels are placed on one side of the city icon, with penalties for overlapping with other things
  - region labels get a score for each possible location, with overlaps decreasing score, bonus for being over land
- https://heredragonsabound.blogspot.com/2017/04/use-force-layout-luke.html also - on imgur
  - scott says mewo2's tries label positions but doesn't let them move around; he's using force layout instead
- simulated annealing part 1 and part 2
  - uses simulated annealing because force layout got stuck in local minima
  - tip: put margins on the left/right of labels so that they don't merge with an adjacent label
  - tip: save intermediate steps because sometimes with simulated annealing the final result is worse
  - tip: spend more time moving labels that are currently bad, and not the ones that look good already
  - tip: use a spatial hash to speed up the intersection tests
- labeling areas is different than labeling points
  - some shapes are much harder to label, but he avoids making those shapes
- labeling coasts part 1, part 2, part 3, part 4, part 5, part 6
  - measures the curvature of the coastline to decide what goes there (coast label vs bay)
  - labels along a curved path didn't look good in his experiments, but arcs worked ok
  - to make a mysterious area like "The Lost Coast", eliminate some of the features generated there
  - he also has a special "islands at the end of a peninsula" feature he wants to label
  - he's using code to detect existing peninsulas and bays; try the simplest thing, test it a lot, tweak it
  - bays can have multi-line labels
  - also label "points" as navigation aids
- labeling oceans part 1 and part 2
  - unlike other area labels, there's an implied extension of the ocean past the map border
  - places candidate circles in water, bonus for touching the map border, might be better if circle goes partly off the map
  - labels along an arc; arc curve should be pointed away from the center of the map; label text should be upright
- river path labels part 1, part 2, part 3, part 4, part 5, part 6, part 7
  - looks for less-curvy location along the river
  - smooths out the path using Visvalingam's Algorithm
  - offsets the label from the river
  - uses simulated annealing to rule out bad placements
  - uses the Greiner-Horman polygon clipping algorithm to detect polygon intersection; the Martinez-Rueda can handle more cases; Sutherland-Hodgman is faster but more limited, and he uses it first before Greiner-Horman
  - getting the area of the polygon intersection worked better for overlap avoidance than a boolean yes/no intersection
  - he plotted number of iterations vs quality of results, and found most of the improvement happens at the beginning
  - check the font metrics to make sure you're getting the bounding box you want
  - don't need to label all rivers; he labels the ones that are "substantially longer" than the river name, and only ones that are straight enough
  - Imhof's paper suggests adding more labels for the same river, above major fork points
  - he's using svg so he can reuse svg's text along path, zoom for debugging, and other svg features
  - using a rectangle approximation to the curved path led to labels not being placed right, so switched to a polygon approximation
  - fixing upside down labels was trickier than it seemed at first
  - sometimes it's ok to have labels drawn on top of graphics (forest, mountains), but masking/outlining can help make it stand out
  - to make labels look hand-drawn, he offsets/rotates characters occasionally
- island labels part 1, part 2, part 3
  - he generates groups of islands with a separate algorithm, not just single islands as part of noise generation
  - draws convex hull around them; wants a label for the whole group
  - had an issue where an island merged into the mainland, so the labels were wrong
  - when there are lots of islands, the labels are often too crowded
- https://frozenfractal.com/blog/2024/7/24/around-the-world-18-charting-the-seas/
  - instead of force layout or simulated annealing, tries a bunch of possibilities in one pass

I'll sometimes take parts of my notes and turn them into a proper page, but most of the time I don't. I use text files instead of bookmark services because I want to be able to group things together in outline form. I hear good things about Obsidian but I use emacs org-mode. That's just for step 2 of the four steps I listed. There are also lots of things to learn for step 3 and step 4. After reading all of this, I felt overwhelmed. Scott's heredragonsabound blog is amazing, and you can see he's put many years of work into these problems. I wanted a summer project, not a multi-year project. I decided the scope of this project was too large for me with my current skill level. To level up, I should work on smaller projects first. 
So I started a one-week project to manually place and style labels on maps, focusing on step 4. That was six weeks ago. I'm still working on it. I've run into many more bugs and rabbit holes than I expected. I'll blog about them soon!
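As a tiny illustration of the simulated-annealing approach those notes keep coming back to — this is a sketch under my own simplifications (axis-aligned label boxes, total pairwise overlap area as the only penalty, random jitter moves), not code from any of the projects above:

```python
import math
import random

def overlap_area(a, b):
    """Overlap area of two axis-aligned boxes given as (x, y, w, h)."""
    w = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    h = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return max(0.0, w) * max(0.0, h)

def energy(boxes):
    """Penalty = total pairwise overlap; the notes suggest overlap area
    works better for this than a boolean intersection test."""
    return sum(overlap_area(a, b)
               for i, a in enumerate(boxes) for b in boxes[i + 1:])

def anneal(boxes, steps=2000, temp=10.0, cooling=0.997, seed=42):
    """Jitter one label at a time; accept worse layouts with a
    probability that shrinks as the temperature cools."""
    rng = random.Random(seed)
    boxes = list(boxes)
    best, best_e = list(boxes), energy(boxes)
    e = best_e
    for _ in range(steps):
        i = rng.randrange(len(boxes))
        old = boxes[i]
        x, y, w, h = old
        boxes[i] = (x + rng.uniform(-2, 2), y + rng.uniform(-2, 2), w, h)
        new_e = energy(boxes)
        if new_e < e or rng.random() < math.exp((e - new_e) / temp):
            e = new_e
            if e < best_e:
                best, best_e = list(boxes), e
        else:
            boxes[i] = old  # reject the move
        temp *= cooling
    return best, best_e
```

Saving the best layout seen so far (rather than the final one) follows the tip in the notes that the final simulated-annealing result is sometimes worse than an intermediate step.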
www.redblobgames.com
December 25, 2024 at 4:40 PM
Work in progress: heuristics
I have several pages that are unfinished because I can't find an explanation I'm happy with. Sometimes while trying to come up with an explanation, I realize I don't actually understand the topic as well as I thought! One of these topics is heuristics for the A* algorithm. While trying to understand the topic better, I came up with this example: Diagrams showing a nearby and faraway goal, with the same distance heuristic. That wasn't the first attempt at an explanation. But I was still unhappy with it. I decided to remake the diagrams again. The problem I'm trying to demonstrate isn't specific to grids, so I decided to use a non-grid example. Conveniently, I had a non-grid example at the top of the A* page, so I adapted it for this page. I wanted to display numbers on the map but they didn't fit in some of the smaller cells, so I removed all the small cells. New diagram. I rewrote not only the diagrams but also the introductory text. But I'm still not happy with it! I think the grid has some advantages for visualization. I'll try rewriting this section again at some point. There's still a lot more to work on. The introductory section is just to set up the motivation. Section 2 is about the basic insight for improving the heuristic. Section 3 is about the solution. Section 4 is the code. Then there are several more sections, including demos with maps from real games. I've been working on the differential heuristics page since 2015 and I'm still not happy with it! Maybe I need to rethink my strategy, either by reducing the scope of the page or by planning to rewrite the page after I publish it. I love the idea — it's very little code (<20 lines?), it can produce a big speedup, and it's not limited to grid maps like some other A* optimizations. My general heuristics page is in even worse shape. The demos on that page are completely broken. I feel bad that after all this time I still haven't finished these pages. I'll keep working on them (slowly).
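For reference, here's roughly why differential heuristics are so little code — this is a sketch of the standard technique (precompute exact distances from a few landmark nodes, then use the triangle inequality to get an admissible A* heuristic), not the code from the unfinished page:

```python
import heapq

def dijkstra(graph, source):
    """Exact distances from source to every node; graph maps
    node -> [(neighbor, cost), ...]."""
    dist = {source: 0.0}
    frontier = [(0.0, source)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(frontier, (nd, nbr))
    return dist

def differential_heuristic(landmark_dists, node, goal):
    """For each landmark L, |d(L,node) - d(L,goal)| <= d(node,goal)
    by the triangle inequality, so the max over landmarks is still
    an admissible estimate of d(node, goal)."""
    return max((abs(d[node] - d[goal]) for d in landmark_dists),
               default=0.0)

# Tiny line graph a-b-c-d with unit edges; one landmark at node "a".
graph = {"a": [("b", 1)], "b": [("a", 1), ("c", 1)],
         "c": [("b", 1), ("d", 1)], "d": [("c", 1)]}
landmarks = [dijkstra(graph, "a")]
h = differential_heuristic(landmarks, "b", "d")  # |1 - 3| = 2, the true distance
```

On this line graph the single landmark gives an exact heuristic; on real maps a handful of well-placed landmarks gives estimates that are much tighter than straight-line distance, which is where the speedup comes from.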
www.redblobgames.com
December 2, 2024 at 4:54 PM