Charlie Loyd
@vruba.bsky.social
A pixel/geography/other person in Oakland. With @rahawahaile.bsky.social. Marginally more active at @[email protected] on mastodon. He/him.
(But also: hire me!)
February 1, 2026 at 11:13 PM
So in the actual implementation, Potato expects (1) a pan band, (2) the particular spectral sensitivities, and (3) the particular spatial artifacts of the WV-2/3 sensors. However, almost everything in it on a conceptual level should translate for … pretty much any visible sensor.
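(For a sense of where those sensor-specific inputs plug in, here's the classic weighted, Brovey-style pansharpening baseline in Python. To be clear, this is not Potato, which is learned; the function and the weights are invented for illustration. But it shows why point (2), the per-band spectral sensitivities, has to be sensor-specific.)

```python
import numpy as np

def brovey_pansharpen(pan, ms, weights):
    """Classic weighted (Brovey-style) pansharpen: a baseline, not Potato.

    pan:     (H, W) high-res panchromatic band.
    ms:      (B, H, W) multispectral bands, upsampled to pan resolution.
    weights: (B,) rough contribution of each MS band to the pan band's
             spectral response; this is where the sensor's particular
             spectral sensitivities enter.
    """
    # Synthesize a smooth estimate of the pan band from the MS bands.
    synth_pan = np.tensordot(weights, ms, axes=1)         # (H, W)
    # Ratio-scale every band so spatial detail comes from the real pan.
    ratio = pan / np.maximum(synth_pan, 1e-6)
    return ms * ratio[None, :, :]

# Toy usage; band count and weights are made up for illustration.
rng = np.random.default_rng(0)
ms = rng.uniform(0.1, 0.9, size=(4, 64, 64))
pan = ms.mean(axis=0)
sharp = brovey_pansharpen(pan, ms, np.array([0.25, 0.25, 0.3, 0.2]))
```

(The ratio trick is also why mismatched weights tint everything: the synthetic pan disagrees with the real one, band by band.)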
January 5, 2026 at 10:31 PM
this is obviously ai
December 26, 2025 at 12:02 AM
(Some of it is in the source data. You can see Potato drawing some non-physical “shadows” around the boats, for example; it’s definitely not filtering as much as it should be here. But that can only be part of what’s going on.)
December 25, 2025 at 5:52 AM
Yeah. I have some hunches about what’s going on, but I’ve tried not to spend time even informally reverse-engineering it. Knowing exactly what’s going on here wouldn’t help me do anything I want to do.
December 25, 2025 at 5:47 AM
This one is less subtle. Look at the paddleboards’ colors and the ringing artifacts (dark halos) around the paddleboards, boats, etc. Also, those faint diagonals in the water in the standard image? They don’t diffract. They’re artifacts, not ripples.
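(Dark halos like these are a classic overshoot signature. A one-dimensional toy, not the commercial pipeline, just the generic mechanism: unsharp-masking a bright target on a dark background produces darker-than-background ringing beside it.)

```python
import numpy as np

# A bright "boat" on dark water, in one dimension.
signal = np.zeros(40)
signal[18:22] = 1.0

# Unsharp mask: sharpened = original + amount * (original - blurred).
blurred = np.convolve(signal, np.ones(5) / 5.0, mode="same")
sharpened = signal + 1.5 * (signal - blurred)

# Negative values right next to the boat: the dark halo.
print(sharpened.min())  # -0.6
```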
December 25, 2025 at 4:59 AM
Stand-up paddleboards and boats, Marina del Rey, 2025-01-16 (CID 103001010C12B000). L: standard, R: Potato.
December 25, 2025 at 4:58 AM
One of the big aims is to make images that look like photos, pictures, not just visualizations of data that happens to be visible light. (Nuance on this is in the essay in docs/personal.md.) So putting aside technical details, what I’m looking for here is a sense of seeing a real moment.
December 24, 2025 at 11:46 PM
Boats at a breakwater, Manila. A subtler one, maybe – zoom in? 2025-11-13, latitude 14.5818, longitude 120.9576, CID 10400100770EF000. Commercial off-the-shelf pansharpening on the left, Potato on the right.
December 24, 2025 at 11:46 PM
Secret Potato lore (it’s in the docs, but not the interesting part of the docs): I hand-rated more than 1,400 satellite images on several quality axes to filter the training data. I put a lot of city miles on QGIS. This was a terrible idea, but I chose to be guided by the sunk-cost fallacy.
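(Mechanically, the filtering step is the mundane part; the labor was all in the ratings. The shape of it, with axes and thresholds invented here for illustration, not Potato's actual ones:)

```python
import pandas as pd

# Hypothetical hand-ratings; the quality axes and cutoffs are made up.
ratings = pd.DataFrame({
    "cid":             ["img_001", "img_002", "img_003"],
    "haze":            [4, 2, 5],   # 1 = unusable, 5 = clean
    "noise":           [4, 4, 3],
    "misregistration": [5, 3, 4],
})

# Keep only images that clear a minimum bar on every axis.
keep = ratings[
    (ratings.haze >= 3)
    & (ratings.noise >= 3)
    & (ratings.misregistration >= 4)
]
print(keep.cid.tolist())  # ['img_001', 'img_003']
```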
December 24, 2025 at 10:33 PM
(This is me gently reminding any satellite data execs who might be reading that if you want people to increase the value of your data, at some point you have to let them see your data. “They’ll pay us to improve our product” is not the 🌌🧠 strategy you seem to think. Release large sample datasets.)
December 24, 2025 at 5:14 PM
Short version: As shipped, it’s narrowly adapted to the particular artifacts of the WV-2/3 sensor. But I expect it to be adaptable to others with less work than starting from scratch would be. (If I’d had a good pool of Planet training data, I would have tried!)
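(The generic shape of that adaptation would be ordinary transfer learning: keep the shared body, swap and retrain the sensor-specific layers. A toy PyTorch sketch with an invented architecture; this says nothing about Potato's actual internals.)

```python
import torch.nn as nn

# Toy stand-in for a pretrained pansharpening net. The architecture is
# invented for this sketch and is not Potato's.
class ToySharpener(nn.Module):
    def __init__(self, in_bands):
        super().__init__()
        self.head = nn.Conv2d(in_bands, 32, kernel_size=1)  # sensor-specific
        self.body = nn.Sequential(                          # shared across sensors
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.body(self.head(x))

model = ToySharpener(in_bands=9)               # e.g. pan + WV-2/3's eight MS bands
model.head = nn.Conv2d(5, 32, kernel_size=1)   # re-init for a pan + four-band sensor
for p in model.body.parameters():              # freeze the body; fine-tune the head
    p.requires_grad = False
```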
December 24, 2025 at 3:58 PM
Happy birthday!
December 24, 2025 at 5:23 AM
Sure, by the Pan band.
December 24, 2025 at 12:01 AM
Look at the yellow bases of the lamp posts, the edge between road median and paved surface, and the details in the rails. Look at the vegetation: which looks more like real plants? But the one that gets me is the roof color. Google’s own user-submitted data shows who’s got it right.
December 23, 2025 at 11:47 PM
Here’s a highway by a switch yard at the edge of the Port of Durban, South Africa. (I prefer mundane test images over landmarks.) CID 10400100770EF000; 2022-04-02. Latitude -29.8937, longitude 31.0134. Google Earth on the left, Potato on the right.
December 23, 2025 at 11:45 PM