Wade Dorrell
waded.org
@waded.org
350 followers · 150 following · 690 posts
Quality, human factors, gardening, denominators, negative space, late-stage dadops, LLAP, LLTBTB. Ex MSFT, ex actionable data startups, nunc fintech
The bowl of Halloween gum survives another year. We're never around and the neighbors t-or-t elsewhere, so every year we put out this gum we got for a kids' party years ago. I try one every year; they're still fine, but they were never good. 6 more pieces, 6 more years?
🤣 This slop mobile game company replying to every reviewer as "Dear Lord." English, as she is spoke.
Not me making a bracket to hang startling ghost decoration behind doors at eye level. 👻
Always with the agent trying to use Guava in new Java code, and giving me code to review that doesn't compile 🤦
Ok, so the cockles is where the children that form the heart gather, like a playground?
'tis the season 👻
rollingstone.com is increasingly a site I visit because its design is so 90s (back when my newspaper designs for the computer were just newspapers, but on the computer.)
Oh yeah, well the You Just Replied To A Computer Reply Guy Store called, and they're all out of you!
True, although the hint in the preceding room made this one really obvious for me the first time. I also like the sequences where the "wildlife" gives a hint about the action required, rather than the weapon, to make progress.

youtu.be/oNIPE3wk0fA?...
Super Metroid's Greatest Moment
Or maybe it's the worst moment. Super Metroid is much more than its structure. Its maze-like map is filled with memorable moments that make the player question what else is possible. My favorite…
Better leave the garage door open 🌱
Ah, the new Superman is legitimately good. I didn't expect so much goofing off!
So are there completeness of information thresholds that define "assume" vs. "presume" vs. "know?" I'm going with 1%, 2%, and 4% (but that feels high)
Smart avoidance of DRY is fine, but when AI spits out repetitive code, and comment, and readme, and test that repeats the code rather than proving anything, GTFO I said DRY MF'r.
I feel like I already posted about AI initiatives encouraging writing minivans squared, but here's the reference again.
Coining "VX", vibe experience, while in the area. This vibe coding felt/went good, that vibe coding felt/went bad, some one/thing's gotta push it toward the good.
Reposted by Wade Dorrell
I had the distinct honor of being the very first person to donate to Radio Boise. I hadn't even met Jeff yet, but I knew what a boon community radio could be. To see the current management trash the first amendment is infuriating. They have seen the last of my money.
(Also I feel like the "warming up" behavior, where climate doesn't blow until things have warmed up, made it harder to understand the controls. If I press a button I think will turn on the blower, and it doesn't blow right away, it's hard to know I pressed the right button.)
One thing @Tesla could do better w/ the screen is label the climate controls. All these years and I still don't understand the nuance between the 3 defroster-ish symbols; that they're the same symbols all manufacturers use doesn't attack the issue.
Minor habanero harvest 🌱
I must've imported some root weevils at some point. I hope this works; they do a surprising amount of damage! Spent all summer picking the adults off plants at night.
What's with the new ad format that starts with half a second of uncanny AI-alley, then switches straight to the same old gameplay video for one of those shitty machine gun firehose mobile games? (They figured out our eyes like the weird stuff?)
Guns are garbage, pass it on
I'll raise ya peanut butter and raisin AND golden raisin sandwich!
Who'd have guessed that language models hallucinate because training and evals reward guesses over acknowledging uncertainty? Certainly not most people. I mean come on. We deserve cookies.
openai.com/index/why-la...
Why language models hallucinate
OpenAI’s new research explains why language models hallucinate. The findings show how improved evaluations can enhance AI reliability, honesty, and safety.