Cody
@cody.chaos.net.nz
Correct, and the Himalayan is one of them.
November 8, 2025 at 3:59 AM
The Himalayan Birkin is an extremely limited production run. A normal Birkin is pricey ($10-20k), but Himalayan ones (if you can convince Hermes to sell you one) can be over $100k and can go for well over that on the secondary market. Every Himalayan is a collector's item.
November 8, 2025 at 3:56 AM
What's weird is that when you write it out like that, it sounds like sarcastic mockery. "Sure, and the dog ate your homework too, right?"

And yet!
October 28, 2025 at 11:45 AM
Reposted by Cody
the moral of Trump is that a very small minority of people are devoted fascists but a pretty large portion of people are utter morons who will support fascism out of their own sheer stupidity
October 18, 2025 at 3:57 PM
The critique, I think, is that it should read "violating the law" not "reversing policy."

The situation is worse than described, and part of a trend of journalists underselling the scope of rule and norm breaking.
October 8, 2025 at 4:03 AM
Wouldn't surprise me if she was plagiarising someone, but can we not pretend a random AI hallucination means anything? Zero informational content.
September 24, 2025 at 12:19 PM
It certainly means "more senior" but may have other meanings. Usually one or two steps above "senior" but may be more. Usually just another engineer but may be more independent or research focused. Titles are a dumpster fire in this industry.
August 16, 2025 at 7:48 AM
I believe that is a good summary of the current state of the tech. I don't want to rule out an unexpected development; the field has had plenty. But the process that gave us GPT-5 is fundamentally not trending towards AGI.
August 8, 2025 at 1:22 PM
That doesn't mean a "guess machine" isn't useful or valuable. A lot of the time, that *is* what humans function as.

But the comparison you're making suggests you are missing something fundamental about human cognition or LLMs.
August 8, 2025 at 12:42 PM
With all due respect, it is, in fact, clear that humans at times rise far beyond that and can show deep understanding, comprehension, and reflection, generating novel insights, concepts, and ideas.

LLMs have never done so; it's unclear if the technology will ever be capable of such.
August 8, 2025 at 12:37 PM
That's the problem in a nutshell. We expect correct human-written answers to have certain signifiers. LLMs get the signifiers right every time, but they are correct merely by chance. So, not only do we struggle with the errors, we struggle with the fact that they're harder to detect.
August 8, 2025 at 6:01 AM
It really is just fancy autocomplete, though. And yes, it's miraculous, but also, it generated something that *looks* like nuanced interpretation, but no interpretation happened. That's all it can do. The most advanced models will, as of today, tell you there are three "b"s in "blueberry".
August 8, 2025 at 5:57 AM
Reposted by Cody
HALPING!!!!!!!!
(now that I made Ken cringe, because my halping can never be predicted)
July 27, 2025 at 4:22 AM