No one listens to me
@dpiepgrass.bsky.social
The schelling point no one visits. Aspiring rationalist EA LW. Evidence + Logic + Epistemics = Truth, Clearer Thinking, and 80000 Hours of failure
I also found it odd that Dwarkesh and Collison didn't point out that democratic governments are, you know, democratic while corporations are not...

I guess you have to be careful what you say to the world's richest man.
February 7, 2026 at 6:40 AM
But oh well, I suppose somebody at the GAO did at least try to make a reasonable model, and that $233 billion to $521 billion is the best available estimate of fraud.

As for Musk's implication on Dwarkesh yesterday that maybe millions of people supposedly aged 115+ might be collecting social security checks...
February 7, 2026 at 6:13 AM
- It has pictures of a letter sent from OMB to the GAO criticizing the report and, like me, complaining about the lack of breakdowns (OMB does agree that the fedgov doesn't collect enough data though)
February 7, 2026 at 5:51 AM
Anyway, yeah, it was ironic. My deleted message specifically discussed the affective death spiral dynamic where ass-kissers in an influencer's space are rewarded and critics are punished, not for being wrong, but for being critical. (I was speaking of YouTube, but it's a general phenomenon of course.)
January 30, 2026 at 2:04 PM
After attacking the Ukrainian energy system for four winters in a row, Russians finally created a dangerous situation in Kyiv.
January 22, 2026 at 1:09 PM
And wow, he actually said this. I wonder if, among all his persuasion videos, he's like "protip: make up gruesome shit about your enemies."

(His claim doesn't even make sense, as most police are Republicans.)
January 21, 2026 at 8:31 AM
"Trump joined the Republican tribe to win the presidency. Now I was joining the Trump tribe. For a war against Hillbullies [ie pro-Hillary Clinton bullies]. I was all in."
January 21, 2026 at 8:20 AM
Scott Alexander tells me that Scott Adams once said "the best advice I would give to white people is to get the hell away from black people; just get the fuck away", which got him cancelled, which is why he went MAGA. Oh well, he had some nice cartoons anyway.
January 21, 2026 at 8:08 AM
So why is it notable that Ross or DHS leaked this to right-wing news? Because it indicates right-wingers think that this video clears Ross of wrongdoing. It shows Ross standing in front of the car for the first of three shots fired, and to them, nothing else matters.

Renee was a mother of 3.
January 12, 2026 at 9:31 PM
January 9, 2026 at 12:31 AM
The "necessary" death rate from starvation has been reached to declare famine: "two adults or four children per 10,000 people per day".

For Gaza, assuming a population of 2 million, that can be equivalently stated as 2,800 adults "or" 5,600 children "or" 4ish October 7 attacks per week.
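The arithmetic above can be checked directly (a sketch: the 2 million population figure is from the post, and the ~1,200 October 7 death toll is my assumed comparison point):

```python
# Famine threshold: 2 adults OR 4 children per 10,000 people per day.
population = 2_000_000
units = population / 10_000           # 10,000-person units → 200

adults_per_week = 2 * units * 7       # adult-death threshold, weekly
children_per_week = 4 * units * 7     # child-death threshold, weekly

oct7_deaths = 1_200                   # assumed approximate toll
print(adults_per_week)                      # 2800.0
print(children_per_week)                    # 5600.0
print(children_per_week / oct7_deaths)      # ≈ 4.7 per week
```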
August 28, 2025 at 12:25 PM
It's not a misinformation or disinformation campaign, but Mercator is a misleading projection, and it's very dumb that Google has done nothing to avoid it (at ultra-high zoom levels) for over 20 years.
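To quantify how misleading Mercator is: its local scale factor is sec(latitude), so areas are inflated by sec²(latitude). This is standard map-projection geometry, not anything specific to Google's implementation:

```python
import math

def mercator_scale(lat_deg: float) -> float:
    """Linear scale inflation at a given latitude under Mercator."""
    return 1.0 / math.cos(math.radians(lat_deg))

# At 60°N (e.g. southern Greenland), lengths double and areas quadruple
# relative to the equator.
for lat in (0, 45, 60, 80):
    s = mercator_scale(lat)
    print(f"lat {lat:2d}°: lengths ×{s:.2f}, areas ×{s * s:.2f}")
```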
August 15, 2025 at 8:30 AM
I can't bring myself to put a 👍 on a Professor Dave video, but he invited some scientists on his show to discuss Sabine Hossenfelder's claims, and I think they have at least as much right to be heard as the YouTube superstar herself.
www.youtube.com/watch?v=oipI...
August 12, 2025 at 3:48 AM
This isn't a high concentration of nanoplastics, but assuming they are normal plastic particles, they should contain energy, and their small size should encourage evolution of microbes that can consume them.
July 11, 2025 at 4:08 PM
Trump's NOAA proposes eliminating all climate-related research work at NOAA, including the Mauna Loa observatory that has continuously monitored rising CO₂ levels for over 65 years. (as proposed in Project 2025) www.cnn.com/2025/07/01/c...

CO₂ levels continue their gentle exponential trend.
July 3, 2025 at 6:23 PM
Is anybody *really* surprised that The White House officially gave Trump a Sith Lord lightsaber? Someone should do a poll.

He may lie a lot, but that doesn't extend to the subtext, right? The subtext is always like "I'm bad. No, really. We're proper baddies." People know―right?
May 5, 2025 at 7:40 PM
I think people aren't getting some very basic memos. For example: ourworldindata.org/much-better-...
April 6, 2025 at 7:38 PM
You see this place? You have to live here.

People who unfairly tear down and attack others mysteriously find themselves with legions of followers. Why build taller, they think, when you can make others shorter? I'm always amazed how little people care about making Earth better.
April 6, 2025 at 7:33 PM
Oh and yeah, obviously an LLM develops language-agnostic concepts as the diagram shows, but less so in models with less "model scale" (I expect this depends more on compute than on model size, beyond some minimum. And training data is important.)

Anthropic's post: www.anthropic.com/news/tracing...
April 1, 2025 at 1:13 AM
LLMs play characters. That character could have emotion, but the LLM itself doesn't and its AI assistant character mostly doesn't either. So it makes sense that grammatical consistency can override the refusal signal if it's not specifically trained on the situation b/c it has more grammar training.
April 1, 2025 at 1:07 AM
This jailbreak is another case where I think "hmm, not a likely human failure mode". Or is it? Imagine a person being excited to be the Helpful Assistant he was trained to be, so starts giving bomb instructions before remembering that he's not supposed to. But somehow I doubt this is the same thing.
April 1, 2025 at 1:01 AM
This part suggests there could be a fundamental difference, as I'm pretty sure I'm not prone to this failure mode: I can & do separately detect uncertainty in each separate detail of everything I say.

But you see that Trump guy? If he has any uncertainty sense, he spent his whole life ignoring it.
April 1, 2025 at 12:45 AM
I've been suspecting that LLMs might be unable to tell when they are hallucinating―lacking an internal sense of uncertainty.

This part is consistent with that hypothesis. I wonder though: are humans the same way? Do some people make shit up so much because they never got anti-bullshit training? 🤔
April 1, 2025 at 12:25 AM
Me, I tend to do mental addition in the forward direction, e.g. for 491 + 248, I go "6...13...so 73...9...739."

Humans & transformers can both devote many "parameters" to the problem, but a xformer has the extra luxury of a large working memory & perfect internal calculations, so it can get exotic.
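The "forward direction" strategy above can be sketched in code: add digits from the most significant end, patching the partial answer whenever a carry appears (the function name and carry-propagation details are my own illustration, not anything from the post):

```python
def add_forward(a: int, b: int) -> int:
    """Left-to-right addition, like 491 + 248 → '6...13...so 73...9...739'."""
    da, db = str(a), str(b)
    width = max(len(da), len(db))
    da, db = da.zfill(width), db.zfill(width)
    digits: list[int] = []
    for x, y in zip(da, db):            # most significant digit first
        s = int(x) + int(y)
        if s >= 10:                     # carry: patch earlier digits
            s -= 10
            i = len(digits) - 1
            while i >= 0 and digits[i] == 9:
                digits[i] = 0           # carry ripples back through 9s
                i -= 1
            if i >= 0:
                digits[i] += 1
            else:
                digits.insert(0, 1)     # carry out of the leading digit
        digits.append(s)
    return int("".join(str(d) for d in digits))

print(add_forward(491, 248))  # 739
```

The while-loop is where the "extra luxury" matters: a transformer's perfect working memory makes back-patching cheap, whereas a human doing it mentally has to hold and revise the partial answer.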
April 1, 2025 at 12:05 AM
...what?!

I would've thought the Claude staff were smarter than that. Seems pretty obvious that Claude would be, to the extent its architecture + gradient descent allows, planning out both lines at once before writing the first word ("He"). Transformers attracted billions of $ for a reason. 🤷‍♂️
March 31, 2025 at 11:57 PM