I mean, bluntly, I think that if you think driving a car and using an LLM are fairly comparable in terms of danger to the user (for reference, cars cause 2.38 MILLION physical injuries per year in the US, with 42 THOUSAND deaths), you are wildly miscalibrated, yes.
January 10, 2026 at 11:16 PM
PSA: "Okay so when you watch the video and feel the spike of badfeels/disgust you want to kind of hold still/tense up and then kind of stretch your neck/upper back like you're standing up on your tippie toes, this should mostly neutralize the reaction."
January 6, 2026 at 9:27 AM
PSA: "Okay so when you watch the video and feel the spike of badfeels/disgust you want to kind of hold still/tense up and then kind of stretch your neck/upper back like you're standing up on your tippie toes, this should mostly neutralize the reaction."
Me when I'm the last outgroup poster standing getting mobbed by literally thousands of drunk tankies as displacement behavior for being unable to stop Trump and Miller from completely trashing NATO and the post-WW2 order on blue sky dot com.
January 6, 2026 at 6:21 AM
A study shows that machine learning paper abstracts are becoming more rhetorically exaggerated over time, and that this exaggeration is driven by AI-assisted authorship. It establishes this by extracting the scientific content of thousands of papers and generating many possible abstracts to compare against the published ones. arxiv.org/abs/2512.19908
January 6, 2026 at 4:38 AM
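For readers wondering what "generating many possible abstracts to compare" could look like in practice, here is a minimal sketch of the comparison idea, not the paper's actual pipeline: score the real abstract's rhetorical intensity against alternative abstracts written from the same scientific content. The hype lexicon, the function names, and the toy abstracts are all my own illustrative assumptions.

```python
# Minimal sketch of the comparison idea, not the study's actual method.
import re

# Crude, illustrative hype lexicon (an assumption, not from the paper).
HYPE_WORDS = {
    "novel", "groundbreaking", "unprecedented", "state-of-the-art",
    "remarkable", "significantly", "dramatically", "paradigm",
}

def hype_score(abstract: str) -> float:
    """Fraction of tokens drawn from the hype lexicon."""
    tokens = re.findall(r"[a-z-]+", abstract.lower())
    if not tokens:
        return 0.0
    return sum(t in HYPE_WORDS for t in tokens) / len(tokens)

def exaggeration_vs_baseline(real_abstract: str, generated: list[str]) -> float:
    """Hype of the real abstract minus the average hype of alternative
    abstracts written from the same scientific content (generation stubbed)."""
    baseline = sum(hype_score(a) for a in generated) / len(generated)
    return hype_score(real_abstract) - baseline

# Toy usage: a positive value means the real abstract is more hyped than
# the counterfactual abstracts.
real = ("We present a groundbreaking, state-of-the-art method that "
        "dramatically improves accuracy.")
alternatives = [
    "We describe a method that improves accuracy on two benchmarks.",
    "Our approach raises accuracy by 2.1 points over the baseline.",
]
print(exaggeration_vs_baseline(real, alternatives))
```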
Fantastic mechinterp paper showing that you can identify specific neurons which cause hallucinated answers in LLMs, and that these neurons are specifically associated with the language model trying to follow instructions too hard. arxiv.org/pdf/2512.01797
January 5, 2026 at 11:29 PM
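As a rough illustration of what "identify specific neurons which cause hallucinated answers" might involve, here is a generic diff-of-means toy on synthetic activations, not the paper's actual attribution method; the array shapes, the planted neuron, and the scoring rule are assumptions made purely for the example.

```python
# Toy sketch: rank hidden units by how differently they activate on
# hallucinated vs. grounded answers. Synthetic data, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
n_hall, n_ground, n_neurons = 200, 200, 512

# Fake activations; pretend neuron 42 fires harder before a hallucination.
hall = rng.normal(0.0, 1.0, (n_hall, n_neurons))
ground = rng.normal(0.0, 1.0, (n_ground, n_neurons))
hall[:, 42] += 1.5

# Score each neuron by the standardized mean difference between the two sets.
diff = hall.mean(0) - ground.mean(0)
pooled_std = np.sqrt((hall.var(0) + ground.var(0)) / 2) + 1e-8
scores = diff / pooled_std

top = np.argsort(-np.abs(scores))[:5]
print("candidate hallucination neurons:", top, scores[top].round(2))
```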
It doesn't help that by this point the written word is clearly nearly exhausted as a tool of social persuasion, almost powerless outside of very particular contexts. Short-form text is one of the few places it still has power; in most essay forms it just literally doesn't.
January 1, 2026 at 11:51 AM
Not enough commentary on how this new trend of offering the founders a bunch of money to leave, while keeping the now-worthless corporate vehicle intact without an exit, might end the Silicon Valley startup scene if AI coding reduces the cost of execution to near zero.
December 25, 2025 at 7:20 PM
I need to dig further into this paper later, but the fact that they did ablations on their benchmark performance instead of just posting their score gives me the impression this work contains real insight rather than Goodhart framework slop. arxiv.org/abs/2512.10398
December 23, 2025 at 9:02 PM
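For anyone unfamiliar with why ablations matter here, this toy sketch shows the kind of reporting the post is praising: re-run the benchmark with each component disabled and see how much of the headline number each piece actually accounts for. The components, scores, and eval function are made up stand-ins, not anything from the paper.

```python
# Toy illustration of ablation reporting vs. a single headline score.
def run_benchmark(use_retrieval: bool, use_reranker: bool, use_cot: bool) -> float:
    # Stand-in for an expensive eval; returns a fake accuracy.
    return 0.50 + 0.15 * use_retrieval + 0.05 * use_reranker + 0.10 * use_cot

full = run_benchmark(True, True, True)
print(f"full system: {full:.2f}")

for name, kwargs in [
    ("no retrieval", dict(use_retrieval=False, use_reranker=True, use_cot=True)),
    ("no reranker",  dict(use_retrieval=True, use_reranker=False, use_cot=True)),
    ("no CoT",       dict(use_retrieval=True, use_reranker=True, use_cot=False)),
]:
    score = run_benchmark(**kwargs)
    print(f"{name}: {score:.2f} (delta {score - full:+.2f})")
```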
Actually, this picture of an orange was prompted without any drawing tutorial, showing that the other two with drawing tutorials are basically drawn from the same distribution/kind of picture of an orange.
December 23, 2025 at 1:58 AM
So a friend reasonably asks: Do real drawing tutorials work? I tried three real watercolor drawing tutorials vs. the false lemon tutorial (image 4), and only the false lemon tutorial seems to get it all the way out of the slop basin. Even though the false lemon tutorial is like, not a real tutorial.
December 23, 2025 at 1:52 AM
I think often of Orwell's description of the progression of censorship during the Spanish Civil War in his memoir "Homage to Catalonia": how it starts out visible, and how the powers that be slowly take more and more measures to make the censorship invisible.
December 22, 2025 at 11:15 PM
A lot of my original interest in AI art was that it would be able to bring abstract technical futurology concepts to life, and let people who don't specialize in illustration render ideas from their imagination. I gave up on the technology when it seemed like it wouldn't enable that, but 4o does.
December 22, 2025 at 10:39 PM