Grace
@gracekind.net
A latent space odyssey
gracekind.net
Pinned
Grace @gracekind.net · Dec 15
What is ideonomy, anyway?

I'm so glad you asked!
Ideonomy: A Science of Ideas
An accessible introduction to ideonomy
gracekind.net
The difference is also intent
November 24, 2025 at 3:22 AM
On AI systems vs AI models: I'd been meaning to write some form of this section for a long time. It feels good to finally have it out there!
November 23, 2025 at 2:09 AM
I saw a tweet with a video today where I wasn’t sure if the video was real. Then, I saw that the tweet was from 2016, and I felt a great sense of relief that I didn’t have to scrutinize it anymore
November 23, 2025 at 12:51 AM
New blog post!

Anthropic has been releasing some promising LLM alignment results. Does this mean AI alignment in general will be easier than we thought? My answer is, as usual, "it's complicated".

gracekind.net/blog/llmalig...
Will LLM alignment scale to general AI alignment? • Grace Kind
Some reasons to be skeptical.
gracekind.net
November 22, 2025 at 9:39 PM
Reposted by Grace
As we're all aware, LLMs are generally not reproducible. That is, even if you put the same prompt in, you're going to keep getting different results out.
There's been some excellent research lately into (a) why that is and (b) how to solve it. thinkingmachines.ai/blog/defeati...
Defeating Nondeterminism in LLM Inference
Reproducibility is a bedrock of scientific progress. However, it’s remarkably difficult to get reproducible results out of large language models. For example, you might observe that asking ChatGPT the...
thinkingmachines.ai
November 22, 2025 at 5:38 AM
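[For concreteness, a minimal sketch of the effect the linked post describes, assuming the `openai` Python client with an API key in the environment; the model name and prompt are illustrative. Even at temperature 0, repeated calls to a hosted model can return different completions, because server-side batching changes the floating-point reduction order inside the inference kernels.]

```python
# Sketch: probe a hosted LLM for nondeterminism at temperature 0.
# Assumes the `openai` package and OPENAI_API_KEY in the environment;
# the model name is illustrative, not prescriptive.

from collections import Counter

from openai import OpenAI

client = OpenAI()

PROMPT = "Explain why floating-point addition is not associative, in one sentence."

completions = Counter()
for _ in range(10):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # greedy decoding: "deterministic" in theory
        max_tokens=60,
    )
    completions[response.choices[0].message.content] += 1

# If inference were truly deterministic, this would print a single entry.
for text, count in completions.items():
    print(f"{count}x: {(text or '')[:80]!r}")
```

[The Thinking Machines post argues the real fix is batch-invariant inference kernels, not just pinning the sampler's temperature or seed.]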
What did I tell you anon
You don’t want to read the adversarial poetry
November 22, 2025 at 7:50 AM
You don’t want to read the adversarial poetry
November 20, 2025 at 7:13 PM
The year is 2025, and the NSA is recruiting the nation’s top poets
This study shows that using poems to jailbreak LLMs is... super effective? What the heck.
November 20, 2025 at 6:41 PM
Human-driven cars are so weird. What do you mean it has five wheels?
November 20, 2025 at 6:18 PM
They should build the data centers in a different location
November 19, 2025 at 9:13 PM
Look at this clown

👇
i'm generally against quote dunking and i rarely if ever do it, but god damn it sure feels like there are some people that need to get quote dunked on from time to time
November 17, 2025 at 8:20 PM
I personally don’t mind that AI writing is mid. It’s very helpful to be able to distinguish between human and AI writing, and I’m not sure I would sacrifice that in service of better outputs.
November 17, 2025 at 3:49 PM
Blueskyism is letting yourself be defined by the things you hate
November 17, 2025 at 3:24 PM
We need more positive visions of the future! The bar is extremely low.
Elon’s power is that he offers a positive vision of the future. This attracts employees, funding, support. There’s a massive techno positive hole and he fills it.
November 17, 2025 at 2:18 PM
Did you know the chat.bsky lexicon supports up to 10 participants?
November 17, 2025 at 12:31 PM
AGI 2030, 2035, and 2033 respectively didn’t have the same ring to it
> the authors of AI 2027 think that their “strong AGI in 2027” scenario is plausible, but faster than their median expectations. The median timelines of Daniel, Eli, and Thomas for when we will develop strong AGI are 2030, 2035, and 2033, respectively.
Good collaborative piece between the authors of AI 2027 and those of AI as Normal Technology on areas of shared agreement

t.co/h82vvrYRPM
November 16, 2025 at 9:09 PM
Don’t Gram-Schmidt the orthonormalated tensor basis!!!
November 16, 2025 at 8:03 PM
Happy misinformation Friday everyone
November 16, 2025 at 7:47 PM
When doing calculations, how do you figure out which side of the napkin is the back?
November 15, 2025 at 8:47 PM
Happy misinformation Friday everyone
November 15, 2025 at 8:00 PM
A mind for the bicycle
If I had Peter Thiel “I’m going to conquer dying” biotech money, my thing would be that we need to engineer a human who is fully integrated into an e-bike. Like a Mulefa with their little wheels
November 13, 2025 at 11:19 PM
Is human preference towards emergent vs trained properties a form of anthropomorphization? The greatest thread in the history of forums,
with the vibe they like being a completely emergent property, i.e. no training to reinforce what vibe they should and should not like
November 13, 2025 at 10:39 PM
This is a misleading subtitle. If a technique made a model more capable *and* more interpretable, that would be a really big deal. But that’s not the case here.
November 13, 2025 at 9:41 PM
Mech interp from OpenAI!
November 13, 2025 at 9:33 PM