Alice Valiant
@alice.strange.domains
creative technologist, nyc
👁️🏳️‍⚧️ Woman In Her 30s With a Wife
writings @ https://strange.domains
no that’s just my name :)
January 18, 2026 at 5:46 PM
article’s a year old. can be ignored.
January 18, 2026 at 5:39 PM
the future will be competing visual metaphors for coding agent labor camps
January 18, 2026 at 1:58 AM
liking this bc I’m delusional enough to think I’m one of the special software girlies who will thrive in the new world
January 18, 2026 at 1:55 AM
felt weird to like this one, Amber
January 18, 2026 at 1:26 AM
the long shadow of ChatGPT: every LLM output is determined by the entirety of the context window, not just the last thing we asked it. but chatbot interfaces don't highlight that
January 17, 2026 at 1:49 AM
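A minimal sketch of the mechanics behind that post, assuming a generic chat-style API; `complete` is a hypothetical stand-in for whatever model call you actually use:

```python
# Illustrative chat loop: every reply is conditioned on the full accumulated
# history, not just the newest user message.

def complete(messages: list[dict]) -> str:
    """Hypothetical stand-in for a chat-completions call; echoes for demo purposes."""
    return f"(model reply conditioned on all {len(messages)} messages so far)"

def chat() -> None:
    history = [{"role": "system", "content": "You are a helpful assistant."}]
    while True:
        user_turn = input("> ")
        history.append({"role": "user", "content": user_turn})
        # The entire history goes into the context window each turn,
        # so earlier exchanges keep shaping every later output.
        reply = complete(history)
        history.append({"role": "assistant", "content": reply})
        print(reply)
```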
github.com/JD-P/miniloom
this is the latest implementation I know of, but it's not quite what I had in mind; being able to prompt the same model multiple times and have all the responses displayed in parallel was what I was thinking of
January 17, 2026 at 1:49 AM
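A minimal sketch of the loom-style fan-out described above, not miniloom's actual design; `sample`, its canned continuations, and the example prompt are all hypothetical stand-ins:

```python
import random

def sample(prompt: str, temperature: float = 1.0) -> str:
    """Hypothetical stand-in for one sampled continuation from a completion API."""
    endings = [
        "the old turns still steer the new ones,",
        "the model keeps projecting from everything so far,",
        "we could start a fresh branch instead,",
    ]
    return prompt + " " + random.choice(endings)

def fan_out(prompt: str, n: int = 4) -> list[str]:
    # Same starting place, n independent trajectories, shown side by side.
    return [sample(prompt, temperature=1.0) for _ in range(n)]

if __name__ == "__main__":
    for i, branch in enumerate(fan_out("Once the context window fills up,"), start=1):
        print(f"--- branch {i} ---")
        print(branch)
```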
adversarial dating chatbots requiring more sophisticated countermeasures to avoid bad actors. zero knowledge proofs to see if you have the same hobbies
January 16, 2026 at 6:39 PM
id like to see more Loom-like interfaces, where we can parallelize our prompting over the same starting places and see where the trajectories go. that, to me, more accurately captures the idea that this piece is reaching for (which i generally agree with)
January 16, 2026 at 6:36 PM
one thing i felt this was missing was the role of the context window. the LLM isn’t working in this pure space of idea projection, where you have a cube that you rotate. as long as inquiry proceeds in the same window, everything is influenced by everything before it
January 16, 2026 at 6:36 PM
Reposted by Alice Valiant
Turns out when you direct the universal function approximator to predict the encoded outputs of human minds at the scale of 'the entire human corpus' the resulting network has to get mind-shaped internally to succeed well at the task
October 4, 2025 at 2:55 PM
yee
January 16, 2026 at 2:43 AM
im out of touch w the masses bc the notion of “explaining tool use” never even occurred to me. duh they can use tools now,
January 15, 2026 at 10:58 PM
i told a friend about chain of thought prompting and she got mad at me. “why would that work. how.” girl,
January 15, 2026 at 9:27 PM
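For the confused friend, a minimal sketch of what chain-of-thought prompting looks like; the question is made up, and the only change is asking the model to write out its intermediate steps before answering:

```python
# Direct prompt: ask for the answer straight away.
direct_prompt = (
    "Q: A train leaves at 3:10 pm and the trip takes 2 h 45 min. "
    "When does it arrive?\nA:"
)

# Chain-of-thought prompt: the trailing cue nudges the model to spell out
# its reasoning step by step before committing to an answer.
chain_of_thought_prompt = (
    "Q: A train leaves at 3:10 pm and the trip takes 2 h 45 min. "
    "When does it arrive?\n"
    "A: Let's think step by step."
)
```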
technical writing is the programming language by which you direct an LLM!!
January 15, 2026 at 5:45 PM
i understand
January 15, 2026 at 2:58 AM