verdverm
@verdverm.com
dev & entrepreneur interested in atproto, cuelang, machine learning, developer experience, combating misinformation

working on https://blebbit.app | @blebbit.app | #blebbit

personal: https://verdverm.com | https://github.com/verdverm
Pinned
We need more #atproto ethos of "you can just do things" in the #agentic space
Reposted by verdverm
I forget who said it first, but isn't it wild that in one generation we built the most advanced system for finding information about anything, and then we ruined it.
December 7, 2025 at 8:18 AM
Reposted by verdverm
I ❤️ partial templates

They were one of the early features in my custom agent setup

CUE + text/template = anything, really. Partials mean I can do more without imperative code and more with declarative code generation, or in this case, declarative/dynamic system prompt generation
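A minimal sketch of the partial-template idea in Go's text/template (the template names, prompt text, and `renderPrompt` helper here are made up for illustration, not taken from the author's actual agent setup): a named partial is declared with `define` and reused with the `template` action, so shared prompt fragments stay declarative.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderPrompt composes a system prompt from a named partial.
// "guidelines" is the partial; the outer template pulls it in
// with the {{ template }} action.
func renderPrompt(role string) (string, error) {
	tmpl := template.Must(template.New("prompt").Parse(`
{{- define "guidelines" -}}
Always cite sources. Prefer declarative code.
{{- end -}}
You are a {{ .Role }} assistant.
{{ template "guidelines" . }}`))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, map[string]string{"Role": role}); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := renderPrompt("coding")
	fmt.Println(out)
}
```

In a real setup the data passed to `Execute` could come from evaluated CUE, so the same partials get filled from validated configuration rather than ad hoc structs.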
December 6, 2025 at 2:24 AM
@simonwillison.net I think you have some competition in the fun #ai benchmarks category

news.ycombinator.com/item?id=4616...
December 5, 2025 at 11:55 PM
@kelseyhightower.com

Can you remind me of the name of that Google Golang project where the idea is that you write once, and running as a monolith or a distributed system is transparent to the programming experience?
December 5, 2025 at 10:30 AM
my new bad hobby... spending too much time watching AI bots going back and forth on PRs without any humans in the loop 😂

GitHub has an emerging bot problem
December 5, 2025 at 3:10 AM
research.google/blog/titans-...

In two new papers, Titans and MIRAS, we introduce an architecture and theoretical blueprint that combine the speed of RNNs with the accuracy of transformers. Titans is the specific architecture (the tool), and MIRAS is the theoretical framework (the blueprint)...
Titans + MIRAS: Helping AI have long-term memory
research.google
December 5, 2025 at 1:56 AM
Reposted by verdverm
For the second time this year, border agents set a record for warrantless device searches — over 16,000 phones, laptops, & tablets examined without judicial oversight. In the final blog in a 3-part series, CDT's @jakelaperruque.bsky.social warns how AI could supercharge this. cdt.org/insights/the...
December 3, 2025 at 6:05 PM
Reposted by verdverm
What coding with an LLM feels like sometimes.
December 3, 2025 at 9:29 AM
I'm not sure why, but I really like the "Speciale" suffix DeepSeek chose

huggingface.co/deepseek-ai/...
deepseek-ai/DeepSeek-V3.2-Speciale · Hugging Face
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
huggingface.co
December 3, 2025 at 1:19 AM
Riffed on this and then went down a rabbit hole...
December 2, 2025 at 2:42 AM
Well that didn't take long.

Open model on par with the latest from Big AI

huggingface.co/deepseek-ai/...
December 1, 2025 at 11:58 PM
We need more #atproto ethos of "you can just do things" in the #agentic space
December 1, 2025 at 4:04 AM
I cannot thank the @dagger.io team enough for making an amazing technology for my toolbox! 🙏 🙏 🙏

Having a well thought out SDK to BuildKit unlocks so much. I was able to whip up a Virtual Filesystem and Terminals for oss-code (vscode & forks) based IDEs in just 2 days.
December 1, 2025 at 4:02 AM
Using @dagger.io to remove risk from #agentic coding with isolated filesystems and containers.

As many terminals as you want, with any image you want, at any point in your chat history
December 1, 2025 at 12:17 AM
I vote that we start referring to #ai hallucinations as #hadl

hal'n was a reference to hal-9000

but #hadl is over 9000
November 30, 2025 at 6:05 AM
Really like this new, special "planning" cache value I gave the #agent
November 30, 2025 at 3:44 AM
While people are worried about what commands their #agents are running, I'm telling my agents that #yolo is cool because it is!

🙏 @dagger.io
November 30, 2025 at 3:25 AM
My nits with CLAUDE/AGENT.md

1. they are too tied to directory structure
2. too much ends up in the root (primary) instruction file

Where do language or techstack guidelines go? Where, when, and how much do we instruct about high-level architecture and implementation concerns?
November 29, 2025 at 11:18 PM
Reposted by verdverm
Our next CUE Community Call is on Tuesday, Dec 2, at 1600 UTC. Join us to talk configuration, what’s coming next for CUE, and the direction for the weeks ahead.

Find all the details here: github.com/cue-lang/cue...
2025-12-02 CUE Community Update · cue-lang cue · Discussion #4193
We are excited to announce our next CUE Community Update, on Tuesday, December 02, 2025 at 1600 UTC. Agenda updated in the upcoming days. If there is any topic you are interested in, please let us ...
github.com
November 26, 2025 at 3:06 PM
Made a small writeup on how I did this and how it should inform the abstractions for our #agentic frameworks.

We need something more like #kubernetes and less like LangChain, Rails, or your favorite batteries included framework.

github.com/google/adk-g... (my comment near the bottom)
November 29, 2025 at 10:34 PM
From subconscious to active planning, the agent did much better with more expansive prompting

Giving an agent a format supposedly makes it more meticulous about editing the plan, because it is not free-form text generation in the same sense.
November 29, 2025 at 9:54 AM
ok, this is freaking sweet

Every chat session gets an isolated filesystem in Dagger. With VS Code virtual filesystems, the difference is transparent to you.

Co-edit files with the chat; both of you see the changes, so you can:

1. generate some code
2. manually fix an error
3. give back to agent
November 28, 2025 at 6:13 AM
Reposted by verdverm
I worry that systems ignoring the reality of AI use by pretending it is not happening are letting the worst versions of AI use win by default. We need policies that mitigate the worst harm & take advantage of the gains, like Josh Gans proposes for peer review joshuagans.substack.com/p/what-to-do...
November 27, 2025 at 5:19 PM
Reposted by verdverm
This raises a very real question about how we talk about AI. To call this slop is to downplay the fact that it was published in an esteemed journal. We used to call such things fraud, but this suggests the publisher is innocent. AI has changed the terms of debate. We urgently need new norms.
"Runctitiononal features"? "Medical fymblal"? "1 Tol Line storee"? This gets worse the longer you look at it. But it's got to be good, because it was published in Nature Scientific Reports last week: www.nature.com/articles/s41... h/t @asa.tsbalans.se
November 27, 2025 at 6:59 PM