A. H. Zakai
@kripken.com
Software engineer. Loves fantasy novels and Agatha Christie. he/they

Tech: WebAssembly, Emscripten, Binaryen. All opinions here are my own, not my employer's (Google).

More links at: http://kripken.github.io/blog/about/
January 11, 2026 at 11:38 PM
Yes, his predictions are so bad there is an entire wiki page for them:

en.wikipedia.org/wiki/List_of...
List of predictions for autonomous Tesla vehicles by Elon Musk - Wikipedia
January 10, 2026 at 7:48 PM
I think it's likely wrong (and evil & irresponsible of him to assume it!) but at least comprehensible?

We have lots of sci-fi depicting that future (likely where Elon got the idea): robots do the work, so basic needs (food, housing, healthcare, etc.) are all met, for the poor and rich alike.
January 10, 2026 at 7:46 PM
Isn't that risky, though?

I write code for a living, so I don't want to sound like I'm afraid of more people coding! More coders is great. But running code without understanding it feels very bad to me.

(There are sandboxes, so maybe there is a responsible way to do this. I'm just not sure.)
January 10, 2026 at 12:33 AM
Sorry, I did not mean to be rude!

I am very happy you found something that helps you!
January 9, 2026 at 11:54 PM
(Obviously there are some excellent open source codebases to read, and "lessons learned" blog posts about long-term code management, but... how much, and how good?)

I do believe it's possible. But the next step might be much slower.
January 9, 2026 at 8:52 PM
LLMs don't learn like humans, but still, learning small-scale coding is much easier - tons of data, and easy to practice with quick feedback.

For long-term maintenance tradeoffs, though, I'm just not sure how much useful data exists, or how easy it would be to generate if not.
January 9, 2026 at 8:52 PM
Hmm, I kind of disagree with that post. Not just because "trends might change", but this trend specifically:

The data needed to learn how to write a function/feature/application is different from the data needed to write a long-lived codebase for a valuable product.
January 9, 2026 at 8:52 PM
Yes, I agree it is an intuitive finding. Another reason: people losing weight through diet and exercise have gained some beneficial habits, unlike with the drugs.

Still, it's really important data. That 4x difference is higher than I'd expect!
January 9, 2026 at 8:38 PM
You're right that the title is no surprise.

But the sub-header clarifies what they are saying:

"Analysis finds those who stopped using medication saw weight return four times faster compared with other weight loss plans"

So yes, it is no surprise the weight returns. But it returns quite fast.
January 9, 2026 at 7:40 PM
The environment is a larger issue than water, though. And the original topic here was narrowly focused on water.

There are good arguments against AI! It is just that water isn't one of them.
January 5, 2026 at 9:26 PM
If you're saying AI and other things might be bad for *non*-water reasons, I might agree.
January 5, 2026 at 9:24 PM
Why do you find the comparison to golf courses (and the other things in the graph from the main thread) disingenuous?
January 5, 2026 at 9:08 PM
But the data does show that this is, objectively, not a significant use of water (at least not on average).

It is good if young people are more interested in the environment! But repeating false claims does not help, and may hurt if they do it instead of something actually productive.
January 5, 2026 at 8:28 PM
Also: alfalfa, corn, and soybeans, which are very high on that graph, are mainly used as animal feed (90% of soybeans!)

soygrowers.com/key-issues-i...

Even a small shift towards plant-based food would make a big dent in those water numbers, far larger than AI's footprint.
Animal Agriculture & Soy | American Soybean Association
Animal agriculture is the soybean industry's largest customer, as more than 90 percent of U.S. soybeans produced are used as a protein source for animals.
January 2, 2026 at 8:02 PM
See also

arxiv.org/abs/2512.12411

which tempers the Anthropic claims, though it also confirms them to some extent.

My impression is that this is an area of open research.
Feeling the Strength but Not the Source: Partial Introspection in LLMs
Recent work from Anthropic claims that frontier models can sometimes detect and name injected "concepts" represented as activation directions. We test the robustness of these claims. First, we reprodu...
January 2, 2026 at 4:19 PM
Certainly LLMs' confident claims about themselves are not to be trusted.

However, there may be more to this, as very recent research does find signs of introspection:

www.anthropic.com/research/int...
Emergent introspective awareness in large language models
Research from Anthropic on the ability of large language models to introspect
January 2, 2026 at 4:19 PM
I can imagine it being a strength, yeah. But also a weakness, and I wouldn't trust myself to tell the two situations apart.
January 2, 2026 at 12:07 AM
Original full article (well worth reading):

danwang.co/2025-letter/
2025 letter | Dan Wang
Corgis, compute, Cold War; Ecclesiastes; ties; Stendhal; humor; Pascal's Wager; deep infrastructure; Germanic obedience; Texas State Fair
January 1, 2026 at 10:10 PM
Is the first half more magical? Well, yes. But by itself it would feel lacking, incomplete.

The very last pages say something profound, which could not be expressed without the first and second halves of the book, the magical and the mundane - and their interconnectedness.
January 1, 2026 at 10:10 PM