opossum caviar
@caviar.0xpossum.dev
they, 199X, dev/hacker/researcher/artist,

opinions expressed are not mine, they're your opinions, and your employer's opinions too. and are 100% correct
A problem is that LLMs can be made to adhere to style guides way more easily than they can be made to generate useful and informative text. I've used lists and formatting for years, and now I suffer from ChatGPT Writes Like Me.
December 15, 2025 at 3:30 PM
There was also that Minecraft video model, which is a Minecraft "clone" entirely in a neural network, which is (1) way more interesting and (2) far more literally "100% AI".

Like, fuck AI and all that, but this is a silly accolade to try to claim
December 14, 2025 at 9:23 PM
This is also silly because there are tons of "fully playable games" created through AI. As much as I am in the "this is silly" camp, you can get something that meets that basic definition from one prompt nowadays
December 14, 2025 at 9:21 PM
I think they were inspired by that "I paused Better Call Saul and thought it was a character study" post.
December 13, 2025 at 10:40 PM
Matrix sequels are supposed to be bad! They ruin that tradition by making a good sequel. What were they thinking???
December 12, 2025 at 10:30 PM
Misunderstood this as "You can log in to PornHub with Spotify on the PlayStation 5" and I thought that was extremely funny
December 10, 2025 at 4:05 PM
The TLDR is: We can be sold technologies which do not exist.

Remember when Tesla sold "fully self driving" cars in 2015, despite that being a technology that did not and does not exist?

We'll be sold "fully automated science" long before that technology is possible.
December 10, 2025 at 4:44 AM
Throughout my work, I naively thought "Oh, the models in this domain have so many outstanding problems! But *once* those are fixed, here is how this can be used."

Turns out it doesn't matter if it *works*, all that matters is if it can be sold on the market.
December 10, 2025 at 4:39 AM
Maybe, someday, we'll have the technology, understanding, and ethical decisionmaking required to automate science.

But what I think is *more* likely is that we won't have any of those things, but we'll kill a culture of science and replace it with something we *call* automated science.
December 10, 2025 at 4:37 AM
My concern is that this would not only centralize even more power in people with bad and counter-productive decisionmaking, but also cement and exacerbate the many existing problems with how academic science is directed, performed, and communicated.
December 10, 2025 at 4:37 AM
Questions like "what questions should we dedicate resources to try to answer" are similar to questions like "what information can we use to better inform our state space search". (And we've kind of run out of ways to answer that with "Bayes")
December 10, 2025 at 4:37 AM
But more concerning is that both science in general and deep learning have some barely-addressed epistemic problems with ad-hoc and arbitrary answers. That's how we get the p=0.05 boundary, or papers *still* being published that benchmark against MNIST (even though we've effectively *as a community* overfit on it)
December 10, 2025 at 4:36 AM
But DL research itself means doing *science* about how to *produce* a model with knowledge. The similarities are pretty blatant.

(This makes things like the NFL theorem have pretty profound implications! But there are more immediate philosophical concerns.)
December 10, 2025 at 4:36 AM
The astrology industry has sunk its wiry claws into my circles and I've seethed for years about it.

There's such a thing as reality! It's important!
December 6, 2025 at 7:37 PM
that's actually pretty much all of them
December 6, 2025 at 5:26 AM