chromecow.bsky.social
@chromecow.bsky.social
OverDrive, Libby and Kanopy are owned by private equity firm KKR, according to Wikipedia. So, stay vigilant.
October 30, 2025 at 9:34 PM
By which I mean LLMs, by which I mean highly tuned personal feedback machines (sweet, sweet dopamine). We are wizards, creating illusions so real we are trapped in them until we die.
So, that's fun.
July 22, 2025 at 6:56 PM
I also think we can survive it, and maybe the mature version (of the internet? Our media literacy?) is something better. But we seem to be getting hit with a lot of Great Filter events all at once. See also: AI
July 22, 2025 at 6:54 PM
And I fully cop to the fact that olds like myself love to panic over new forms of media: photography, radio, television, comic books. But a machine that hooks every monkey brain on the planet instantaneously and then rewards outrage with dopamine...that seems...objectively bad.
July 22, 2025 at 6:52 PM
Imagine Human Boot II: The Reckoning, stomping on a human face, forever.
June 24, 2025 at 5:40 PM
True. I think it's a bigger issue in less artistic spheres. AI for programming, for instance. It becomes impossible to introduce a new programming language, because there's no corpus to train on, and all future workflows are built around AI.
June 24, 2025 at 5:38 PM
Anyway. If anyone wants to help solve the power issue, please invest in my venture to cover the moon with solar panels (light side) and data centers (dark side).

That's right: AI Moon

😁
June 24, 2025 at 5:27 PM
Discoverability is also on my mind. Like Netflix (sorry not sorry), a huge amount of minimally curated slop can be generated and released with no effort (except all that electricity). That makes it harder to find the actual good, human artwork (which includes stuff made with AI tools).
June 24, 2025 at 5:25 PM
Worse if it consumes its own output, even if it can avoid model collapse.
June 24, 2025 at 5:19 PM
The last point about fossilization is one I don't hear much about. The idea being: train an LLM on all the art on the internet. Going forward, artists protect their output by opting out of training sets (let's say). The tool then becomes locked into a particular time-slice of human culture.
June 24, 2025 at 5:19 PM
For example:

- Is there such a thing as an ethically sourced training dataset?
- What are the moral implications of using an unethical training set?
- What are the moral implications of LLM power usage?
- Do LLMs fossilize the culture they sample?
June 24, 2025 at 5:15 PM
Take digital photography. Today it's absolutely seen as a valid art form, and the same tool is also the basis for the panopticon surveillance state. So, this is the angle I like to poke AI from.
June 24, 2025 at 5:14 PM
Agree 100%. Every new artistic technology has an integration curve. Photography was very much not seen as art for a long time, as you allude to.

For me, the more interesting questions are around the morality of the systems.
June 24, 2025 at 5:12 PM
Archduke Ferdinand joins the chat...
June 22, 2025 at 7:30 PM
Due for a rewatch.
June 18, 2025 at 7:38 PM
I'm starting to experiment with simple rules systems. Hoping to run Index Card RPG one of these days.
June 17, 2025 at 7:01 AM
Lead, follow or get out of the way. There are progressive voices that are actually ready to fight for this country.

Literally, what the fuck are you doing for America?
May 14, 2025 at 4:46 AM