Kwaze Kwaze
@kwazekwaze.bsky.social
Enjoys landscapes and logistics.
Candide is better as a musical.
Pinned
"AI is here to stay"
My guy, it's had practical applications for over a decade.

"It's not going away"
Neat, neither is plastic.

"We have to adapt"
Great! Let's regulate and enforce or push for more equitable laws around data rights and labor.

"No not like that"
Reposted by Kwaze Kwaze
Again: we have discussed all of this before. We warned you that things like this were going to happen. We told you that the values embedded in these LLM/GPT-type "AI" systems were a) trash & b) woefully under-examined.

And yet here we are.

And that is also why I'm furious at CBS about this framing
November 15, 2024 at 7:54 PM
Reposted by Kwaze Kwaze
There's no point to the voice-based faux call center scheme from a practical standpoint unless you believe that it's an adequate replacement for human interaction, and since the vast majority of people will never feel that way, you're just being a dick about adding pointless steps
December 26, 2025 at 2:23 AM
Reposted by Kwaze Kwaze
I only pirated your game at the conceptual stages. In every other stage, from installation to playing the game to beating it, there was no piracy involved.
December 23, 2025 at 4:41 AM
They'll be lightly reprimanded and have a slightly lower raise that year but next year be tasked with presenting on their success in coding with AI agents to the entire company.

They will then get a raise for having a company-wide impact.
Somewhere out there right now is a programmer vibe-coding an FDA-regulated medical device, and somewhere else is the unemployed person whose job was to inspect that code and hold it to standards and who was fired by a 19 year old.
December 23, 2025 at 4:20 PM
Reposted by Kwaze Kwaze
I am thinking that a surprising amount of good could be done simply by banning chat interfaces to LLMs. Most of the plausible use-cases that might turn into something good don't use them, basically all the awful ones do.
December 22, 2025 at 4:04 AM
Reposted by Kwaze Kwaze
To imagine that technology is not intimately imbued with the decisions of its creation and use is to perceive it as some kind of abstract mathematical ideal that always already existed outside of time and place, and that your own actions and choices had no agency whatsoever in its creation.
December 21, 2025 at 5:30 PM
Literally no difference between motorcycles and bikes. Two wheels. Metal. Pedals. Ambulation.

Talks and walks like a duck.

You think you're special because you pedal harder? Lol

If you're mad I'm in the bike lane you need to get over yourself, smacks of old person. Motorcycles aren't going away!
December 21, 2025 at 5:26 PM
As always "Anti-AI obsession" = bothering to care.

The type of person being alluded to that "just doesn't understand the tech" isn't pissed off over something like a predictive analytics system from 2019 because they aren't even aware that exists.

And, if they were, said person wouldn't care.
I am frustrated by the anti-AI obsession on this place. I understand people are annoyed by AI being imposed on us for trivial things and by the AI uber alles discourse but it really feels like older people complaining about a new technology.
December 19, 2025 at 6:35 PM
"Human reviewed AI" just means your system doesn't actually need to work so long as you've got a scapegoat.
“…seemingly rather than probe why the images weren’t more carefully reviewed to prevent a false alarm on campus, the school appeared to agree with ZeroEyes and blame the student.”
School security AI flagged clarinet as a gun. Exec says it wasn’t an error.
Human review didn’t stop AI from triggering lockdown at panicked middle school.
arstechnica.com
December 19, 2025 at 4:00 PM
At a certain point of power imbalance playing devil's advocate and granting nuance is just rolling over at best (if not just running defense)

You don't actually gotta hand it to the trillion dollar technofascism project
periodically I'll see a backlash that goes a little too far or seems kinda unfair. i never say anything because i think we need to be kinda unhinged and zealous in our response to this shit. I'd take an honest-to-God crusade against this stuff over letting it proliferate.
We can do even more.
December 18, 2025 at 9:51 PM
Reposted by Kwaze Kwaze
If LLMs hadn't been championed as a hypergrowth tool that would produce God, but instead had been allowed to grow naturally into the use cases it can serve (NLP interfaces, a component of but not the whole of programming workflows), we'd certainly be in a different world. They chose that, though.
December 17, 2025 at 1:41 PM
The stripping of any and all attribution is the most destructive and damning aspect of all gen"AI" products. Regardless of output media.

They're epistemic black holes by design and wouldn't be attractive to these people if they weren't.

Comparisons to Google are at best desperate straw grabbing.
December 17, 2025 at 8:54 PM
Reposted by Kwaze Kwaze
Like man, my career only exists in the way that it does now because a production designer was googling "realistic pokemon" and found me. If he could have just had a machine spit that out, I never would have gotten a job. This is a major issue hitting artist discoverability.
December 17, 2025 at 12:32 AM
Reposted by Kwaze Kwaze
Representation for me, simulacra for thee.

When the argument is "it's better than nothing" one must always ask why "nothing" is the only other option on the table.
I wrote this brief talk on why “augmenting diversity” with LLMs is empirically unsubstantiable, conceptually flawed, and epistemically harmful and a nice surprise to see the organisers have made it public

synthetic-data-workshop.github.io/papers/13.pdf
December 16, 2025 at 1:44 PM
Reposted by Kwaze Kwaze
The “you have to expose yourself to other ideas” crowd sure loves the “a mirror that will change your face while saying what you believe” technology. Weird…
December 10, 2025 at 6:15 PM
If we admit it's possible to simulate a hell for billionaires when they die we must assume every universe eventually produces a hell for billionaires when they die.

Therefore the probability of going to hell when you die approaches 100% for all billionaires.
do the billionaires ruining every aspect of what it means to be alive and a human not know they’re going to hell to be tortured for all of eternity
December 5, 2025 at 8:27 PM
Reposted by Kwaze Kwaze
I'll stop here. But note that these are all cases involving ADULTS outsourcing the basic requirements of their professional responsibilities to these tools of cognitive automation.

Now tell me we need to be pushing this into schools. I fucking dare you.
December 4, 2025 at 2:24 PM
Reposted by Kwaze Kwaze
Here's the good news: though the workers 'making' these AI locs can't do much to stop them, WE, THE CONSUMERS, CAN.

Email them. Cancel your subscription. Call them and complain.
Be a Karen. Tell them they have lost your dollar.

Workers secretly want this. They want consumers to riot.
November 30, 2025 at 5:05 AM
Reposted by Kwaze Kwaze
Really gotta remove all connection to the reality of a thing, huh. It's like the worst aspects of toxic positivity. Never let anything be bad, perfect presentation only, add a beige sheen to all the memories.
November 30, 2025 at 10:05 PM
Woah a thinking machine
Hey @erinbiba.bsky.social I hope this finds you
November 29, 2025 at 5:21 PM
Reposted by Kwaze Kwaze
New article inspired by the recent hoopla surrounding Karen Hao's book (re: water use). I argue that "no one should look to the EA community as exemplifying good habits of epistemic and moral conduct." Here's why: www.realtimetechpocalypse.com/p/how-effect...
How Effective Altruists Use Threats and Harassment to Silence Their Critics
How does the Effective Altruist community respond to critics? With threats and harassment. Even people who still call themselves "Effective Altruists" are afraid to openly criticize the community.
www.realtimetechpocalypse.com
November 28, 2025 at 5:26 PM
No no this is fine actually they're switching away from evaporative cooling and-- wait wrong problem
November 29, 2025 at 1:42 AM
Logical conclusion to all this.

This stuff only appeals to idea guys and idea guys don't really want to create anything. They just wanna say they did.
November 27, 2025 at 12:18 AM
Reposted by Kwaze Kwaze
like, these differences are subtle--a spam filter is a language model, "tuning" and "training" are differences of emphasis rather than kind--but if google is going to obfuscate rather than educate and be transparent they deserve what they get
November 23, 2025 at 1:19 PM
Reposted by Kwaze Kwaze
Instead, corps decided the better business model was to have their LLMs plagiarize the source material, rewording it to give the illusion of possessing intrinsic knowledge. This enabled them to con executives into believing these machines can think and will one day surpass human intelligence. 2/2
November 24, 2025 at 8:25 AM