Dave Kasten
@davekasten.bsky.social
Do what seems cool next.

"You need to learn WHY things work on a starship."-Admiral James T. Kirk
Reposted by Dave Kasten
I misread this image so badly that I can only really hope to illustrate what I saw.

( Also a good excuse for an #art study )
November 23, 2025 at 10:06 PM
Reposted by Dave Kasten
the really interesting thing here is that the way you control and steer the behavior, like down to the fundamental part of training, is how you talk about 1) what to do and 2) why to do it. it is largely a philosophical problem that happens to have an engineering component
i thought this was a joke but it isn't. bsky.app/profile/theo...
November 24, 2025 at 6:22 PM
aaaaaaaaaaaaa this is an annoyingly useful gloss on the Tanakh
no wonder g_d was pissed when they ate that fruit
November 24, 2025 at 6:11 PM
Reposted by Dave Kasten
This is *amazing*. This goes straight to Deceptive Design Hall of Shame.

They made a "take a break" nudge that has no obvious "ok, I'll take a break" affordance. Its three affordances are:

1) Keep chatting (default, highlighted)
2) x out — keeps chatting
3) "This was helpful" — what is this?

🧵
One of the changes that OpenAI has made to make ChatGPT safer is a "take a break" nudge. There's something quite interesting about the design here. Which thing does it make you want to click?
November 24, 2025 at 2:17 PM
Reposted by Dave Kasten
cryptography is a mathematical system for transforming problems which aren't key-management problems into key-management problems
November 22, 2025 at 2:15 AM
Reposted by Dave Kasten
somehow the crown we threw in the gutter is going to be waiting for us to pick it up when we're able
Much like USAID, if the rest of the world wanted to replace what the US has done for the last 80 years, they would have been doing it already
people predicting a "post-American world" are really over their skis
November 21, 2025 at 1:18 PM
Reposted by Dave Kasten
Kash Patel's personal firing of the Pride flag-displaying employee was always absurd and unconstitutional, so I'm glad David Maltinsky is suing, but, wait.

It was a flag FLOWN BY THE FBI that was given to him BY THE FBI?! Come on.

Complaint: storage.courtlistener.com/recap/gov.us...
November 20, 2025 at 9:56 PM
Reposted by Dave Kasten
excuse me only Beatrix Kiddo can do this
November 20, 2025 at 3:40 PM
Yup. I had a trans coworker as a teenager in Ohio under the Bush Administration.
Today is the Transgender Day of Remembrance. Trans people have literally always been here; every time I see someone sneering "oh but there were no trans people when I was in high school," well. I dated a trans girl in high school, and was close friends with a trans guy.
November 20, 2025 at 4:42 PM
Reposted by Dave Kasten
These CRIMINALS keep demanding JURY TRIALS to establish whether they are, in fact, CRIMINALS. We must put an end to this
November 20, 2025 at 4:18 PM
Reposted by Dave Kasten
I feel like I need to keep reiterating that general-purpose chatbots are probably the most complicated of all the possible uses of this modern iteration of AI
November 20, 2025 at 11:41 AM
Reposted by Dave Kasten
French authorities are taking X to court because Grok is now doing straight-up Holocaust denial. Gas chambers intended for disinfection, cyanide residue tiny, ‘story’ persists because of taboo against critical examination. (Full translation in alt text)
November 20, 2025 at 1:03 PM
Reposted by Dave Kasten
if you do not think that your multiple ton hunk of metal which you are flinging down the highway at sixty miles per hour will eventually kill someone you are not qualified to work on any such thing
abolish waymo
November 20, 2025 at 2:04 AM
Yuuuup. Very true in general. We've decided we're racing in no particular direction at max speed, and damn the torpedoes
i thought: as long as i can steer toward the destination, i'm willing, i guess, to hold the rudder no matter how bad the idea to go fast is. then they decided we didn't need a rudder.
Just a lot of "it's going to happen anyways so I might as well be the one to do it." I can't work that way. I need to really believe that my work is going to make the world better with some reasonable confidence
November 18, 2025 at 9:39 PM
Reposted by Dave Kasten
Bluesky, where you can watch me thinking through a moral crisis in public
November 18, 2025 at 8:47 PM
Reposted by Dave Kasten
It really, really does not feel like many of the people pushing AI and robotics to go as fast as possible have a good model of how it's going to go well
November 18, 2025 at 8:44 PM
Reposted by Dave Kasten
i thought: as long as i can steer toward the destination, i'm willing, i guess, to hold the rudder no matter how bad the idea to go fast is. then they decided we didn't need a rudder.
Just a lot of "it's going to happen anyways so I might as well be the one to do it." I can't work that way. I need to really believe that my work is going to make the world better with some reasonable confidence
November 18, 2025 at 9:21 PM
Reposted by Dave Kasten
i am completely comfortable with going fast. i am completely comfortable figuring out where to go as you're going. i recognize that this is risky and weird, but i tend toward high risk tolerance. i should maybe not be in charge due to this.

the dominant paradigm is "not even trying to figure it out"
i thought: as long as i can steer toward the destination, i'm willing, i guess, to hold the rudder no matter how bad the idea to go fast is. then they decided we didn't need a rudder.
Just a lot of "it's going to happen anyways so I might as well be the one to do it." I can't work that way. I need to really believe that my work is going to make the world better with some reasonable confidence
November 18, 2025 at 9:24 PM
Reposted by Dave Kasten
there have been a lot of good papers which have the thesis "AI is bad" and a lot of good papers which have the thesis "AI doesn't work," but there has not been a single good paper with the thesis "AI is bad and it doesn't work," because in every case the first is used as proof of the second.
November 18, 2025 at 5:18 PM
Reposted by Dave Kasten
"this computer program knows english" is weird. like, it's very weird. it was not true until recently. it seems OBVIOUSLY weird to me! i am convinced people are managing to ignore it for weird reasons
November 18, 2025 at 5:51 AM
Reposted by Dave Kasten
for relatively normal people (i.e., we're not assigning reading here) i think the most convincing thing is to just talk to the thing? like, they know english. they just do. it's not close. you can trick them, they're weird, the personalities are ehhh, but they can carry a conversation.
November 18, 2025 at 5:51 AM
Reposted by Dave Kasten
the default left wing position is that it is simultaneously

1) repugnant that people want to own sentient slaves, but
2) laughable to imagine that they could, so not a real concern
This is true and ironic considering that “we might actually make a machine, which is owned by somebody, sentient” is actually an incredible moral argument against AI research
at this point i think a lot of people have negatively polarized themselves into cartesian dualism out of spite
November 18, 2025 at 3:07 AM
Reposted by Dave Kasten
i think this also underrates the way in which reinforcement learning + causal masking influences models towards the use of specific phrasal structures as, essentially, operators. e.g., "it's not X--it's Y" is a way to suppress feature activation X in favor of feature activation Y.
November 17, 2025 at 5:33 PM
Reposted by Dave Kasten
i quit my job working on these things because i concluded i could not ethically continue, but the problem is that the space of criticism is dominated by people who believe that these systems are evil, useless, and cannot be improved, which ironically excludes almost every way in which they are harmful.
November 17, 2025 at 8:52 AM
Reposted by Dave Kasten
Very funny to compare this to the Valve hardware announcement video, and think back to the Amazon guy on LinkedIn who was like, “why can’t we compete with Valve??? It is a mystery!!!”
just saw the worst gaming ad ive ever seen
November 16, 2025 at 3:12 PM