Satya Benson
@satchlj.com
What arguments have you read for taking existential risks from AI seriously?
November 9, 2025 at 6:55 PM
No I haven't
November 9, 2025 at 6:48 PM
What do you feel is relevant?
November 9, 2025 at 6:46 PM
I have now. Without reading it, just from checking out some reviews, this is the sense I get:

She correctly identifies sociological reasons for AI hype, but then uses this hype as a strawman against those who take AGI and recursive self-improvement seriously, rather than engaging with their strongest arguments.
November 9, 2025 at 6:12 PM
None of these arguments are particularly good IMO, but they're as strong as they could be given the space I have here. Some people like other arguments which I find unimpressive, like the idea that since "intelligence" is poorly defined, we can't conceive of anything smarter than humans. (6/6)
November 9, 2025 at 4:13 PM
4. ASI is likely to succeed at pursuing its goals

It's possible there is some security protocol which is robust to arbitrary intelligence levels, or that an ASI could be cognitively impaired in certain narrow ways which give us a handle on control. (5/6)
November 9, 2025 at 4:13 PM
3. If humanity builds ASI, it will have goals which include killing everyone

One could argue that universal morality is independently discoverable by any intelligence, or that there's a large space of possible terminal goals for ASI which don't result in power-seeking as an instrumental goal. (4/6)
November 9, 2025 at 4:13 PM
2. If humanity tries hard, it is likely to succeed at building ASI

This one is IMO the easiest to argue against: you can simply point to all of the current shortcomings of modern AI and claim that some of them (e.g. horrible learning efficiency) will be extremely difficult to overcome. (3/6)
November 9, 2025 at 4:13 PM
1. Humanity will try hard to build ASI

I'm not sure many people think this is implausible, but the best argument would go something like "after a few years of wild spending, investors will see it's not making the kind of returns they expected and spending on compute will drop off." (2/6)
November 9, 2025 at 4:13 PM
I feel like you're being a bit condescending in tone.

But yeah, I've written tens of pages on arguments for and against the claim that AI will cause human extinction soon. I break it up into four pieces. Here's a high-level summary of the steelman arguments against each piece. (1/6)
November 9, 2025 at 4:13 PM
If I knew of compelling reasons why AI takeover is highly implausible, I would be very relieved and happy to focus on lesser risks.
November 9, 2025 at 2:25 AM
In this case I would actually love to believe that AGI/superintelligence in my lifetime is highly implausible, because I think it would be very bad for me or others I care about. My social environment doesn't reward me for being concerned about this; in fact, quite the opposite.
November 9, 2025 at 2:22 AM
I don't think I am fully immune to motivated reasoning and other kinds of bad epistemics, but I am not particularly susceptible either. I maintain critical self-scrutiny in my reasoning and use extra caution with arguments which flatter my priors.
November 9, 2025 at 2:15 AM
Yes I read the entire thing
November 9, 2025 at 1:53 AM
of course this take is the first thing I see when I open Bluesky lmao

I notice that arguments that AI takeover is implausible science fiction are consistently vibes-based and avoid object-level engagement with the core steps of the argument (instrumental convergence, orthogonality, etc.)
November 9, 2025 at 1:51 AM
download.ssrn.com
June 23, 2025 at 12:53 PM
The first crazy step is assuming that because you sort of get why models make some of the mistakes they do, you understand their limitations better than actual ML researchers.
June 19, 2025 at 12:50 PM
(by Shel Silverstein)
April 27, 2025 at 11:11 PM