I notice that arguments dismissing AI takeover as implausible science fiction are consistently vibes-based and avoid object-level engagement with the core steps of the argument (instrumental convergence, orthogonality, etc.).
But yeah, I've written tens of pages on arguments for and against the claim that AI will cause human extinction soon. I break the claim into four pieces; here's a high-level summary of the steelman arguments against each piece. (1/6)
I'm not sure many people think this piece is implausible, but the best argument would go something like: "after a few years of wild spending, investors will see it's not making the kind of returns they expected, and spending on compute will drop off." (2/6)
This one is, IMO, the easiest to argue against: you can simply point to the current shortcomings of modern AI and claim that some of them (e.g. horrible learning efficiency) will be extremely difficult to overcome. (3/6)
One could argue that universal morality is independently discoverable by any intelligence, or that there's a large space of possible terminal goals for an ASI which don't result in power-seeking as an instrumental goal. (4/6)
It's possible there is some security protocol which is robust to arbitrary intelligence levels, or that an ASI could be cognitively impaired in certain narrow ways that give us a handle on control. (5/6)
She correctly identifies sociological reasons for AI hype, but uses this hype as a strawman for those who take AGI and recursive self-improvement seriously rather than engaging with their serious arguments. (6/6)