Agents are not going to be like chatbots. They are going to be like programming languages come alive.
This has been disproven empirically (consumers capture most surplus value, as always)
So a would-be paperclip maximizer would underinvest in self-improvement
If you don’t do this, everyone hates you. It makes actual thinking impossible.