@pumppumppump.bsky.social
Same thing Palmer said in the F1TV commentary. It looked more egregious than it was.
November 9, 2025 at 10:35 PM
cough *linux mint*
October 14, 2025 at 7:33 PM
Has Luigi been tried and convicted?
September 12, 2025 at 10:52 PM
Reposted
That we don't know how consciousness comes about does not allow you to jump to whatever conclusion you want or just decide that computation is the only thing that matters. That's ignoring the hard problem, not accounting for it.
September 6, 2025 at 8:18 PM
Reposted
I don’t like the term hallucination either. As you said, it implies a mind, but it also implies a momentary lapse in judgment, or a mistake. These tools are working as intended when they present misinformation. LLMs provide linguistic probability, not facts.
September 1, 2025 at 5:16 PM
Because it's a text prediction engine? LLMs generate convincing text because they’re excellent at predicting word patterns, not because they understand or experience anything. Statistical prediction will not lead to AGI or consciousness.
August 25, 2025 at 11:23 PM
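The "linguistic probability, not facts" point can be made concrete with a toy sketch. This is a made-up bigram counter, nothing like a real LLM's architecture, and the tiny corpus and function names are invented for illustration: the only thing the model "knows" is which token tended to follow which in its data, and truth never enters into it.

from collections import Counter, defaultdict

# Invented toy corpus; tokens are just whitespace-split words.
corpus = "the sky is blue . the sky is vast . the sea is blue .".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word -- no notion of
    meaning or truth, only of what tended to follow in the data."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("is"))   # 'blue' -- the most frequent continuation
print(predict_next("sky"))  # 'is'

Scaled up by many orders of magnitude and trained on far more text, the same principle yields fluent output, which is the post's point: fluency comes from pattern statistics, not from a model of what is true.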