Shawn Murphy
@smurp.com
1/ One could ask an LLM to fake finesse, but it is not in the nature of the data structures underlying LLMs nor in their processing model to behave that way. The foundation models are lossy compressions of their training data (i.e., roughly, the web).
I wish people would skip the nonsense and just say this, which is valid criticism, and it's astonishing to me how little visible effort major labs put into fixing it. Stop having ChatGPT write in that authoritative style; have it use markers of epistemic uncertainty and convey doubt.
It may be true (I don't know enough neuroscience to say) that LLMs & human brains use similar techniques to make connections between concepts & learn. But most humans don't speak confidently & coherently about something unless they actually know it. The ones who do... well, we have words for them.
June 19, 2025 at 7:01 PM
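The suggestion above (steering a model toward hedged, uncertainty-marked answers) can be done today at the prompt level. Here is a minimal sketch assuming the OpenAI Python client; the model name and the prompt wording are illustrative assumptions, not a recommendation of any lab's actual approach.

```python
# Sketch: ask a chat model to use markers of epistemic uncertainty
# instead of a uniformly authoritative style. Assumes the OpenAI
# Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical system prompt; the exact wording is an assumption.
HEDGING_SYSTEM_PROMPT = (
    "When you answer, convey your actual confidence. Use markers of "
    "epistemic uncertainty ('I may be wrong', 'as far as I know', "
    "'roughly') wherever your knowledge is incomplete, and say plainly "
    "when you don't know something rather than writing authoritatively."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, swap for any chat model
    messages=[
        {"role": "system", "content": HEDGING_SYSTEM_PROMPT},
        {"role": "user", "content": "How do transformers store factual knowledge?"},
    ],
)
print(response.choices[0].message.content)
```

A prompt like this only changes surface style; it does not make the model's stated confidence track what it actually "knows", which is the harder calibration problem the thread is pointing at.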