If understanding and causality are fundamentally independent of language (e.g., animals are fully capable of causal reasoning without it), then why are we trying to force these abilities out of large language models? Isn't this approach misguided?