Trust but verify.
Opinions subject to change, beliefs updated according to a learning rate schedule.
We need to understand human cognition more clearly, and cognition and reasoning in general.
Instead of focusing on “human” thinking as the pinnacle, what if we thought about creating more robust forms of cognition?
This should be the AGI goal.
Current LLMs are not perfect, but reasoning strategies and RAG are good approaches to making them more robust.
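For concreteness, here is a minimal sketch of the RAG idea: retrieve the passages most relevant to a question, then condition the model's answer on them. The toy corpus, the bag-of-words retriever, and the `call_llm` stub are illustrative assumptions, not any particular library's API.

```python
# Minimal RAG sketch: retrieve relevant passages, then prepend them to the
# prompt so the model answers from grounded context. All names here
# (CORPUS, call_llm) are hypothetical stand-ins, not a real pipeline.

from collections import Counter
import math

CORPUS = [
    "RAG grounds model outputs in retrieved documents.",
    "Chain-of-thought prompting elicits intermediate reasoning steps.",
    "Learning rate schedules control how quickly parameters are updated.",
]

def bow(text: str) -> Counter:
    """Bag-of-words term counts for a toy lexical retriever."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = bow(query)
    return sorted(CORPUS, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for an actual LLM call (hypothetical)."""
    return f"[model answer conditioned on]\n{prompt}"

def rag_answer(question: str) -> str:
    """Retrieve context, then ask the model to answer using only that context."""
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer using only the context."
    return call_llm(prompt)

print(rag_answer("How does RAG make LLM outputs more robust?"))
```

The point of the sketch is the shape of the loop, not the retriever: in practice the lexical scorer would be swapped for a dense embedding index, but the retrieve-then-generate structure is what makes the output checkable against its sources.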