This first turn on Claude Opus 4.1 is so wrong it might just take burning the rest of humanity's natural gas reserves to fix it.
This is especially true for LLMs that treat their own pretrained memory as a tool for knowledge recall, rather than acting as an orchestrator (most LLMs).
huggingface.co/diffbot/Llam...
Download Diffbot LLM. Run it off your own GPU. Congrats, your on-prem #AI is smarter than #Perplexity.
3. We open sourced Diffbot LLM. Perplexity chose to keep theirs secret.
What IS significant is how we got here vs. Perplexity.
1. Diffbot LLM is a side project. Sonar is Perplexity's entire business.
The next morning, we beat Sonar Pro.
The SimpleQA benchmark they used is open-source and LLM-judged...
24 hours later, it's the 2nd best performing model (and it's not because of #DeepSeek).
Why? 👇
We look forward to building a future of grounded AI with you all.
And we are excited to share that we are releasing Diffbot LLM open source on #Github, with weights available for download on #Huggingface.
github.com/diffbot/diff...
Knowledge is best retrieved at inference, outside of model weights.
Not only is credit provided to publishers; every fact is also independently verifiable.
Naturally, this means Diffbot LLM always provides full attribution to its cited sources.
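The idea above — fetch facts from an external index at inference time and attach a source to each one, rather than trusting model weights — can be sketched in a few lines. This is a toy illustration, not Diffbot's actual pipeline; the index, URLs, and bag-of-words scoring are all made up for the example.

```python
# Toy sketch of inference-time retrieval with per-fact attribution.
# NOT Diffbot's actual pipeline: the mini index, example.org URLs, and
# bag-of-words cosine scoring are illustrative assumptions only.
from collections import Counter
import math

# Hypothetical mini knowledge index: (fact, source URL) pairs.
INDEX = [
    ("The Eiffel Tower is 330 metres tall.", "https://example.org/eiffel"),
    ("Mount Everest is 8,849 metres tall.", "https://example.org/everest"),
    ("The Nile is about 6,650 km long.", "https://example.org/nile"),
]

def _bow(text):
    """Bag-of-words token counts over a lowercased, lightly cleaned string."""
    return Counter(text.lower().strip(".?,").split())

def _cosine(a, b):
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    """Return the top-k (fact, source) pairs for a query."""
    q = _bow(query)
    ranked = sorted(INDEX, key=lambda fs: _cosine(q, _bow(fs[0])), reverse=True)
    return ranked[:k]

def answer(query):
    """Answer from the index, citing the source so the fact is verifiable."""
    fact, source = retrieve(query)[0]
    return f"{fact} [source: {source}]"

print(answer("How tall is the Eiffel Tower?"))
```

A real grounded system would swap the toy index for a live web/knowledge-graph lookup and a proper retriever, but the contract is the same: every returned fact carries the URL it came from.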