This isn’t it. We have got to do better.
“It’s like a GROUP BY in SQL”
instantly clicked, thanks for speaking my language
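For anyone who doesn't speak SQL, the analogy can be sketched in a few lines of plain Python: grouping rows by a key and aggregating each bucket is exactly what GROUP BY does. (The `rows` data here is made up for illustration.)

```python
# Rough sketch of the GROUP BY analogy: bucket rows by a key,
# then aggregate each bucket, as SQL's GROUP BY does.
from collections import defaultdict

rows = [
    {"dept": "eng", "salary": 100},
    {"dept": "eng", "salary": 120},
    {"dept": "ops", "salary": 90},
]

# Equivalent of: SELECT dept, SUM(salary) FROM rows GROUP BY dept
totals = defaultdict(int)
for row in rows:
    totals[row["dept"]] += row["salary"]

print(dict(totals))  # {'eng': 220, 'ops': 90}
```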
Hand made, pasture-raised, grass fed.
Welcome to Downlink.
The issue is dev time: time spent curating, training, evaluating, and deploying.
What if the small models trained automatically?
I’m building this.
I deserve some recognition. I should get to cut the line.
It was 4 episodes too long, way too dark, and completely threw out the idea that it takes DAYS to walk up.
The ending was cool at least.
YOU CANNOT ASK LLMs FOR A CONFIDENCE SCORE. THEY DON’T KNOW WHAT THAT MEANS.
Using MetaMedQA, researchers exposed a gap between AI confidence and accuracy, urging improved metacognition for safe healthcare deployment. 🩺💻 #MLSky
What the hell is your problem.
Also, no one gives a fuck who your daddy was.
In case you’re wondering, it’s taken from an out-of-copyright book: General Zoology by George Shaw, vol. ii, part 2, page 458, published in 1801. archive.org/details/p2ge...
Please don't encourage the VCs. This phrase will launch a thousand bad blog posts.
If you make a lot of API calls to gpt-4 or claude and struggle with latency, I’d love to chat.
For discriminative-style tasks I see a 30% effective speed-up on average. Sometimes much higher.
Would love to hear your custom feed recommendations! Looking for technical content.