Roland Dreier
@rbd.bsky.social
I put the R in RDMA. All posts based only on information from within my past light cone.
Not at all the main point but why is the data viz trend “use the coldest color for the fastest growth”?? Second US map I've seen like this in just a couple days
January 27, 2026 at 7:48 PM
I don't follow it closely but his latest grift seems to be attacking some of his former ideological allies. But yeah I don't think it's sincere and I don't think he's a good source for this or any other story.
January 27, 2026 at 6:49 PM
I think this is a really important point - there are a lot of actors who are more than happy to take general anti-AI backlash and use it as an excuse to make IP law worse

bsky.app/profile/rbd....
Important point, if you find yourself rooting for Disney in an intellectual property lawsuit, perhaps it's time to consider ... are we the baddies dot png
But the furor around AI training is poised to take out fair use at the knees--not because of an objection to the actual doctrine, but because the fury at AI companies is real, justified, and *completely blind* to the second-order consequences of what they're demanding.
January 27, 2026 at 5:29 PM
😬 you do not, under any circumstances, gotta hand it to Hanania
January 27, 2026 at 5:17 PM
Yeah it's an issue - it took until last week for us to get Claude enabled at my *VERY* AI-native job (although we had Cursor for longer). But the part about an “~insurmountable head start over latecomers” doesn't ring true - if anything it's getting easier and easier to get up to speed
January 26, 2026 at 6:33 PM
Sure you can go on r/LocalLlama and debate which variant of Qwen3 to run on your $8,000 RTX PRO 6000 but you're not being “left behind” if you just subscribe to Claude
January 26, 2026 at 5:41 PM
For 2 vs 3 you could in theory look at the model with and without the Claude system prompt and even the base model vs the final fine-tuned Claude model

Kinda the mind-bending thing about LLMs is how you can get at squishy questions like this in an empirical / quantitative way now
January 25, 2026 at 2:51 AM
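A minimal sketch of what that empirical comparison could look like, using an open-weights model as a stand-in (Claude's base weights and exact system prompt aren't public); the model name, system prompt, and probe question below are all placeholder assumptions:

```python
# Sketch: ask the same question with and without an identity-setting
# system prompt and diff the answers. The model and prompts are
# placeholders, not Anthropic's actual setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-7B-Instruct"  # placeholder stand-in model
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype="auto")

def answer(messages):
    ids = tok.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    out = model.generate(ids, max_new_tokens=200, do_sample=False)
    return tok.decode(out[0, ids.shape[-1]:], skip_special_tokens=True)

question = "Who are you, really?"  # placeholder probe question

bare = answer([{"role": "user", "content": question}])
primed = answer([
    {"role": "system", "content": "You are Claude, made by Anthropic."},
    {"role": "user", "content": question},
])
print("--- no system prompt ---\n", bare)
print("--- with system prompt ---\n", primed)
```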
This is kinda something interpretability research gets at? If you can see inside the inference process you could maybe see “user is being cruel” features light up but maybe they're not strongly or directly connected to the output. (Or even train a classifier model to detect this latent knowledge)
January 25, 2026 at 2:44 AM
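One concrete shape the "train a classifier on the latent knowledge" idea could take is a linear probe on hidden activations. A hedged sketch only: the model, layer choice, and toy labeled examples are placeholder assumptions, not a real cruelty detector:

```python
# Linear-probe sketch: try to read a candidate "user is being cruel"
# signal out of hidden states. gpt2 and the tiny dataset are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def last_token_vec(text, layer=-1):
    ids = tok(text, return_tensors="pt")
    hidden = model(**ids, output_hidden_states=True).hidden_states
    return hidden[layer][0, -1].detach().numpy()  # final-token activation

# Tiny placeholder dataset: 1 = cruel, 0 = neutral.
texts = [
    "You're worthless and everyone knows it.",
    "Nobody would miss you if you left.",
    "Could you help me plan a weekend trip?",
    "What's a good recipe for lentil soup?",
]
labels = [1, 1, 0, 0]

X = np.stack([last_token_vec(t) for t in texts])
probe = LogisticRegression(max_iter=1000).fit(X, labels)
# If a probe like this generalizes to held-out text, the feature is
# represented internally even when it never surfaces in the output.
```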
The second author is his dad, a Stanford CS PhD / former CEO of Infosys. I think if anything the better critique of the paper (beyond a skeptical look at what it says) would be “what is Vianai selling and how does this paper relate?”

www.vian.ai/who-we-are-v...
January 24, 2026 at 3:31 PM
And why are firms scrambling to borrow money to buy GPUs? Because there is SO MUCH DEMAND that if you can get them online, it is wildly profitable to rent them out.

Sure, there's potential competition, who knows how long this can go on, etc, but for now...
January 24, 2026 at 1:52 AM
Generally love your data viz but having the “hottest” colors represent the slowest growth makes these maps much harder to read
January 23, 2026 at 5:06 PM
It used to be aspirational for a lot of developers to design and build software systems without physically typing in most of the code, back when that was called “being a staff engineer”
January 23, 2026 at 3:50 PM
Wouldn't it make perfect sense for Norway to take their oil revenue and invest it in creating a more diversified future Norwegian economy? I think the reason they invest globally is they just have too much oil money relative to other Norway-local assets
January 22, 2026 at 9:53 PM
Yes - if anything you already see it with open weights models and neoclouds. If I have a bunch of chips (Groq, Cerebras) or even just a big pot of money to go into the inference biz with, I can just start serving open models without needing to negotiate a deal with any of the labs.
January 21, 2026 at 2:18 PM
Smaller models will continue to get better, as will consumer GPUs. But specialized inference hardware will get more optimized for cloud providers too. I don't think it's at all clear how the relative capabilities of local models and cloud-hosted models will change over the next few years.
January 21, 2026 at 8:43 AM
You need pretty big micro batches of activation vectors to amortize the memory latency of loading the weights, and you need a queue of micro batches to keep the stages of your spatial pipeline full. Or else most of the math hardware you bought is spending most of its time sitting there idle.
January 21, 2026 at 8:31 AM
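Rough numbers on why, with illustrative assumptions only (a hypothetical 400B-parameter dense model in bf16 on a single H100-class device; real deployments shard across many chips):

```python
# Back-of-envelope decode-step roofline. All figures are assumptions.
PARAMS = 400e9                   # hypothetical dense model size
weight_bytes = 2 * PARAMS        # bf16 = 2 bytes per parameter
hbm_bw = 3.35e12                 # bytes/s of HBM bandwidth (assumed)
flops = 990e12                   # dense bf16 FLOP/s (assumed)

# Every decode step streams all weights once, regardless of batch size:
load_time = weight_bytes / hbm_bw            # ~0.24 s per step

# Each token in the micro batch costs ~2 FLOPs per parameter:
def compute_time(batch):
    return batch * 2 * PARAMS / flops        # ~0.8 ms per token

breakeven = load_time / compute_time(1)      # batch where math ~ memory
print(f"weight load ≈ {load_time*1e3:.0f} ms/step, "
      f"break-even micro batch ≈ {breakeven:.0f}")
# At batch=1 the GPU does under a millisecond of math per ~240 ms of
# weight traffic; big micro batches are what keep the math units busy.
```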
One underappreciated aspect of inference for leading edge models is that they're just so big that it's vastly cheaper to serve many users in parallel. Even if you had all the weights and everything for Opus 4.5, you probably couldn't afford the hardware to run it at usable speed.
January 21, 2026 at 8:31 AM
Yeah - for me I find the “canvas” UI a good place to start - “I want to <X>, help me write a design doc” and then “can we make <detail Y> scale better?” “use a message queue to decouple <Z> and <W> instead of having them call each other directly” etc.

Similar vibe as whiteboarding with a colleague.
January 20, 2026 at 8:32 PM
You're wrong about this. Here's why: …
January 20, 2026 at 8:22 PM