Bart Wronski 🇺🇦🇵🇸
@bartwr.bsky.social
Engineering, Computer Graphics, Art, DSP, ML
Culture, Techno, Industrial, and Electronic Music.

Research Scientist at NVIDIA.
Ex Google Research, Ex games (Sony, Ubisoft, CD Projekt).
Politically leftist. He/they.

https://linktr.ee/bartwronsk
You change nothing (assuming separate tokenization/detokenization).
Everything is a set of tokens, and a) all tokens interact with all tokens (unless masked), b) who counts as a "neighbor" depends only on content/embeddings!
You can plug in the same model to any problem, and with enough compute, it will work. 2/2
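(A minimal numpy sketch of that point, as my own illustration: one unmasked self-attention layer that is completely agnostic to where its (n, d) token embeddings came from.)

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # embedding width, arbitrary for the sketch
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

def self_attention(tokens: np.ndarray) -> np.ndarray:
    """tokens: (n, d) embeddings. Text tokens, image patches, audio
    frames, points - the layer never knows or cares which."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(d)   # every token attends to every token
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)   # softmax over all tokens
    return w @ v

# 10 "image patches" or 10 "words": identical call, identical model.
print(self_attention(rng.standard_normal((10, d))).shape)  # (10, 64)
```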
January 15, 2026 at 5:29 AM
Transformers are an incredible neural network architecture. So many strong, beneficial characteristics.
But one stands out to explain their success: no matter what problem or domain you work on - audio, image, video, text, time series, point clouds, voxels...
The architecture stays *identical*. 1/2
January 15, 2026 at 5:29 AM
Reposted by Bart Wronski πŸ‡ΊπŸ‡¦πŸ‡΅πŸ‡Έ
Hey I made a new thing - it's called Gradientspace Graph, and it's a C#-based NodeGraph Programming system that also supports inline C# and Python "Code Nodes". The NodeGraph Engine is MIT Open-Source and the Editor is Free. More details here: www.gradientspace.com/tutorials/20...
Gradientspace Graph Beta β€” gradientspace
I have released an initial version of Gradientspace Graph (GSGraph), a new C#-based NodeGraph Programming environment that also supports inline text-coding in C# and Python (and LLM-based CodeNode gen...
www.gradientspace.com
January 13, 2026 at 7:36 PM
Fantastic material! Every student should see it, not only in computer science.
Science is an inherently social and subjective process - if everyone understood it, there would be fewer disappointments, and, paradoxically, more trust in science. ("Failures" of science are our imperfect human failures.)
January 11, 2026 at 1:49 AM
Reposted by Bart Wronski πŸ‡ΊπŸ‡¦πŸ‡΅πŸ‡Έ
cseweb.ucsd.edu/~tzli/novelt...
I gave an internal talk at UCSD last year regarding "novelty" in computer science research. In it I "debunked" some of the myths people seem to have about what counts as good research in computer science these days. People seemed to like it, so I thought I should share.
cseweb.ucsd.edu
January 9, 2026 at 5:21 PM
Reposted by Bart Wronski πŸ‡ΊπŸ‡¦πŸ‡΅πŸ‡Έ
New blog post is finally up: (Ab)using Shader Execution Reordering.

A bit of outside-the-box usage of SER (for better or worse).

debaetsd.github.io/posts/ser/
(Ab)using Shader Execution Reordering - Dieter's Blog
Notes on creative usage of shader execution reordering
debaetsd.github.io
January 8, 2026 at 6:59 PM
It's not a person... but 100% LLM slop. There might not even be a person in the loop, just some agentic experiment. :/
January 9, 2026 at 1:09 AM
But with vibe coding agents, creating a starting point that I can fill with my "real" program takes literally seconds. Then I teach myself as I go while progressing my other tasks. A true game-changer.
So yeah, I expect everything with a UI, other than games and the like, to be pure web tech soon. 4/4
January 7, 2026 at 5:42 AM
As someone not familiar with any of those technologies, I avoided using them; the entry barrier was too high. Could I learn them in a month? Sure, but then I would not be working on what I wanted to achieve. And the knowledge "rots" when I don't use it regularly. 3/N
January 7, 2026 at 5:42 AM
For simple stuff, this is legit N times less work than in any native framework, especially compiled languages. It looks much better out of the box, is easy to style, runs anywhere, and the ecosystem and community support are N times larger.
And I also agree that LLMs/"vibe coding" are the nail in the coffin. 2/N
January 7, 2026 at 5:42 AM
This might annoy some of my colleagues (game developers and low-level tinkerers who get furious about Electron et al. memory, latency, and CPU usage): I agree, and I started to realize it even before vibe coding.
It's not just "fashion" to wrap everything in web frameworks. 1/N
after 2.5 years of vibe coding, my biggest takeaway? native apps are dead. iterating for the web is so much faster, has better tooling, and lower overhead. low-latency, multi-projector, 3d, spatial audio, custom hardwareβ€”ai will continue to have trouble with these.
January 7, 2026 at 5:42 AM
An impostor - literally trained to fool us into thinking it produces coherent, plausible answers. It can look like truth (and sometimes it is!), but it's all about faking.
December 30, 2025 at 4:56 PM
Reposted by Bart Wronski πŸ‡ΊπŸ‡¦πŸ‡΅πŸ‡Έ
I wrote a blog post describing the state of the GPU market and what it means for support of new GPU features
asawicki.info/articles/sta...
Thanks to @asawicki.info for letting me publish on his blog
State of GPU Hardware (End of Year 2025)
asawicki.info
December 29, 2025 at 2:16 PM
In Halide, the schedule = THE algorithm. The rest is closer to declarative programming.
And BTW, this is exactly my belief and point - we won't become obsolete, we will just move to a higher level.
Writing clear constraints, requirements, and definitions (so that AI output is useful) requires extreme expertise.
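(For anyone who hasn't seen Halide, an untested sketch of that split using its Python bindings, on the classic two-pass blur: the top half declares *what* to compute; the schedule at the bottom decides *how*, and that is where all the performance lives.)

```python
import halide as hl

x, y, xi, yi = hl.Var("x"), hl.Var("y"), hl.Var("xi"), hl.Var("yi")
inp = hl.ImageParam(hl.UInt(16), 2)

# "Algorithm": a purely declarative statement of what to compute.
blur_x, blur_y = hl.Func("blur_x"), hl.Func("blur_y")
blur_x[x, y] = (inp[x - 1, y] + inp[x, y] + inp[x + 1, y]) / 3
blur_y[x, y] = (blur_x[x, y - 1] + blur_x[x, y] + blur_x[x, y + 1]) / 3

# Schedule: how to compute it - tiling, vectorization, parallelism.
blur_y.tile(x, y, xi, yi, 256, 32).vectorize(xi, 8).parallel(y)
blur_x.compute_at(blur_y, x).vectorize(x, 8)
```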
December 28, 2025 at 4:21 PM
I'm not saying it outperforms all experts, but that it can in the future, with enough compute. Even math problems and algorithm design can be automated (like the new matrix multiplication algorithm with a marginally better asymptotic bound).
But also, dismissing Halide autotuning is unfair.
December 28, 2025 at 4:21 PM
I've seen this with Halide, which has far less code and far fewer people writing it than RTL (a few dozen people worldwide, a handful of true experts?). The ML autoscheduler was beating the language authors, who are also the world's top experts in this optimization domain.

Again, this is nothing like cheap, scalable LLM slop.
December 28, 2025 at 3:37 PM
IMO hoping that some domain is special and more difficult because fewer people do it is pure copium. We will all be outperformed by agents that can spam, compile, and profile 1000 variants of code in parallel, then iteratively improve and synthesize the best variants.
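(A toy Python sketch of that kind of loop; everything here is hypothetical and for illustration, and a real agent would also mutate and recombine the survivors, then repeat.)

```python
import os, subprocess, tempfile
from concurrent.futures import ThreadPoolExecutor

def benchmark_variant(src: str) -> float:
    # Hypothetical harness: compile one candidate C source from stdin
    # and run it, assuming its main() prints elapsed seconds to stdout.
    with tempfile.TemporaryDirectory() as d:
        exe = os.path.join(d, "a.out")
        subprocess.run(["cc", "-O2", "-x", "c", "-", "-o", exe],
                       input=src.encode(), check=True)
        return float(subprocess.run([exe], capture_output=True,
                                    check=True).stdout)

def search_step(variants: list[str], keep: int = 8) -> list[str]:
    # Compile and profile all candidates in parallel, keep the fastest;
    # an agent would then synthesize new variants from the survivors.
    with ThreadPoolExecutor() as pool:
        times = list(pool.map(benchmark_variant, variants))
    return [src for _, src in sorted(zip(times, variants))[:keep]]
```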
December 28, 2025 at 3:32 PM
As I said, I'm no expert, but those papers and internal models supposedly outperform human experts. I've seen this with CUDA and Halide, where I can confirm the claims.
Cheap models in VS Code that give an answer in a few seconds are nothing like agentic RL experts (which iterate with a profiler).
December 28, 2025 at 3:29 PM
Btw., "we need massive training data" to solve a new domain is a misconception about ML from years ago, when supervised learning reigned. *All* LLMs are capable of "zero-shot learning" - this was the revolution of GPT-2. Add to it GRPO RL - used in all models this year - and you get domain experts.
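(The core trick of GRPO is tiny; a sketch of the idea as I understand it: sample a group of answers per prompt, score them, and use within-group normalized rewards as advantages - no learned value critic needed.)

```python
import numpy as np

def grpo_advantages(rewards):
    # GRPO: normalize rewards within the group of samples drawn for the
    # same prompt - the group statistics replace a learned value critic.
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + 1e-8)

# e.g. four sampled answers to one prompt, scored by a verifier:
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))  # ~[1, -1, -1, 1]
```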
December 28, 2025 at 5:27 AM
I don't know *anything* about RTL, but IIUC it is already happening: research.nvidia.com/publication/...
Things like ML-assisted layout were used in production at Google years ago as well: research.google/blog/chip-de...
(That paper was questioned, but IIUC Google rebutted the criticism and stands by it.)
Spec2RTL-Agent: Automated Hardware Code Generation from Complex Specifications Using LLM Agent Systems | Research
Despite recent progress in generating hardware RTL code with LLMs, existing solutions still suffer from a substantial gap between practical application scenarios and the requirements of real-world RTL...
research.nvidia.com
December 28, 2025 at 5:27 AM
I generally treat anything saying "the US is too x/y/z to have a/b/c" as absolute bullshit and excuse-seeking exceptionalism. Health care? Gun control? Rail? Public transit? All are possible.
Yes, some things might have to change, but we Americans are not special snowflakes; those things are changeable.
December 27, 2025 at 4:55 PM
Reposted by Bart Wronski πŸ‡ΊπŸ‡¦πŸ‡΅πŸ‡Έ
I highly recommend watching this segment, not just because the CBS News execs and the White House didn’t want you to, but because these men were tortured and they deserved to have their voices heard.
!! Here’s a link to full video of the 60 Minutes segment that Bari Weiss killed last minute, via @jasonparis.bsky.social:

is.gd/paU8Ko

(It was uploaded to the Global TV app in Canada, seemingly by accident, and has now been taken down)
December 22, 2025 at 10:46 PM
Reposted by Bart Wronski πŸ‡ΊπŸ‡¦πŸ‡΅πŸ‡Έ
I got the first post about direct lighting material occlusion up on my blog. In it I go over the commonly used micro-occlusion approach and its limitations, and I start digging into micro-shadowing as an improvement.

irradiance.ca/posts/micros...
December 20, 2025 at 9:31 PM