Jon Barron
@jonbarron.bsky.social
3.6K followers 150 following 250 posts
AI researcher at Google DeepMind. Synthesized views are my own. 📍SF Bay Area 🔗 http://jonbarron.info This feed is a partial mirror of https://twitter.com/jon_barron
Pinned
jonbarron.bsky.social
Here's a recording of my 3DV keynote from a couple weeks ago. If you're already familiar with my research, I recommend skipping to ~22 minutes in where I get to the fun stuff (whether or not 3D has been bitter-lesson'ed by video generation models)

www.youtube.com/watch?v=hFlF...
Radiance Fields and the Future of Generative Media
YouTube video by Jon Barron
www.youtube.com
Reposted by Jon Barron
paulgavrikov.bsky.social
Is basic image understanding solved in today’s SOTA VLMs? Not quite.

We present VisualOverload, a VQA benchmark testing simple vision skills (like counting & OCR) in dense scenes. Even the best model (o3) only scores 19.8% on our hardest split.
Reposted by Jon Barron
hpdailyrant.bsky.social
Here’s what I’ve been working on for the past year. This is SkyTour, a 3D exterior tour built with Gaussian Splatting. The UX is in the modeling of the “flight path.” I led the prototyping team that built the first POC, and I was the sole designer and researcher on the project and one of the first inventors.
jonbarron.bsky.social
Ah cool, then why is that last bit true?
jonbarron.bsky.social
I don't see how the last sentence follows logically from the two prior sentences.
jonbarron.bsky.social
Be sure to do a dedication where you thank a ton of people; it's kind, plus it feels good.

Besides that I'd just do a staple job of your papers. Doing new stuff in a thesis is usually a mistake, unless you later submit it as a paper or post it online somewhere. Nobody reads past the dedication.
jonbarron.bsky.social
This thread rules
Reposted by Jon Barron
niessner.bsky.social
🚀🚀🚀Announcing our $13M funding round to build the next generation of AI: 𝐒𝐩𝐚𝐭𝐢𝐚𝐥 𝐅𝐨𝐮𝐧𝐝𝐚𝐭𝐢𝐨𝐧 𝐌𝐨𝐝𝐞𝐥𝐬 that can generate entire 3D environments anchored in space & time. 🚀🚀🚀

Interested? Join our world-class team:
🌍 spaitial.ai

youtu.be/FiGX82RUz8U
SpAItial AI: Building Spatial Foundation Models
YouTube video by SpAItial AI
youtu.be
Reposted by Jon Barron
uoftcompsci.bsky.social
📺 Now available: Watch the recording of Aaron Hertzmann's talk, "Can Computers Create Art?" www.youtube.com/watch?v=40CB...
@uoftartsci.bsky.social
jonbarron.bsky.social
yeah those Fisher kernel models were surprisingly gnarly towards the end of their run.
jonbarron.bsky.social
yep absolutely. Super hard to do, but absolutely the best approach if it works.
jonbarron.bsky.social
If you want you can see the models that AlexNet beat in the 2012 ImageNet competition; they were quite huge. Here's one: www.image-net.org/static_files.... But I think the better thought experiment is to imagine how large a shallow model would have to be to match AlexNet's capacity (very, very huge)
www.image-net.org
jonbarron.bsky.social
One pattern I like (used in DreamFusion and CAT3D) is to "go slow to go fast" --- generate something small and slow to harness all that AI goodness, and then bake that 3D generation into something that renders fast. Moving along this speed/size continuum is a powerful tool.
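[Editor's note: a minimal illustration of the "go slow to go fast" baking pattern described above, using a cheap 1D stand-in for an expensive model. The function names are mine, not from DreamFusion or CAT3D.]

```python
import numpy as np

def slow_field(x):
    """Stand-in for an expensive model (e.g. a big MLP or a diffusion sampler)."""
    return np.sin(4 * x) * np.exp(-x ** 2)

# "Go slow to go fast": pay the cost of the slow model once, on a dense grid...
xs = np.linspace(-2, 2, 1024)
baked = slow_field(xs)

# ...then serve queries with a cheap interpolation of the baked table.
def fast_field(x):
    return np.interp(x, xs, baked)

q = np.array([-0.5, 0.0, 1.3])
print(np.max(np.abs(fast_field(q) - slow_field(q))))  # baking error is tiny
```

The same trade appears at every scale: the baked representation is bigger and dumber than the model that produced it, but queries become table lookups.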
jonbarron.bsky.social
It makes sense that radiance fields trended towards speed --- real-time performance is paramount in 3D graphics. But what we've seen in AI suggests that magical things can happen if you forgo speed and embrace compression. What else is in that lower left corner of this graph?
jonbarron.bsky.social
And this gets a bit hand-wavy, but NLP also started with shallow+fast+big n-gram models, then moved to parse trees etc, and then on to transformers. And yes, I know, transformers aren't actually small, but they are insanely compressed! "Compression is intelligence", as they say.
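[Editor's note: a toy sketch of the "shallow+fast+big" n-gram starting point mentioned above. The corpus and helper are invented for illustration; the point is that the model is just a count table, whose size grows like vocab_size ** n.]

```python
from collections import Counter

# A tiny bigram language model: shallow and fast, but its parameter "table"
# grows as vocab_size ** n, which is why n-gram models were big despite
# being simple.
corpus = "the cat sat on the mat the cat ran".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def prob(w2, w1):
    # P(w2 | w1) by maximum likelihood (no smoothing, for illustration)
    return bigrams[(w1, w2)] / unigrams[w1]

print(prob("cat", "the"))  # "the" is followed by "cat" in 2 of its 3 uses
```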
jonbarron.bsky.social
In fact, it's the *opposite* of what we saw in object recognition. There we started with shallow+fast+big models like mixtures of Gaussians on color, then moved to more compact and hierarchical models using trees and features, and finally to highly compressed CNNs and ViTs.
jonbarron.bsky.social
Let's plot the trajectory of these three generations, with speed on the x-axis and model size on the y-axis. Over time, we've been steadily moving to bigger and faster models, up and to the right. This is sensible, but it's not the trend that other AI fields have been on...
jonbarron.bsky.social
Generation three swapped out those voxel grids for a bag of particles, with 3DGS getting the most adoption (shout out to 2021's pulsar though). These models are larger than grids, and can be tricky to optimize, but the upside for rendering speed is so huge that it's worth it.
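[Editor's note: the core of rendering a bag of particles like 3DGS is front-to-back alpha compositing along a ray; here's a minimal sketch of that step, with made-up colors and alphas, not code from any 3DGS implementation.]

```python
import numpy as np

def composite(colors, alphas):
    """Front-to-back alpha compositing: each depth-sorted particle adds its
    color weighted by its alpha and the transmittance remaining so far."""
    out, transmittance = np.zeros(3), 1.0
    for c, a in zip(colors, alphas):  # particles sorted near-to-far
        out += transmittance * a * c
        transmittance *= 1.0 - a
    return out

colors = np.array([[1.0, 0.0, 0.0],   # near particle: red
                   [0.0, 1.0, 0.0]])  # far particle: green
alphas = np.array([0.6, 0.5])
print(composite(colors, alphas))  # red dominates: [0.6, 0.2, 0.0]
```

Sorting the particles (and differentiating through this blend) is where the trickiness in optimizing these models comes from.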
jonbarron.bsky.social
The second generation was all about swapping out MLPs for a giant voxel grid of some kind, usually with some hierarchy/aliasing (NGP) or low-rank (TensoRF) trick for dealing with OOMs. These grids are much bigger than MLPs, but they're easy to train and fast to render.
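[Editor's note: a back-of-the-envelope sketch of the low-rank trick mentioned above, in the spirit of TensoRF's CP-style factorization; resolutions, rank, and the query helper are illustrative, not from the paper.]

```python
import numpy as np

N, R = 256, 16  # grid resolution per axis, rank of the factorization

# A dense voxel grid of scalar densities needs N^3 parameters.
dense_params = N ** 3

# A CP-style low-rank grid stores the same field as a sum of R outer
# products of per-axis vectors: only 3 * N * R parameters.
lowrank_params = 3 * N * R

u, v, w = (np.random.randn(R, N) for _ in range(3))
def query(i, j, k):
    # Reconstruct one voxel from the factors: sum_r u[r,i] * v[r,j] * w[r,k]
    return np.sum(u[:, i] * v[:, j] * w[:, k])

print(dense_params, lowrank_params)  # 16777216 vs 12288
```

That three-orders-of-magnitude gap is the whole pitch: grid-like training speed without grid-like memory.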
jonbarron.bsky.social
A thread of thoughts on radiance fields, from my keynote at 3DV:

Radiance fields have had 3 distinct generations. First was NeRF: just posenc and a tiny MLP. This was slow to train but worked really well, and it was unusually compressed --- the NeRF was smaller than the images.
jonbarron.bsky.social
Here's Bolt3D: fast feed-forward 3D generation from one or many input images. Diffusion means that generated scenes contain lots of interesting structure in unobserved regions. ~6 seconds to generate, renders in real time.

Project page: szymanowiczs.github.io/bolt3d
Arxiv: arxiv.org/abs/2503.14445
jonbarron.bsky.social
I made this handy cheat sheet for the jargon that 6DOF math maps to for cameras and vehicles. Worth learning if you, like me, are worried about embarrassing yourself in front of a cinematographer or naval admiral.
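[Editor's note: the cheat-sheet image isn't included in this mirror. The table below is a reconstruction from the standard cinematography and naval conventions, not necessarily the exact wording of the original graphic.]

```python
# Mapping each of the 6 degrees of freedom to camera vs. vehicle/naval jargon.
SIX_DOF_JARGON = {
    # motion                     camera term   vehicle / naval term
    "rotate about vertical":    ("pan",       "yaw"),
    "rotate about lateral":     ("tilt",      "pitch"),
    "rotate about forward":     ("roll",      "roll"),
    "translate forward-back":   ("dolly",     "surge"),
    "translate left-right":     ("truck",     "sway"),
    "translate up-down":        ("pedestal",  "heave"),
}

for motion, (camera, vehicle) in SIX_DOF_JARGON.items():
    print(f"{motion:>24}: camera '{camera}', vehicle '{vehicle}'")
```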
jonbarron.bsky.social
It's certainly a shocking result, but I think concluding that "Sora learned 3D consistency" isn't a totally well-founded claim. We have no real idea what any models learn under the hood, and it should be possible for models to produce plausible videos without actually doing anything "in 3D".
jonbarron.bsky.social
But if there's an actual physical conference with talks and posters, there would need to be an actual cutoff date for submissions to be presented at this year's conference. Wouldn't that become the de facto deadline, even in a theoretically rolling system?