James Tompkin
@jamestompkin.bsky.social
1.3K followers · 230 following · 16 posts
📸 jamestompkin.com and visual.cs.brown.edu 📸
jamestompkin.bsky.social
📷📷=> ↗️? Need 3D scene flow from _two_ images from a single camera, with zero-shot generalization to new scenes? Inference code and model weights are now out for the CVPR 2025 ZeroMSF method: github.com/NVlabs/zero-...
By Yiqing Liang (lynl7130.github.io) with Abhishek Badki, Hang Su, and Orazio Gallo at NVIDIA.
GitHub - NVlabs/zero-msf: [CVPR 2025] ZeroMSF: Zero-shot Monocular Scene Flow Estimation in the Wild
github.com
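For readers new to the task: monocular scene flow asks, for every pixel of the first frame, where its 3D point sits and how it moves to the second frame, predicted from just two RGB images. Below is a minimal sketch of how that output quantity relates to depth and optical flow, assuming known intrinsics K. It is an illustration of the task only, not the ZeroMSF API (ZeroMSF predicts scene flow end-to-end; all names here are my own):

```python
import torch

# Illustrative only: ZeroMSF's actual interface lives at github.com/NVlabs/zero-msf.
def lift_to_scene_flow(depth1, flow2d, depth2, K):
    """Assemble 3D scene flow from two depth maps and a 2D optical flow.

    depth1, depth2: (H, W) depth for frames 1 and 2
    flow2d:         (H, W, 2) optical flow from frame 1 to frame 2
    K:              (3, 3) camera intrinsics
    Returns:        (H, W, 3) 3D displacement of each frame-1 pixel.
    """
    H, W = depth1.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    u, v = u.float(), v.float()

    # Back-project frame-1 pixels to 3D using depth1.
    X1 = torch.stack([(u - cx) / fx * depth1, (v - cy) / fy * depth1, depth1], dim=-1)

    # Follow the 2D flow, then back-project with depth sampled in frame 2.
    u2, v2 = u + flow2d[..., 0], v + flow2d[..., 1]
    # Nearest-neighbour sampling for brevity; a real pipeline would interpolate.
    u2i = u2.round().long().clamp(0, W - 1)
    v2i = v2.round().long().clamp(0, H - 1)
    d2 = depth2[v2i, u2i]
    X2 = torch.stack([(u2 - cx) / fx * d2, (v2 - cy) / fy * d2, d2], dim=-1)

    return X2 - X1  # per-pixel 3D scene flow
```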
jamestompkin.bsky.social
AI for Content Creation workshop @ #CVPR2025 - Grand Ballroom A1 - 4pm - panel on "Open Source in AI and the Creative Industry" - with @magrawala.bsky.social (Stanford), Cherry Zhao (Adobe), Ishan Misra (Meta), and @jonbarron.bsky.social (Google) - go go!
jamestompkin.bsky.social
The AI for Content Creation workshop is kicking off today at #CVPR2025 - Grand Ballroom A1 - with @magrawala.bsky.social, Kai Zhang (Adobe), Charles Herrmann (Google), Mark Boss (Stability AI), Yutong Bai (UC Berkeley), Cherry Zhao (Adobe), Ishan Misra (Meta), and @jonbarron.bsky.social! See you soon!
jamestompkin.bsky.social
Thanks to the org team: @junyanz.bsky.social @lingjieliu.bsky.social Deqing Sun, Lu Jiang, Fitsum Reda, and Krishna Kumar Singh!
jamestompkin.bsky.social
The AI for Content Creation workshop at #CVPR2025 is accepting paper submissions: ai4cc.net Deadline: March 21st 2025, midnight PST. 4-page extended abstracts, 8-page papers, and previously published work (ECCV, NeurIPS, even CVPR)! Many topics 📷📹🎬🎲✒️📃🖼️👗👔🏢 - come spend the day with us!
Reposted by James Tompkin
anaghmalik.bsky.social
📢📢📢 Submit to our workshop on Physics-inspired 3D Vision and Imaging at #CVPR2025!

Speakers 🗣️ include Ioannis Gkioulekas, Laura Waller, Berthy Feng, @shwbaek.bsky.social and Gordon Wetzstein!

🌐 pi3dvi.github.io

You can also just come hang out with us at the workshop @cvprconference.bsky.social!
jamestompkin.bsky.social
ICCV 2025 #ICCV2025 Workshop proposals deadline is tomorrow midnight anywhere on earth! iccv.thecvf.com/Conferences/... If you have any questions, send us an email! The chairs are happy to help. See you in Hawaii? 🏖️
2025 ICCV Call For Workshops
iccv.thecvf.com
jamestompkin.bsky.social
Thanks, but I just twiddle my thumbs - it's all Nick and Aaron : )
jamestompkin.bsky.social
We prioritize simplicity and performance over functionality. As a minimal baseline, our model does only basic image generation, lacking many features required for downstream tasks. Think of it as DCGAN in 2025 rather than something feature-rich like StyleGAN. We hope this helps further GAN research!
jamestompkin.bsky.social
Given the well-behaved loss, we move away from the 2015-ish architecture in StyleGAN and implement G and D with a minimalist yet modern architecture: a simplified ConvNeXt. With the two components combined, we obtain a simple GAN baseline that is stable to train and surpasses StyleGAN performance.
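As a rough sketch of the design family "simplified ConvNeXt" refers to: a generic ConvNeXt-style residual block, i.e. a depthwise convolution followed by an inverted-bottleneck MLP. This is not the exact R3GAN G/D code; widths, kernel sizes, and the normalization choice are my assumptions:

```python
import torch
import torch.nn as nn

class ConvNeXtStyleBlock(nn.Module):
    """Generic ConvNeXt-style residual block: depthwise conv, then an
    inverted-bottleneck pointwise MLP. Illustrative of the design family
    only; see github.com/brownvc/r3gan for the actual architecture."""
    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.GroupNorm(1, dim)  # layer-norm-like normalization over channels
        self.pw1 = nn.Conv2d(dim, expansion * dim, kernel_size=1)
        self.act = nn.GELU()
        self.pw2 = nn.Conv2d(expansion * dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.norm(self.dwconv(x))
        x = self.pw2(self.act(self.pw1(x)))  # inverted bottleneck: expand, act, project
        return residual + x
```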
jamestompkin.bsky.social
To further GAN research, we first improve the GAN loss to alleviate mode dropping and non-convergence. This makes GAN optimization sufficiently easy that we can now discard existing GAN tricks w/o training failure. The dependence on outdated GAN-specific architectures is also eliminated.
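The improved objective here is a relativistic pairing GAN loss regularized with zero-centered gradient penalties on both real and fake samples (R1 and R2). A minimal PyTorch sketch of that recipe follows; variable names and scaffolding are mine, and the real implementation is at github.com/brownvc/r3gan:

```python
import torch
import torch.nn.functional as F

def rpgan_d_loss(D, real, fake, gamma=1.0):
    """Relativistic pairing discriminator loss with zero-centered gradient
    penalties (R1 on reals, R2 on fakes). Assumes (B, C, H, W) image batches.
    Illustrative sketch, not the repository's code."""
    real = real.detach().requires_grad_(True)
    fake = fake.detach().requires_grad_(True)
    d_real, d_fake = D(real), D(fake)

    # Relativistic pairing: real scores only need to beat fake scores.
    loss = F.softplus(d_fake - d_real).mean()

    # R1/R2: penalize the squared gradient norm at real and fake samples.
    (g_real,) = torch.autograd.grad(d_real.sum(), real, create_graph=True)
    (g_fake,) = torch.autograd.grad(d_fake.sum(), fake, create_graph=True)
    r1 = g_real.square().sum(dim=[1, 2, 3]).mean()
    r2 = g_fake.square().sum(dim=[1, 2, 3]).mean()
    return loss + (gamma / 2) * (r1 + r2)

def rpgan_g_loss(D, real, fake):
    # Generator plays the symmetric relativistic objective.
    return F.softplus(D(real.detach()) - D(fake)).mean()
```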
jamestompkin.bsky.social
GANs are often criticized for training instability, and it is often believed that GANs cannot work w/o many engineering tricks. They also rely on outdated network architectures that lack modern backbone advances. These supposed weaknesses led the community to abandon GAN research in favor of diffusion models.
jamestompkin.bsky.social
Can GANs compete in 2025? In 'The GAN is dead; long live the GAN! A Modern GAN Baseline', we show that a minimalist GAN w/o any tricks can match the performance of EDM with half the size and one-step generation - github.com/brownvc/r3gan - work of Nick Huang, @skylion.bsky.social, Volodymyr Kuleshov
jamestompkin.bsky.social
Need evaluation and insight into why monocular dynamic scene reconstruction is difficult, especially with Gaussian splats? Need an apples-to-apples comparison of basic motion models on a scene with controlled camera and object motion? Here you go.
ericzzj.bsky.social
Monocular Dynamic Gaussian Splatting is Fast and Brittle but Smooth Motion Helps

Yiqing Liang, Mikhail Okunev, Mikaela Angelina Uy, Runfeng Li, Leonidas Guibas, @jamestompkin.bsky.social, Adam W. Harley

tl;dr: benchmark for monocular dynamic GS

arxiv.org/abs/2412.04457
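For intuition, a "basic motion model" in this setting gives each Gaussian a simple time-parameterized trajectory for its center. A minimal sketch of one such model, a per-Gaussian polynomial (my illustration, not the benchmark's code):

```python
import torch

def polynomial_trajectory(base_means, coeffs, t):
    """Evaluate a per-Gaussian polynomial motion model at time t in [0, 1].

    base_means: (N, 3) Gaussian centers at t = 0
    coeffs:     (N, K, 3) learned polynomial coefficients per Gaussian
    t:          scalar normalized time
    Returns:    (N, 3) displaced centers; one of many simple motion models
                a monocular dynamic GS benchmark might compare.
    """
    N, K, _ = coeffs.shape
    powers = torch.tensor([t ** (k + 1) for k in range(K)])  # t, t^2, ..., t^K
    return base_means + (coeffs * powers[None, :, None]).sum(dim=1)
```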
jamestompkin.bsky.social
Hey that's us! Let me know if anyone has any questions : )
jamestompkin.bsky.social
But what if you _really_ like reflections? Local Gaussian Density Mixtures updates lumigraphs by optimizing mixtures of per-view volumes for 🌟maximum shine🌟 #SIGGRAPHAsia2024 xchaowu.github.io/papers/lgdm/... First author Xiuchao Wu is graduating soon and is looking for a job!
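For context, the lumigraph idea being updated here is view-dependent blending: colors rendered from nearby input views are mixed with weights that favor views aligned with the novel ray. A tiny sketch of that classic blending step (generic unstructured-lumigraph flavor, not LGDM's per-view Gaussian density mixtures):

```python
import torch

def lumigraph_blend(colors, view_dirs, query_dir, sharpness=10.0):
    """Lumigraph-style view blending for one novel-view ray.

    colors:    (N, 3) color of the ray as seen from each of N input views
    view_dirs: (N, 3) unit viewing directions of those input views
    query_dir: (3,)   unit direction of the novel-view ray
    Returns:   (3,)   blended color, weighted by angular proximity.
    """
    cos_sim = view_dirs @ query_dir               # angular proximity per view
    weights = torch.softmax(sharpness * cos_sim, dim=0)
    return (weights[:, None] * colors).sum(dim=0)
```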
Reposted by James Tompkin
niladridutt.bsky.social
Created a starter pack for researchers working in inverse graphics, 3D vision, and geometry processing.

Would love your help to expand this list!

go.bsky.app/9uEdjzb
Reposted by James Tompkin
chrisoffner3d.bsky.social
Welcome to all new arrivals here on Bluesky! :) Here's a starter pack of people working on computer vision.
go.bsky.app/PkAKJu5
Reposted by James Tompkin
vdeschaintre.bsky.social
Converted my Graphics Research list to a starter pack (not sure what the difference is, though). Let me know who we are missing here :)

Here goes! go.bsky.app/ApQNTt2