📢📢 Exciting news!
Our paper, "Exploring Diffusion Transformer Designs via Grafting," has been accepted as an Oral at #NeurIPS2025, with only 77 out of 21k submissions receiving this honor.
📄Paper: arxiv.org/abs/2506.05340
🌎Website: grafting.stanford.edu
🧑🏻💻Code: github.com/keshik6/graf...
Our paper, "Exploring Diffusion Transformer Designs via Grafting," has been accepted as an Oral at #NeurIPS2025, with only 77 out of 21k submissions receiving this honor.
📄Paper: arxiv.org/abs/2506.05340
🌎Website: grafting.stanford.edu
🧑🏻💻Code: github.com/keshik6/graf...
Strefer: our new work on automatically generating instruction data for space–time–focused video tasks (spatiotemporal reasoning, space-time reference understanding, and more) for Video LLMs
✅ Auto & scalable
✅ Fine-grained, space–time–grounded queries
✅ Effective
📄: arxiv.org/abs/2509.03501
🌐: strefer.github.io
Check out a new episode of The AI Research Lab - Explained on Multimodal AI.
Had a blast creating this with the @salesforce.com team!
youtu.be/r98jGdLtO6Q
Congrats Chaitanya on winning the BEST PAPER AWARD 🥇 🏆
Check out details of our work:
arxiv.org/abs/2504.12513
Our first #cvpr2025 poster is up!
🕐 Come check it out right now until 13:00
“AdaVid: Adaptive Video-Language Pretraining”
🪧 ExHall D Poster #203
📝 arxiv.org/abs/2504.12513
Just finished a day at the #CVPR2025 Area Chair workshop. Lots of interesting discussions and ideas, plus time to reconnect with colleagues and friends.
Had the chance to present our ViUniT poster to fellow ACs. If you missed it, come to our Sunday poster session.
See details in the 🧵⬇️
If you're at #CVPR2025, please stop by my posters and say hello! I'd love to chat about our work and all things computer vision. See you in Nashville! 👋
Last but not least, presenting "ViUniT: Visual Unit Tests for More Robust Visual Programming" #CVPR2025
🗓️ Sun Jun 15, 10:30AM-12:30PM
📍 ExHall D Poster #346
🔗 Paper: arxiv.org/abs/2412.08859
📝 Blog: www.niebles.net/blog/2025/vi...
#VisualProgramming #RobustAI
Next, "Re-thinking Temporal Search for Long-Form Video Understanding" #CVPR2025
🗓️ Fri Jun 13, 4PM-6PM
📍 ExHall D Poster #306
🔗 Paper: arxiv.org/abs/2504.02259
🌐 Website: longvideohaystack.github.io
💻 Code: github.com/LongVideoHay...
📊 Data: huggingface.co/datasets/LVH...
#VideoUnderstanding
I'll also be presenting multiple papers at #CVPR2025! First up: "AdaVid: Adaptive Video-Language Pretraining".
🗓️ Thu Jun 12, 12:00PM-1:00PM
📍 ExHall D Poster #202
🔗 Paper: arxiv.org/abs/2504.12513
🌐 Website: chaitanya100100.github.io/AdaVid/
#VideoLanguage #Pretraining
Kicking things off on June 11th by participating in the #CVPR2025 Area Chair workshop! Eager to connect with fellow ACs and colleagues. Let's make this an impactful conference!
Excited to attend #CVPR2025 in Nashville! 🤠 Looking forward to a fantastic week of cutting-edge computer vision research and connecting with the community.
@cvprconference.bsky.social
Read the full post for more details: "Level up your Agents: Teaching Vision-Language Models to Play by the Rules".
blog: www.niebles.net/blog/2025/vl...
arxiv: arxiv.org/abs/2505.03181
Work with Jake Grigsby, Michael Ryoo and Yuke Zhu
#AI #MachineLearning #DeepLearning
This RL approach effectively aligns VLMs with the demands of interactive decision-making. It's a powerful new pathway for developing more capable and adaptable visual agents using readily available VLM tech.
We tested our approach on PaliGemma, xGen-MM, and MoonDream2 across Gym Cards, BabyAI, and MiniWoB. Results? Substantial improvements in valid action syntax accuracy and task success rates, even starting from noisy data!
This approach works great for offline-to-online fine-tuning, learning from static datasets (even random actions!) and then smoothly transitioning to online learning where the agent gathers new data to refine its policy. Self-improvement is key!
AFSFT helps VLMs overcome challenges like strict action syntax and suboptimal data. It learns from demonstrations and filters out tokens that would lead to invalid syntax or poor choices, even penalizing invalid syntax.
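For the curious: that filtering step can be pictured as a masked cross-entropy objective. The sketch below is my illustrative reading, not the paper's actual code; the afsft_loss function, the advantage threshold, and the validity mask are assumed names and shapes.

```python
import torch.nn.functional as F

def afsft_loss(logits, target_ids, advantages, valid_mask, threshold=0.0):
    # logits:     (batch, seq, vocab) VLM outputs over action tokens
    # target_ids: (batch, seq) long tensor of (possibly noisy) demo tokens
    # advantages: (batch, seq) advantage estimates from a value baseline
    # valid_mask: (batch, seq) 1.0 where the token keeps the action
    #             syntactically valid, 0.0 otherwise
    ce = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        target_ids.reshape(-1),
        reduction="none",
    ).reshape(target_ids.shape)

    # Imitate only tokens whose estimated advantage clears the threshold
    # AND that respect the action syntax; everything else is filtered out.
    keep = (advantages > threshold).float() * valid_mask
    return (ce * keep).sum() / keep.sum().clamp(min=1.0)
```

The key departure from plain SFT is the keep mask: low-advantage or syntax-breaking tokens contribute nothing to the gradient, which is how this kind of objective can tolerate noisy or even random demonstration data.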
Enter Reinforcement Learning (RL)! Our paper introduces an "offline-to-online" RL technique called Advantage-Filtered Supervised Fine-Tuning (AFSFT) that allows VLMs to learn through trial and error, improving even with imperfect initial data.
Traditional supervised fine-tuning (SFT) has limits – it can't go beyond its training data, and imperfect datasets mean replicating flaws. What if we don't have perfect examples or a good initial VLM?
The catch? VLMs can struggle with the precise rules and structured outputs many agent tasks require, unlike LLMs which excel at function calling and specific syntax. Think describing a button vs. knowing the exact command to click it.
Large Language Models (LLMs) are great for agents, but what happens when we give them "eyes"? VLMs extend this power to process visual info, opening up new possibilities like robotic control and automating tasks by "seeing" your screen.
Just dropped a new blog post: "Level up your Agents: Teaching Vision-Language Models to Play by the Rules"! We're exploring how to make Vision-Language Models (VLMs) even smarter at interactive tasks.
blog: www.niebles.net/blog/2025/vl...
arxiv: arxiv.org/abs/2505.03181
#multimodalAI #agents #VLM
Check out this great intro to Large Action Models, the key engine powering the AI Agent revolution. 🤖
By @salesforce.com AI Research’s Shelby Heinecke.
See video here:
youtube.com/watch?v=vlvv...
Reposted by: Juan Carlos Niebles
@salesforce.com #AI Research has a new series called "AI Explained."
🎬 "The AI Research Lab - Explained" debuts with our groundbreaking work on Large Action Models! Sr. Mgr Shelby Heinecke reveals how we're training these specialized models to generate precise, executable actions. t.co/XLhlN2EZyk
🎬 "The AI Research Lab - Explained" debuts with our groundbreaking work on Large Action Models! Sr. Mgr Shelby Heinecke reveals how we're training these specialized models to generate precise, executable actions. t.co/XLhlN2EZyk
Reposted by: Juan Carlos Niebles
Behind every great conference is a team of dedicated reviewers. Congratulations to this year’s #CVPR2025 Outstanding Reviewers!
cvpr.thecvf.com/Conferences/...
Will AI be a "bicycle for the mind" boosting our creativity, or could it overshadow our own abilities? 🤔
📝 My latest blog explores this fascinating question!
Read more here: www.niebles.net/blog/2025/cr...
#AI #creativity #artificialintelligence
You are lucky. I still need to chase reviewers and have had to assign 12 emergency reviews for my pile!
With AI models trained on colossal datasets, does the traditional concept of “generalization” (performing well on *unseen* data) still hold?
My latest blog explores this critical question. Join the discussion! #AI #MachineLearning #Generalization
www.niebles.net/blog/2025/ga...