Juan Carlos Niebles
@jcniebles.bsky.social
530 followers 58 following 70 posts
Computer Vision, MultiModal AI Agents, Video AI · Research Director at salesforceairesearch.com · Adjunct Professor at cs.stanford.edu & svl.stanford.edu · 🔗 www.niebles.net
Pinned
jcniebles.bsky.social
📢📢 Exciting news!

Our paper, "Exploring Diffusion Transformer Designs via Grafting," has been accepted as an Oral at #NeurIPS2025, with only 77 out of 21k submissions receiving this honor.

📄Paper: arxiv.org/abs/2506.05340
🌎Website: grafting.stanford.edu
🧑🏻‍💻Code: github.com/keshik6/graf...
jcniebles.bsky.social
Strefer: our new work on auto-generating instruction data for space–time-focused video tasks (spatiotemporal reasoning, space–time reference understanding, etc.) for Video LLMs

✅ Auto & scalable
✅ Fine-grained, space–time–grounded queries
✅ Effective

📄: arxiv.org/abs/2509.03501
🌐: strefer.github.io
Strefer: Empowering Video LLMs with Space-Time Referring and Reasoning via Synthetic Instruction Data
Next-generation AI companions must go beyond general video understanding to resolve spatial and temporal references in dynamic, real-world environments. Existing Video Large Language Models (Video LLM...
jcniebles.bsky.social
Check out a new episode of "The AI Research Lab - Explained," this time on Multimodal AI.

Had a blast creating this with the @salesforce.com team!

youtu.be/r98jGdLtO6Q
What is Multimodal AI? | The AI Research Lab - Explained
jcniebles.bsky.social
Congrats, Chaitanya, on winning the BEST PAPER AWARD 🥇 🏆

Check out details of our work:

arxiv.org/abs/2504.12513
jcniebles.bsky.social
Our first #cvpr2025 poster is up!

🕐Come check it out right now until 13:00

“AdaVid: Adaptive Video-Language Pretraining”

🪧ExHall D Poster # 203

📝 arxiv.org/abs/2504.12513
jcniebles.bsky.social
Just finished a day at the #CVPR2025 Area Chair workshop. Lots of interesting discussions and ideas, and great to reconnect with colleagues and friends.

Had the chance to present our ViUnit poster to fellow ACs. If you missed it, come to our Sunday poster session.

See details in the 🧵⬇️
jcniebles.bsky.social
If you're at #CVPR2025, please stop by my posters and say hello! I'd love to chat about our work and all things computer vision. See you in Nashville! 👋
jcniebles.bsky.social
Kicking things off on June 11th by participating in the #CVPR2025 Area Chair workshop! Eager to connect with fellow ACs and colleagues. Let's make this an impactful conference!
jcniebles.bsky.social
Excited to attend #CVPR2025 in Nashville! 🤠 Looking forward to a fantastic week of cutting-edge computer vision research and connecting with the community.
@cvprconference.bsky.social
jcniebles.bsky.social
This RL approach effectively aligns VLMs with the demands of interactive decision-making. It's a powerful new pathway for developing more capable and adaptable visual agents using readily available VLM tech.
jcniebles.bsky.social
We tested our approach on PaliGemma, xGen-MM, and MoonDream2 across Gym Cards, BabyAI, and MiniWoB. Results? Substantial improvements in valid action syntax accuracy and task success rates, even starting from noisy data!
jcniebles.bsky.social
This approach works great for offline-to-online fine-tuning, learning from static datasets (even random actions!) and then smoothly transitioning to online learning where the agent gathers new data to refine its policy. Self-improvement is key!
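A minimal sketch of that offline-to-online schedule (all names here, such as ReplayBuffer, policy.update, and policy.rollout, are hypothetical stand-ins for illustration, not the paper's actual interfaces):

```python
import random

class ReplayBuffer:
    """Toy trajectory store: seeded with static data, grown online."""
    def __init__(self):
        self.trajectories = []

    def add(self, traj):
        self.trajectories.append(traj)

    def sample(self, k=4):
        return random.sample(self.trajectories, min(k, len(self.trajectories)))

def offline_to_online(policy, offline_data, env, offline_epochs=3, online_iters=100):
    buffer = ReplayBuffer()
    for traj in offline_data:        # seed with the static (possibly noisy) dataset
        buffer.add(traj)

    for _ in range(offline_epochs):  # phase 1: offline fine-tuning
        policy.update(buffer.sample())

    for _ in range(online_iters):    # phase 2: the agent gathers its own data
        buffer.add(policy.rollout(env))
        policy.update(buffer.sample())
```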
jcniebles.bsky.social
AFSFT helps VLMs overcome challenges like strict action syntax and suboptimal data. It learns from demonstrations while filtering out tokens that would lead to poor choices, and it penalizes invalid action syntax.
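To make that filtering concrete, here is a minimal PyTorch sketch of an advantage-filtered SFT loss. This is my own illustration under assumptions: names like afsft_loss and valid_syntax_mask are hypothetical, and the paper's actual loss may differ.

```python
import torch
import torch.nn.functional as F

def afsft_loss(logits: torch.Tensor,             # (T, V) policy logits
               target_ids: torch.Tensor,         # (T,)   demonstration tokens
               advantages: torch.Tensor,         # (T,)   estimated per-token advantages
               valid_syntax_mask: torch.Tensor,  # (T, V) 1 = keeps action syntax valid
               invalid_penalty: float = 1.0) -> torch.Tensor:
    # Token-level cross-entropy against the demonstration.
    ce = F.cross_entropy(logits, target_ids, reduction="none")      # (T,)

    # Advantage filter: only imitate tokens judged better than baseline.
    keep = (advantages > 0).float()
    sft_term = (keep * ce).sum() / keep.sum().clamp(min=1.0)

    # Penalize probability mass placed on syntax-breaking tokens.
    probs = logits.softmax(dim=-1)
    invalid_mass = (probs * (1.0 - valid_syntax_mask)).sum(dim=-1)  # (T,)

    return sft_term + invalid_penalty * invalid_mass.mean()
```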
jcniebles.bsky.social
Enter Reinforcement Learning (RL)! Our paper introduces an "offline-to-online" RL technique called Advantage-Filtered Supervised Fine-Tuning (AFSFT) that allows VLMs to learn through trial and error, improving even with imperfect initial data.
jcniebles.bsky.social
Traditional supervised fine-tuning (SFT) has limits: it can't go beyond its training data, and imperfect datasets mean replicating their flaws. What if we don't have perfect examples or a good initial VLM?
jcniebles.bsky.social
The catch? VLMs can struggle with the precise rules and structured outputs many agent tasks require, unlike LLMs which excel at function calling and specific syntax. Think describing a button vs. knowing the exact command to click it.
jcniebles.bsky.social
Large Language Models (LLMs) are great for agents, but what happens when we give them "eyes"? VLMs extend this power to process visual info, opening up new possibilities like robotic control and automating tasks by "seeing" your screen.
jcniebles.bsky.social
Just dropped a new blog post: "Level up your Agents: Teaching Vision-Language Models to Play by the Rules"! We're exploring how to make Vision-Language Models (VLMs) even smarter at interactive tasks.

blog: www.niebles.net/blog/2025/vl...

arxiv: arxiv.org/abs/2505.03181
#multimodalAI #agents #VLM
jcniebles.bsky.social
Check out this great intro to Large Action Models, the key engine powering the AI Agent revolution. 🤖

By @salesforce.com AI Research’s Shelby Heinecke.

See video here:
youtube.com/watch?v=vlvv...
What Are Large Action Models? | The AI Research Lab - Explained
Reposted by Juan Carlos Niebles
baxterkb.bsky.social
@salesforce.com #AI Research has a new series called "AI Explained."
🎬 "The AI Research Lab - Explained" debuts with our groundbreaking work on Large Action Models! Sr. Mgr Shelby Heinecke reveals how we're training these specialized models to generate precise, executable actions. t.co/XLhlN2EZyk
Reposted by Juan Carlos Niebles
cvprconference.bsky.social
Behind every great conference is a team of dedicated reviewers. Congratulations to this year’s #CVPR2025 Outstanding Reviewers!

cvpr.thecvf.com/Conferences/...