Andrei Bursuc
@abursuc.bsky.social
5.1K followers 360 following 460 posts
Research Scientist at valeo.ai | Teaching at Polytechnique, ENS | Alumni at Mines Paris, Inria, ENS | AI for Autonomous Driving, Computer Vision, Machine Learning | Robotics amateur ⚲ Paris, France 🔗 abursuc.github.io
Pinned
abursuc.bsky.social
1/ Can open-data models beat DINOv2? Today we release Franca, a fully open-source vision foundation model. Franca with a ViT-G backbone matches (and often beats) proprietary models like SigLIPv2, CLIP, and DINOv2 on various benchmarks, setting a new standard for open-source research.
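For readers who want to try it: a minimal sketch of pulling a Franca backbone and extracting features. The torch.hub entry-point name below is an assumption modeled on DINOv2's interface, not a confirmed API; check the official release for the exact names.

```python
# Hypothetical usage sketch: loading a Franca ViT-G backbone via torch.hub.
# The repo path and entry-point name are assumptions modeled on DINOv2's
# interface; consult the official Franca release for the real ones.
import torch

model = torch.hub.load("valeoai/Franca", "franca_vitg14")  # hypothetical entry point
model.eval()

# ViT-G/14 expects spatial sizes divisible by the patch size (14); 224 works.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    feats = model(x)  # global image embedding for linear probing or retrieval
print(feats.shape)
```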
abursuc.bsky.social
Hear, hear!
eugenevinitsky.bsky.social
Whatever you do, don’t burn out. It’s terrible and it’s hard to fully recover, like a knee injury that makes you too scared to try jumping again
Reposted by Andrei Bursuc
valeoai.bsky.social
“Has anyone heard about DUSt3R?”
All hands and hearts up in the room.
Honored to welcome @gabrielacsurka.bsky.social today to speak about the amazing work at @naverlabseurope.bsky.social towards 3D Foundation Models
abursuc.bsky.social
Not quite my living room, but rather my neighbor’s Louie. He has always been an expert in attracting and welcoming guests at his place(s) 🙃
In this pic it’s the ceiling of Théâtre Montansier in Versailles.
abursuc.bsky.social
Ah, no! I was actually thinking of the value of knowing when and how to stay silent and listen to others instead.
abursuc.bsky.social
The unreasonable effectiveness of mastering the skill of placing <EOS> in real life
abursuc.bsky.social
The outstanding reviewer award is definitely one of my favorite types of awards, as good reviewing does so much to advance our community and research.
Congrats to all the great reviewers out there, whether rewarded this time or not! #iccv2025
valeoai.bsky.social
Congratulations to our lab colleagues who have been named Outstanding Reviewers at #ICCV2025 👏

Andrei Bursuc @abursuc.bsky.social
Anh-Quan Cao @anhquancao.bsky.social
Renaud Marlet
Eloi Zablocki @eloizablocki.bsky.social

@iccv.bsky.social
iccv.thecvf.com/Conferences/...
2025 ICCV Program Committee
iccv.thecvf.com
abursuc.bsky.social
Some night runs seem more like sightseeing than running
Reposted by Andrei Bursuc
csprofkgd.bsky.social
Here we go again 😅 This time I’m planning to take a more senior role to help mentor the next gen of publicity chairs. Please consider volunteering!
cvprconference.bsky.social
#CVPR2026 is looking for Publicity Chairs! The role includes working as part of a team to share conference updates across social media (X, Bluesky, etc.) and answering community questions.

Interested? Check out the self-nomination form in the thread.
Reposted by Andrei Bursuc
mkirchhof.bsky.social
Many treat uncertainty as just a number. At Apple, we're rethinking this: LLMs should output strings that reveal all the information in their internal distributions. We find that reasoning, SFT, and CoT can't do it yet. To get there, we introduce the SelfReflect benchmark.

arxiv.org/pdf/2505.20295
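To make the idea concrete, here is a toy sketch of the question the benchmark asks: does a single self-summary string convey the model's internal answer distribution? Everything here is hypothetical illustration (the `llm_sample` callable is a placeholder, and the real benchmark scores summaries with a judge model rather than substring checks).

```python
# Toy illustration of the SelfReflect idea: compare a model's one-string
# self-summary against its empirical answer distribution. All names are
# hypothetical placeholders, not the benchmark's actual API.
import random
from collections import Counter

def empirical_distribution(llm_sample, question, n=50):
    """Approximate the model's internal distribution by repeated sampling."""
    answers = [llm_sample(question) for _ in range(n)]
    return {a: c / n for a, c in Counter(answers).items()}

def summary_covers(summary, dist, threshold=0.1):
    """Crude check: every non-negligible answer mode should appear in the
    summary (the real benchmark uses a judge model instead of substrings)."""
    return all(ans.lower() in summary.lower()
               for ans, p in dist.items() if p >= threshold)

# Stub sampler standing in for an actual LLM.
dist = empirical_distribution(lambda q: random.choice(["Paris", "Lyon"]), "?", n=20)
print(summary_covers("I'm torn between Paris and Lyon.", dist))
```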
Reposted by Andrei Bursuc
gillespuy.bsky.social
Update: ResearchGate has investigated the case, and, as far as I can see, all the suspicious papers (~200) have now been removed. Many thanks to the @researchgate.bsky.social team!
gillespuy.bsky.social
Discovered that our RangeViT paper keeps being cited in what might be LLM-generated papers. The number of citations has increased rapidly in recent weeks. Too good to be true.

Papers popped up on different platforms, but mainly on ResearchGate with ~80 papers in just 3 weeks.
[1/]
abursuc.bsky.social
A nice side-effect of working with students every day?
Reposted by Andrei Bursuc
valeoai.bsky.social
CoRL 2025 is just around the corner in Seoul, Korea!

🤖 🚗

We're excited to present our latest research and connect with the community.

#CoRL2025
abursuc.bsky.social
In my brain, every time I connect to Overleaf during deadline periods, I see this
abursuc.bsky.social
A new season in full swing with a winning duo: the kids do hockey, dad does Overleaf
abursuc.bsky.social
Parents know that this is a high bar for humans too.
But we’re aiming for super-human, aren’t we? 🙃
abursuc.bsky.social
The French often mention the roundabout at the Arc de Triomphe as one of the ultimate tests for a self-driving vehicle.

I would say that the equivalent for humanoid robots would be unwrapping a Chupa Chups lollipop.

Take that, Embodied AI!
abursuc.bsky.social
No time to shed tears post-NeurIPS when the ICLR deadline is so close ...
abursuc.bsky.social
Congrats Christian et al.!
abursuc.bsky.social
Big congrats! Amazing work and team!
Reposted by Andrei Bursuc
trappmartin.bsky.social
Unfortunately, our submission to #NeurIPS didn't go through, with scores of (5,4,4,3). But because I think it's an excellent paper, I decided to share it anyway.

We show how to efficiently apply Bayesian learning in VLMs, improve calibration, and do active learning. Cool stuff!

📝 arxiv.org/abs/2412.06014
Post-hoc Probabilistic Vision-Language Models
Vision-language models (VLMs), such as CLIP and SigLIP, have found remarkable success in classification, retrieval, and generative tasks. For this, VLMs deterministically map images and text descripti...
arxiv.org
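As a rough illustration of the "post-hoc probabilistic" idea (a generic sketch, not the paper's exact procedure): keep the VLM encoder frozen and add a small head that turns deterministic embeddings into Gaussians, whose variances can drive calibration checks and active-learning acquisition.

```python
# Generic sketch of post-hoc uncertainty over frozen VLM embeddings.
# This illustrates the concept only; see the paper for the actual method.
import torch
import torch.nn as nn

class ProbabilisticHead(nn.Module):
    """Maps a deterministic embedding to a mean and a diagonal variance."""
    def __init__(self, dim):
        super().__init__()
        self.mu = nn.Linear(dim, dim)
        self.log_var = nn.Linear(dim, dim)

    def forward(self, z):
        return self.mu(z), self.log_var(z).exp()

head = ProbabilisticHead(dim=512)
z = torch.randn(8, 512)         # stand-in for frozen CLIP image embeddings
mu, var = head(z)
uncertainty = var.mean(dim=-1)  # e.g., query the most uncertain samples first
```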
abursuc.bsky.social
In recent weeks we noticed that our RangeViT paper was receiving a rapidly growing number of citations from unrelated papers uploaded to ResearchGate and other platforms.
Check out the investigation by @gillespuy.bsky.social into this issue, which may affect other papers too.
gillespuy.bsky.social
Discovered that our RangeViT paper keeps being cited in what might be LLM-generated papers. The number of citations has increased rapidly in recent weeks. Too good to be true.

Papers popped up on different platforms, but mainly on ResearchGate with ~80 papers in just 3 weeks.
[1/]
Reposted by Andrei Bursuc
davidpicard.bsky.social
If you're interested in human pose estimation and mesh recovery from LiDAR data, we have this massive survey: arxiv.org/abs/2509.12197
Salma and Nermin put a tremendous amount of work into it; it has everything: the tasks, all the methods organized, datasets, numbers, challenges, and opportunities.
3D Human Pose and Shape Estimation from LiDAR Point Clouds: A Review
In this paper, we present a comprehensive review of 3D human pose estimation and human mesh recovery from in-the-wild LiDAR point clouds. We compare existing approaches across several key dimensions, ...
arxiv.org