Sophia Sirko-Galouchenko 🇺🇦
@ssirko.bsky.social
200 followers 320 following 7 posts
PhD student in visual representation learning at Valeo.ai and Sorbonne Université (MLIA)
Reposted by Sophia Sirko-Galouchenko 🇺🇦
t-martyniuk.bsky.social
Another great event for @valeoai.bsky.social team: a PhD defense of Corentin Sautier.

His thesis «Learning Actionable LiDAR Representations w/o Annotations» covers the papers BEVContrast (learning self-sup LiDAR features), SLidR, ScaLR (distillation), UNIT and Alpine (solving tasks w/o labels).
Reposted by Sophia Sirko-Galouchenko 🇺🇦
t-martyniuk.bsky.social
So excited to attend the PhD defense of @bjoernmichele.bsky.social at @valeoai.bsky.social! He’s presenting the results of his last three years of research in 3D domain adaptation: SALUDA (unsupervised DA), MuDDoS (multimodal UDA), TTYD (source-free UDA).
ssirko.bsky.social
6/n Benefits 💪
- Under 9h on a single A100 GPU.
- Improves across 6 segmentation benchmarks.
- Boosts performance for in-context depth prediction.
- Plug-and-play for different ViTs: DINOv2, CLIP, MAE.
- Robust in low-shot and domain-shift settings.
ssirko.bsky.social
5/n Why is DIP unsupervised?

DIP doesn't require manually annotated segmentation masks. Instead, it leverages Stable Diffusion (via DiffCut) together with DINOv2R features to automatically construct the in-context pseudo-tasks used for post-training.
ssirko.bsky.social
4/n Meet Dense In-context Post-training (DIP)! 🔄

- Meta-learning inspired: adopts episodic training principles.
- Task-aligned: explicitly mimics downstream dense in-context tasks during post-training.
- Purpose-built: optimizes the model for dense in-context performance.
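A minimal numpy sketch of what episodic, retrieval-aligned post-training could look like, assuming pseudo-labeled patch features are already available. Names like `make_episode` and `episode_loss` are illustrative, not DIP's actual API; the real method would backpropagate such an episode loss into the ViT rather than just evaluate it.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_episode(feats_per_image, labels_per_image, n_prompt=1):
    """Randomly split pseudo-labeled images into a prompt set and a query set."""
    order = rng.permutation(len(feats_per_image))
    cat = lambda idx: (np.concatenate([feats_per_image[i] for i in idx]),
                       np.concatenate([labels_per_image[i] for i in idx]))
    return cat(order[:n_prompt]) + cat(order[n_prompt:])

def episode_loss(prompt_feats, prompt_labels, query_feats, query_labels, tau=0.07):
    """Cross-entropy of a soft nearest-neighbor classifier over prompt patches."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    p = prompt_feats / np.linalg.norm(prompt_feats, axis=1, keepdims=True)
    sim = q @ p.T / tau                               # temperature-scaled cosine sims
    w = np.exp(sim - sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                 # attention over prompt patches
    probs = w @ np.eye(prompt_labels.max() + 1)[prompt_labels]  # per-class scores
    return -np.mean(np.log(probs[np.arange(len(query_labels)), query_labels] + 1e-9))

# Toy data: 3 "images" of 8 patches each, 2 pseudo-classes separable in 4-D features.
feats = [np.vstack([rng.normal(0, 0.2, (4, 4)) + [1, 0, 0, 0],
                    rng.normal(0, 0.2, (4, 4)) + [0, 1, 0, 0]]) for _ in range(3)]
labels = [np.array([0] * 4 + [1] * 4) for _ in range(3)]

pf, pl, qf, ql = make_episode(feats, labels)
loss = episode_loss(pf, pl, qf, ql)
print(loss)  # an episodic trainer would minimize this by updating the backbone
```

Because each episode mimics the downstream prompt/query setup, minimizing this loss directly optimizes the features for dense in-context use.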
ssirko.bsky.social
3/n Most unsupervised (post-)training methods for dense in-context scene understanding rely on self-distillation frameworks with (somewhat) complicated objectives and network components. Hard to interpret, tricky to tune.

Is there a simpler alternative? 👀
ssirko.bsky.social
2/n What is dense in-context scene understanding?

Dense prediction tasks are formulated as nearest-neighbor retrieval problems using patch feature similarities between a query image and labeled prompt images (introduced in @ibalazevic.bsky.social‬ et al.’s HummingBird; figure below from their work).
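The retrieval formulation described above can be sketched in a few lines of numpy. All tensors here are hypothetical stand-ins: `prompt_feats`/`query_feats` would come from a ViT's patch embeddings and `prompt_labels` from the prompt image's per-patch annotations (a minimal sketch, not the HummingBird implementation).

```python
import numpy as np

def in_context_dense_prediction(query_feats, prompt_feats, prompt_labels, k=3):
    """Label each query patch by k-nearest-neighbor retrieval over prompt patches.

    query_feats:   (Nq, D) patch features of the query image.
    prompt_feats:  (Np, D) patch features of the labeled prompt image(s).
    prompt_labels: (Np,)   per-patch class labels of the prompt image(s).
    Returns (Nq,) predicted per-patch labels for the query image.
    """
    # Cosine similarity between every query patch and every prompt patch.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    p = prompt_feats / np.linalg.norm(prompt_feats, axis=1, keepdims=True)
    sim = q @ p.T                                   # (Nq, Np)

    # Take the k most similar prompt patches and vote on their labels.
    topk = np.argsort(-sim, axis=1)[:, :k]          # (Nq, k)
    preds = np.empty(len(query_feats), dtype=prompt_labels.dtype)
    for i, idx in enumerate(topk):
        preds[i] = np.bincount(prompt_labels[idx]).argmax()  # majority vote
    return preds

# Toy example: two well-separated classes in feature space.
rng = np.random.default_rng(0)
prompt_feats = np.vstack([rng.normal(0, 0.1, (5, 4)) + [1, 0, 0, 0],
                          rng.normal(0, 0.1, (5, 4)) + [0, 1, 0, 0]])
prompt_labels = np.array([0] * 5 + [1] * 5)
query_feats = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])
print(in_context_dense_prediction(query_feats, prompt_feats, prompt_labels))
# -> [0 1]
```

No task-specific head is trained: prediction quality depends entirely on how well patch-feature similarity tracks semantic similarity, which is exactly what post-training methods like DIP aim to improve.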
ssirko.bsky.social
1/n 🚀New paper out - accepted at #ICCV2025!

Introducing DIP: unsupervised post-training that enhances dense features in pretrained ViTs for dense in-context scene understanding

Below: Low-shot in-context semantic segmentation examples. DIP features outperform DINOv2!
Reposted by Sophia Sirko-Galouchenko 🇺🇦
paulcouairon.bsky.social
🚀Thrilled to introduce JAFAR—a lightweight, flexible, plug-and-play module that upsamples features from any Foundation Vision Encoder to any desired output resolution (1/n)

Paper : arxiv.org/abs/2506.11136
Project Page: jafar-upsampler.github.io
Github: github.com/PaulCouairon...
Reposted by Sophia Sirko-Galouchenko 🇺🇦
t-martyniuk.bsky.social
Our paper "LiDPM: Rethinking Point Diffusion for Lidar Scene Completion" got accepted to IEEE IV 2025!

tldr: LiDPM enables high-quality LiDAR completion by applying a vanilla DDPM with tailored initialization, avoiding local diffusion approximations.

Project page: astra-vision.github.io/LiDPM/
Reposted by Sophia Sirko-Galouchenko 🇺🇦
davidpicard.bsky.social
🔥🔥🔥 CV Folks, I have some news! We're organizing a 1-day meeting in central Paris on June 6th, before CVPR, called CVPR@Paris (similar to NeurIPS@Paris) 🥐🍾🥖🍷

Registration is open (it's free) with priority given to authors of accepted papers: cvprinparis.github.io/CVPR2025InPa...

Big 🧵👇 with details!
Reposted by Sophia Sirko-Galouchenko 🇺🇦
abursuc.bsky.social
This amazing team ❤️
valeoai.bsky.social
We've just had our annual gathering to get together and brainstorm on new exciting ideas and projects ahead -- stay tuned!
This is also an excellent occasion to fit all team members in a photo 📸
Reposted by Sophia Sirko-Galouchenko 🇺🇦
noagarciad.bsky.social
As I hadn't found one out there yet, I made the Women in Computer Vision starter pack.

Many more are still missing; please let me know who is already on bsky so I can add them!

go.bsky.app/BowzivT