F. Güney
@fguney.bsky.social
1.1K followers 500 following 110 posts
research on computer vision, teaching, and movies. tweets in TR, EN
Reposted by F. Güney
kashyap7x.bsky.social
Our Autonomous Systems workshop series is back for its 3rd edition at CVPR 2025! Remember the queue in Seattle? Please mark the workshop in your CVPR registration so we have enough space this year! opendrivelab.com/cvpr2025/wor...
Reposted by F. Güney
abursuc.bsky.social
Our Workshop on Uncertainty Quantification for Computer Vision goes to @cvprconference.bsky.social this year!
We have a super line-up of speakers and a call for papers.
This is a chance for your paper to shine at #CVPR2025

⏲️ Submission deadline: 14 March
💻 Page: uncertainty-cv.github.io/2025/
fguney.bsky.social
thanks a lot!! any resources (slides, tutorials, etc.) you recommend?
fguney.bsky.social
similarly, if you were teaching deep stereo, how would you structure it, which methods/paradigms should not be skipped?
fguney.bsky.social
if one is teaching correspondence estimation, which methods proposed in recent years* should not be skipped?

*my definition of recent years: since the DL revolution.
fguney.bsky.social
agreed. one of the messages from his talk is to study statistics, computer science, and economics together. I find his perspective on AI very realistic.
also, the way he redirected the discussion on bias during the panel at the end should be studied in textbooks.
eugenevinitsky.bsky.social
Michael Jordan's talk on the multi-agent and micro-econ perspective on AI and how we need a new vision of the future is killer:
www.youtube.com/live/W0QLq4q...
AI, Science and Society Conference - AI ACTION SUMMIT - DAY 1 (YouTube video by IP Paris)
fguney.bsky.social
in a privileged setting, RL with self-play seems to be the answer to planning for driving. impressive to see complex behaviors learned without supervision and zero-shot generalization across benchmarks by GigaFlow. the next question: what about with perception, is it just a matter of computation?
fguney.bsky.social
well maybe there is a wider diamond hall ahead, exploration.
fguney.bsky.social
to me, the surprising part is the involvement of Elon Musk. I cannot stop thinking "I could be working for that man."
fguney.bsky.social
“..unsere heimat wieder gross” (“..our homeland great again”)
I wish my German weren't good enough to understand this, what the hell is happening to Germany 🥹🥹
fguney.bsky.social
We introduce a flexible memory extension mechanism, allowing users to adapt based on FPS, frame count, and other data characteristics. Our model is fast and lightweight, requiring minimal GPU memory. (6/7)
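A minimal sketch of how memory length could be adapted to the input video, in the spirit of this post; the function name and the scaling rule below are illustrative assumptions, not the paper's actual mechanism.

```python
# Illustrative only: a made-up rule for scaling memory length with the input video,
# not the actual Track-On memory extension mechanism.
def memory_sizes_for(fps: float, num_frames: int,
                     base_spatial: int = 12, base_context: int = 48,
                     base_fps: float = 24.0) -> tuple[int, int]:
    scale = fps / base_fps
    spatial = max(4, round(base_spatial * scale))           # short-term buffer grows with FPS
    context = min(num_frames, round(base_context * scale))  # long-term buffer capped by clip length
    return spatial, context

# e.g. a 60 FPS, 300-frame clip gets proportionally longer buffers than a 24 FPS one
spatial_size, context_size = memory_sizes_for(fps=60, num_frames=300)
```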
fguney.bsky.social
Even without bidirectional information flow (unlike offline models), our approach achieves state-of-the-art results among comparable online and offline tracking models across multiple datasets. (5/7)
fguney.bsky.social
Unlike traditional methods relying on full temporal modeling, our model operates causally—processing frames without future information. We introduce two memory modules: (i) Spatial Memory, addressing feature drift; (ii) Context Memory, storing full tracking history. (4/7)
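For intuition, a minimal sketch of two fixed-size memory buffers of the kind described above (a short spatial memory against feature drift, a longer context memory over the tracking history); the class name, buffer sizes, and update rule are my assumptions for illustration, not the actual Track-On design.

```python
import torch

class TrackMemory:
    """Illustrative two-buffer memory (assumed design, not the actual Track-On code)."""

    def __init__(self, spatial_size: int = 12, context_size: int = 48):
        self.spatial_size = spatial_size  # short window of recent per-point features
        self.context_size = context_size  # longer history of past track states
        self.spatial, self.context = [], []

    def update(self, point_feats: torch.Tensor) -> None:
        # point_feats: (N, D) per-point features of the current frame (causal: no future frames)
        self.spatial.append(point_feats)
        self.context.append(point_feats)
        self.spatial = self.spatial[-self.spatial_size:]
        self.context = self.context[-self.context_size:]

    def read(self):
        # stacked buffers, e.g. to serve as keys/values in cross-attention
        return torch.stack(self.spatial), torch.stack(self.context)
```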
fguney.bsky.social
We simply process points as queries in a transformer decoder. Instead of regressing coordinates (as in dominant methods), we treat tracking as a classification problem, selecting the most likely patch per query and refining with local offsets. (3/7)
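A rough sketch of the "classification over patches plus local offset" idea, under my own assumptions about names and shapes (PatchClassifierHead, d_model, etc.); this is not the released Track-On code.

```python
import torch
import torch.nn as nn

class PatchClassifierHead(nn.Module):
    """Illustrative head: pick the most likely patch per query, then refine with a local offset."""

    def __init__(self, d_model: int = 256):
        super().__init__()
        # small MLP predicting a (dx, dy) refinement for the chosen patch
        self.offset_mlp = nn.Sequential(
            nn.Linear(2 * d_model, d_model), nn.ReLU(), nn.Linear(d_model, 2)
        )

    def forward(self, queries, patch_feats, patch_centers):
        # queries:       (B, N, D) point queries from the transformer decoder
        # patch_feats:   (B, P, D) per-patch features of the current frame
        # patch_centers: (B, P, 2) patch center coordinates in pixels
        logits = torch.einsum("bnd,bpd->bnp", queries, patch_feats)  # similarity to every patch
        idx = logits.argmax(dim=-1, keepdim=True)                    # most likely patch per query
        chosen_feat = torch.gather(patch_feats, 1, idx.expand(-1, -1, patch_feats.size(-1)))
        chosen_center = torch.gather(patch_centers, 1, idx.expand(-1, -1, 2))
        offset = self.offset_mlp(torch.cat([queries, chosen_feat], dim=-1))
        return chosen_center + offset, logits  # refined coordinates + logits for a CE loss
```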
fguney.bsky.social
Unlike prior work focused on offline point tracking, we target online tracking on a frame-by-frame basis, making it ideal for real-time, streaming scenarios. At the core of our approach is a simple yet effective transformer-based model. (2/7)
fguney.bsky.social
since Görkay came up with Track-On, we've been waiting for this day:

Track with me, if it's just for the frame, maybe tomorrow, the occlusion will take you away! 🎵🎸😶‍🌫️

Happy to introduce our #ICLR2025 paper
"Track-On: Transformer-based Online Point Tracking with Memory": kuis-ai.github.io/track_on/
fguney.bsky.social
open-source, open-weight, I'll take whichever because, unlike the ones who share nothing, a few at least make an effort and change the game for everyone.
I expect the effect of open releases like Cosmos on physical AI to be huge. we are already trying it. also downloaded DeepSeek, we shall see 🤞
fguney.bsky.social
I fully support your decision 🤗
fguney.bsky.social
I recently started taking walks, you know, for health reasons. but somehow all these walks end up at a cafe where I can have a cake as a reward for the walk 😅
fguney.bsky.social
I want to work closely with Turkish industry, I think we can help each other. they just seem to miss the point, as if they see us as charity*. funny, I feel the same about them after our international collaborations.

*or they see me as "just a girl standing in front of them, asking for money" 😂
fguney.bsky.social
just deactivated it, it really got out of hand.
fguney.bsky.social
I haven't made the full switch from X to bsky yet because the Turkish community is still active there, but so many random accounts have been rt'ing/following me in the last few hours that it might be time.
I think Elon Musk is trying to create artificial traffic with bots. it's pathetic, really.