F. Güney
fguney.bsky.social
research on computer vision, teaching, and movies.
tweets in TR, EN
last year after ECCV in Milano, I joked about how expensive Italy felt, and a few people kindly suggested I explore other parts of the country, so here I am, taking the advice!
November 25, 2025 at 7:28 PM
Fatih is presenting our work Mapping like a Skeptic 🔎 tomorrow at #BMVC2025!

paper: arxiv.org/abs/2508.21689
code: github.com/Fatih-Erdoga...
November 24, 2025 at 8:48 PM
what a nice way to end an amazing 2-day workshop in Grenoble 😊
November 21, 2025 at 9:05 PM
just wanted to clarify that we always try to report results over 3 runs* for online evaluations, as mean and std (certainly for the main results, maybe not all ablations). maybe it was a mistake to show bar plots without mentioning this.
*probably not enough, but still better than reporting only one run.
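the convention above can be sketched in a few lines; the numbers here are purely illustrative, not from any of our papers:

```python
import statistics

# Scores from 3 independent runs of the same model (illustrative numbers only).
runs = [71.2, 70.8, 71.5]

mean = statistics.mean(runs)
std = statistics.stdev(runs)  # sample std (n-1); noisy with 3 runs, but better than 1

print(f"{mean:.1f} ± {std:.1f}")  # → 71.2 ± 0.4, reported as mean ± std
```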
November 21, 2025 at 3:32 PM
Reposted by F. Güney
Andrea Vedaldi talks about 4D reconstruction, leveraging 4D generation, but also representations based on dynamic point maps which encode points for different viewpoints AND different time steps.

AI 4 Robotics workshop at @naverlabseurope.bsky.social
November 20, 2025 at 11:21 AM
Reposted by F. Güney
Fatma Guney on thinking fast and slow for autonomous driving: how to combine bigger models with higher latency and smaller models with lower latency by forecasting rich but stale features into the future.

AI 4 Robotics Workshop at @naverlabseurope.bsky.social
November 20, 2025 at 2:40 PM
EuroHPC wrote a piece about our love of GPUs 😊

A EuroHPC Success Story | Clear Vision for Self-Driving Cars
www.eurohpc-ju.europa.eu/eurohpc-succ...
October 23, 2025 at 12:00 PM
Reposted by F. Güney
Track-On2: Enhancing Online Point Tracking with Memory
By Görkay Aydemir, Weidi Xie, @fguney.bsky.social

TLDR: explicit memory for point tracking, holds decoder features, similar to what MUSt3R does for reconstruction.

arxiv.org/abs/2509.19115
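a toy sketch of the explicit-memory idea from the TLDR, under my own assumptions: a fixed-capacity FIFO buffer of per-frame decoder features. the class name, capacity, and interface are all made up for illustration, not the paper's actual design:

```python
from collections import deque

class FeatureMemory:
    """Fixed-capacity FIFO memory of per-frame features (illustrative toy only)."""

    def __init__(self, capacity: int = 12):
        # deque drops the oldest entry automatically once capacity is exceeded
        self.buffer = deque(maxlen=capacity)

    def write(self, frame_features):
        self.buffer.append(frame_features)

    def read(self):
        # everything currently held in memory, oldest first
        return list(self.buffer)

memory = FeatureMemory(capacity=3)
for t in range(5):
    memory.write(f"features_t{t}")
print(memory.read())  # → ['features_t2', 'features_t3', 'features_t4']
```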
September 24, 2025 at 6:47 AM
Reposted by F. Güney
Our Autonomous Systems workshop series is back for its 3rd edition at CVPR 2025! Remember the queue in Seattle? Please mark the workshop in your CVPR registration so we have enough space this year! opendrivelab.com/cvpr2025/wor...
April 19, 2025 at 5:55 PM
Reposted by F. Güney
Our Workshop on Uncertainty Quantification for Computer Vision goes to @cvprconference.bsky.social this year!
We have a super line-up of speakers and a call for papers.
This is a chance for your paper to shine at #CVPR2025

⏲️ Submission deadline: 14 March
💻 Page: uncertainty-cv.github.io/2025/
February 28, 2025 at 7:28 AM
feel free to nominate yourself or anyone else at KUIS AI talks: forms.gle/nbqhKsWbC9Swoqzi8
KUIS AI Talks Speaker Nomination Form
KUIS AI was established in 2020 jointly by Koç University and Is Bank. With its 17 AI faculty, 23 affiliated faculty from the schools of engineering, medicine, science, and other fields, and over 100 ...
forms.gle
February 18, 2025 at 6:53 AM
agreed. one of the messages from his talk is to study statistics, computer science, and economics together. I find his perspective on AI very realistic.
also, how he changed the direction of the discussion on bias during the panel at the end should be studied in textbooks.
Michael Jordan's talk on the multi-agent and micro-econ perspective on AI and how we need a new vision of the future is killer:
www.youtube.com/live/W0QLq4q...
AI, Science and Society Conference - AI ACTION SUMMIT - DAY 1
YouTube video by IP Paris
www.youtube.com
February 9, 2025 at 9:01 PM
in a privileged setting, RL with self-play seems to be the answer to planning for driving. impressive to see complex behaviors learned without supervision and zero-shot generalization across benchmarks by GigaFlow. the next question is what happens with perception in the loop: is it just a matter of computation?
February 9, 2025 at 12:18 PM
“..unsere Heimat wieder gross” (“..our homeland great again”)
I wish my German weren't good enough to understand this. what the hell is happening to Germany 🥹🥹
February 4, 2025 at 1:26 PM
since Görkay came up with Track-On, we've been waiting for this day:

Track with me, if it's just for the frame, maybe tomorrow, the occlusion will take you away! 🎵🎸😶‍🌫️

Happy to introduce our #ICLR2025 paper
"Track-On: Transformer-based Online Point Tracking with Memory": kuis-ai.github.io/track_on/
Track-On
kuis-ai.github.io
February 3, 2025 at 8:25 AM
open-source, open-weight, I'll take whichever because, unlike the ones who share nothing, a few at least make an effort and change the game for everyone.
I expect the effect of open stuff like Cosmos on physical AI to be huge. we are already trying it. also downloaded DeepSeek, we shall see 🤞
January 30, 2025 at 6:39 AM
I recently started taking walks, you know, for health reasons. but somehow all these walks are directed towards a cafe where I can have a cake as a reward for my walk 😅
January 29, 2025 at 1:07 PM
I want to work closely with the Turkish industry; I think we can help each other. they just seem to miss the point, like they see us as charity*. funny, I feel the same about them after our international collaborations.

*or they see me as "just a girl standing in front of them, asking for money" 😂
January 28, 2025 at 1:55 PM
I haven't made the full switch from X to bsky yet because of the Turkish community that is still active there, but so many random accounts have been rt'ing/following in the last few hours that it might be time.
I think Elon Musk is trying to create artificial traffic with bots. it's pathetic really.
January 27, 2025 at 10:22 AM
AZ praised our work that I presented at VGG. I don’t know why I’m still such a fan girl, I’m certainly not proud, but it made me very happy 🥹 glad he is not on social media, pls don’t tell him 🤫😅
January 25, 2025 at 9:43 AM
#ICLR decisions are out! we have a paper 🥳
I’m not even gonna try to play it cool 😅
our first ICLR paper, super happy for my student Görkay, and we are extremely lucky to work with Weidi on this project.
stay tuned for a very simple online point tracking method with amazing results 🤩
January 22, 2025 at 4:49 PM
I love that something as valuable as NeuroNCAP is open-source, so I'll advertise it: github.com/atonderski/n...

a NeRF-based simulator trained on nuScenes to render new, unseen scenarios and stress-test e2e planners (UniAD and VAD) in closed loop under safety-critical conditions.
GitHub - atonderski/neuro-ncap: NeuroNCAP benchmark for end-to-end autonomous driving
NeuroNCAP benchmark for end-to-end autonomous driving - atonderski/neuro-ncap
github.com
January 17, 2025 at 7:43 AM
Reposted by F. Güney
Excited to share that today our paper recommender platform www.scholar-inbox.com reached 20k users! We hope to reach 100k by the end of the year. Lots of new features are currently being worked on and will be rolled out soon.
January 15, 2025 at 10:03 PM
this is a very positive, nice way of thinking about progress, but today I read a survey about 2024 papers I'd been meaning to read 🥹
things are crazy fast right now; I have a hard time catching up with existing work, let alone contributing my unique perspective 🥲
That is to say, I think it makes little sense to fear what others might do. Do the research that you are interested in, and I am sure it will be relevant, as you bring a unique perspective to the field.
January 15, 2025 at 2:28 PM
for a while, I was very scared that I’d become irrelevant because I chose to live in Turkey. with all the recent progress on the topics that I’d like to work on, I realize that I’d be irrelevant anywhere. well, that’s a relief :/
January 15, 2025 at 8:27 AM