Evangelos Kazakos
@ekazakos.bsky.social
680 followers 450 following 630 posts
Postdoctoral researcher @ CIIRC, CTU, Prague working in vision & language. Also robotics noob. PhD from University of Bristol. Ex. Samsung Research (SAIC-C). I love coffee and plants. And socks.
Pinned
ekazakos.bsky.social
Our work, GROVE, has been accepted to ICCV 2025! 🎉 This is a collaboration with Cordelia Schmid & @josef-sivic.bsky.social.

We will release code, models and datasets within the next 2 weeks.

We are also working on a search demo for the proposed datasets with user prompts!

I hope to see you all in Honolulu!
Reposted by Evangelos Kazakos
ducha-aiki.bsky.social
For those going to @iccv.bsky.social, welcome to our RANSAC tutorial in October 2025 with
- Daniel Barath
- @ericbrachmann.bsky.social
- Viktor Larsson
- Jiri Matas
- and me
danini.github.io/ransac-2025-...
#ICCV2025
ekazakos.bsky.social
To keep the tradition, the lineup is 🔥🔥🔥
gtolias.bsky.social
The Visual Recognition Group at CTU in Prague organizes the 50th Pattern Recognition and Computer Vision Colloquium with
Torsten Sattler, Paul-Edouard Sarlin, Vicky Kalogeiton, Spyros Gidaris, Anna Kukleva, and Lukas Neumann.
On Thursday Oct 9, 11:00-17:00.

cmp.felk.cvut.cz/colloquium/
Reposted by Evangelos Kazakos
shahabbakht.bsky.social
Interesting paper suggesting a mechanism for why in-context learning happens in LLMs.

They show that LLMs implicitly apply an internal low-rank weight update adjusted by the context. It's cheap (due to the low rank) but effective for adapting the model's behavior.

#MLSky

arxiv.org/abs/2507.16003
Learning without training: The implicit dynamics of in-context learning
ekazakos.bsky.social
I've added a rule to always include posts from this account in the feed, so there's no need for the tag 😉 But thanks a lot for caring about it, mate! ❤️
Reposted by Evangelos Kazakos
sungkim.bsky.social
ModernVBERT

A vision-language encoder that matches the performance of models 10× its size on visual retrieval tasks!

📄 Paper: arxiv.org/abs/2510.01149
🌐 Blog: huggingface.co/blog/paultlt...
👨‍🍳 Finetuning Cookbook: colab.research.google.com/drive/1bT5LW...
🤗 Models: huggingface.co/ModernVBERT
ekazakos.bsky.social
If you install the ROCm version of PyTorch for AMD GPUs, you can of course use it without any changes to your code and with negligible differences in performance (which is not to say that CUDA performs better).
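A minimal sketch of what this looks like in practice (the wheel index URL and ROCm version below are illustrative assumptions; check pytorch.org/get-started for the current ones):

```shell
# Install the ROCm build of PyTorch (example index URL; the exact
# ROCm version in the path varies by release).
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.2

# Existing CUDA code then runs unchanged: ROCm builds expose AMD GPUs
# through the same torch.cuda API, so this prints True on a supported
# AMD GPU, and device strings like "cuda" keep working as-is.
python3 -c "import torch; print(torch.cuda.is_available())"
```

Because the ROCm build reuses the `torch.cuda` namespace, device-agnostic code such as `torch.device("cuda" if torch.cuda.is_available() else "cpu")` needs no modification.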
ekazakos.bsky.social
Glad to see people doing this! Add the #skyvision tag to your post to make sure it's included in the computer vision feed for @bsky.app. Link to the feed: bsky.app/profile/did:...

Happy to collaborate with anyone interested in improving the feed!
Reposted by Evangelos Kazakos
tom-doerr.bsky.social
PyTorch computer vision toolkit with self-supervised learning, transformers, and detection models
ekazakos.bsky.social
Oh if you give me a beer I can start singing as well
ekazakos.bsky.social
Good, then we agree! 😊
ekazakos.bsky.social
I understand what Andrei is saying. 🙂 I replied to you about subjectivity. You cannot cast it as an RL tree because in real life rewards are subjective. What is rewarding for you might be a loss for me.
ekazakos.bsky.social
Oh I know how to do that very well!
ekazakos.bsky.social
So, conference submissions is our only real life now? 🙃
Reposted by Evangelos Kazakos
cpaxton.bsky.social
Robot installing solar panels
Reposted by Evangelos Kazakos
davidpicard.bsky.social
you have a "world model" that can do many - if not all - tasks.

I also think we've known that since 1961 when Cover and Hart proved that k-NN converges to the Bayes error when n→∞. Turns out that since our world is finite, n doesn't even need to approach ∞.

But is it a good research question? ...
ekazakos.bsky.social
Completely agree about data leakage; it's always the first thing I say when people claim emergence.

But I disagree with the part about treating arXiv papers with minimal respect!
ekazakos.bsky.social
I do occasionally for grammar checking
ekazakos.bsky.social
I prefer a bit trickier access to engaging information that stays rather than easy access to boring information that disappears.
Reposted by Evangelos Kazakos
iaugenstein.bsky.social
Available #NLProc PhD positions:
- Explainable NLU, main supervisor: myself, start in Spring 2026 tinyurl.com/3uset3dm
- Trustworthy NLP, main supervisor: @apepa.bsky.social, start in Spring 2026 tinyurl.com/yxj8yk4m
- Open-topic: express interest via ELLIS, start in Autumn 2026 tinyurl.com/2hcxexyx
Reposted by Evangelos Kazakos
serge.belongie.com
@belongielab.org welcomes applications for a PhD position in CV/ML (fine-grained analysis of multimodal data, 2D/3D generative models, misinformation detection, self-supervised learning) / Apply through the ELLIS portal / Deadline 31-Oct-2025