Johan Edstedt
@parskatt.bsky.social
1.6K followers 400 following 1.8K posts
PhD student @ Linköping University. I like 3D vision and training neural networks. Code: https://github.com/parskatt Weights: https://github.com/Parskatt/storage/releases/tag/roma
parskatt.bsky.social
Another way to view interpolation is that your new datapoints should lie in the convex hull of your training data (typically not true for LLMs).
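A minimal sketch of that view (my illustration, not from the thread): testing whether a query point lies in the convex hull of the training points is a feasibility problem over convex-combination weights, which can be solved with a linear program. In high dimensions this test almost always fails, which is the sense in which "interpolation" is a poor description.

```python
# Sketch: check convex-hull membership by solving for weights w >= 0,
# sum(w) = 1, train.T @ w = query (a feasibility LP).
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(train: np.ndarray, query: np.ndarray) -> bool:
    """train: (n, d) training points, query: (d,) test point."""
    n = train.shape[0]
    A_eq = np.vstack([train.T, np.ones((1, n))])   # train.T @ w = query, 1^T w = 1
    b_eq = np.concatenate([query, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.success

rng = np.random.default_rng(0)
train = rng.normal(size=(100, 50))                 # 100 samples in 50-D
print(in_convex_hull(train, rng.normal(size=50)))  # almost certainly False
```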
parskatt.bsky.social
Yes, this seems like a more accurate description.

The distinction between interpolation and extrapolation or compression is a bit diffuse.

For me interpolation is connected with linear combinations of samples, which I don't think is a very accurate description for LLMs.
parskatt.bsky.social
Yes, most of the time (unless RL).
parskatt.bsky.social
Worth going!
gtolias.bsky.social
The Visual Recognition Group at CTU in Prague organizes the 50th Pattern Recognition and Computer Vision Colloquium with
Torsten Sattler, Paul-Edouard Sarlin, Vicky Kalogeiton, Spyros Gidaris, Anna Kukleva, and Lukas Neumann.
On Thursday Oct 9, 11:00-17:00.

cmp.felk.cvut.cz/colloquium/
parskatt.bsky.social
I believe extrapolation (or possibly compression) is in general a much more suitable term than interpolation.
parskatt.bsky.social
The basic way to train a language model is next token (word) prediction.

This is a model for how language is generated (hence LM).
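For concreteness, a minimal sketch of that training objective (my illustration), assuming a PyTorch-style model that maps token ids to next-token logits: the loss is cross-entropy between the prediction at position t and the actual token at position t+1.

```python
# Sketch of the next-token-prediction loss.
import torch
import torch.nn.functional as F

def next_token_loss(model, tokens: torch.Tensor) -> torch.Tensor:
    """tokens: (batch, seq_len) integer token ids."""
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one position
    logits = model(inputs)                           # (batch, seq_len - 1, vocab)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),         # flatten batch and time
        targets.reshape(-1),
    )
```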
parskatt.bsky.social
Hinton, the random plumber of AI
shawnburks.bsky.social
Why are you citing someone from a completely unrelated field as a source? You might as well source some random plumber.
parskatt.bsky.social
LLMs are not autoencoders. Autoencoding implies that the output and the input are the same.
New samples are not drawn by interpolation.
parskatt.bsky.social
From reading the replies here I now understand this
parskatt.bsky.social
Are people still saying AI is useless? I'm in an AI-is-useful echo chamber so I have no idea what's out there.
parskatt.bsky.social
Besides Carta Marina, this place has some other incredible originals (Celsius's temperature measurements, Galileo's descriptions of sunspots, Newton's Principia Mathematica)
parskatt.bsky.social
Went to Carolina Rediviva in Uppsala to see Carta Marina (my thesis cover) in person
parskatt.bsky.social
I'm mostly talking to myself (although I do see this quite often in monodepth papers).
parskatt.bsky.social
A good visualization typically gives you a new perspective, and might reveal limitations in your model that numbers don't show.

I would however be very careful with using this for judging whether one model is "better" than another.
parskatt.bsky.social
I'd like to reiterate this point, but even for outputs with 3 or less channels (xyz, depth, warps, etc).

What you end up selecting for is aesthetic appeal, although that might be good for getting papers accepted...
parskatt.bsky.social
Slightly obvious take:

Good features have more than 3 active PCs per image. You can't really judge whether they're good by visual inspection (unless you have really good hyperspectral vision).
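To make that concrete, a minimal sketch (my illustration): the usual "PCA to RGB" feature visualization keeps only the top 3 principal components, and the explained-variance ratio tells you how much of the feature the picture simply cannot show.

```python
# Sketch: project dense features onto the top 3 PCs for display and report
# how much variance those 3 components actually explain.
import numpy as np

def pca_rgb(features: np.ndarray):
    """features: (h, w, c) dense features for one image."""
    h, w, c = features.shape
    x = features.reshape(-1, c)
    x = x - x.mean(axis=0)
    u, s, vt = np.linalg.svd(x, full_matrices=False)    # rows of vt are PCs
    explained = (s ** 2) / (s ** 2).sum()
    rgb = x @ vt[:3].T                                   # keep top 3 components
    rgb = (rgb - rgb.min(0)) / (rgb.max(0) - rgb.min(0) + 1e-8)  # per-channel normalize
    return rgb.reshape(h, w, 3), explained

# If explained[:3].sum() is far below 1, the RGB image hides most of the feature.
```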
parskatt.bsky.social
Seems you can see the impact of COVID on the curve, and that we still haven't come back to the earlier trajectory.
parskatt.bsky.social
group chat?
if not super personal I just post on bsky and hope people don't get too offended
parskatt.bsky.social
Transformers are nice because they make it less likely that you completely fuck everything up.
parskatt.bsky.social
Architecture search from scratch is terrifying.
Search space is basically infinite, and any of your choices could cripple the model.
parskatt.bsky.social
hehe well there should be a tallest reviewer award, not that I would win, just interested.
parskatt.bsky.social
(slightly better formatted)
iccv.bsky.social
There’s no conference without the efforts of our reviewers. Special shoutout to our #ICCV2025 outstanding reviewers 🫡

iccv.thecvf.com/Conferences/...
2025 ICCV Program Committee
iccv.thecvf.com