andrew haun
@andrewhaun.bsky.social
190 followers 120 following 160 posts
scientist at UW-Madison: vision science, psychophysics, visual neuroscience, consciousness, integrated information theory https://sites.google.com/site/amhaun01/
andrewhaun.bsky.social
i'd be interested to hear reactions/feedback!
Reposted by andrew haun
andrewhaun.bsky.social
this is really neat, but couldn't one argue that *all the low-level properties have changed*? every contrast has been rotated and translated - content at every position has been replaced with different content.

to see what's "the same" one has to rotate the full image, which is pretty "high level"
andrewhaun.bsky.social
but then i wonder. i'm trying it out right now - if i think "this friday" it definitely feels like i'm thinking of "the coming friday", but if i think "this monday", i'm thinking of yesterday. is the dividing line at the weekend? so "this" is about "this week", and the days are subdivisions of that?
andrewhaun.bsky.social
so you have "this" friday, which is the one that hasn't happened yet (the one that already happened is "this past" or "last" friday), unless you're speaking in past tense, then "this" friday is "last" friday. then (in past tense), "next" friday would be "this" (coming) friday, since as always "next" is after "this".
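the week-boundary reading above can be put as a toy rule - a minimal sketch assuming Monday starts the week; the function names and convention are hypothetical, not anything authoritative:

```python
from datetime import date, timedelta

def this_weekday(today: date, target: int) -> date:
    """'this <weekday>': the occurrence inside the current week
    (Mon=0 .. Sun=6), even if it has already passed."""
    monday = today - timedelta(days=today.weekday())
    return monday + timedelta(days=target)

def next_weekday(today: date, target: int) -> date:
    """'next <weekday>': always one week after 'this <weekday>'."""
    return this_weekday(today, target) + timedelta(days=7)

# from a tuesday: "this monday" is yesterday, "this friday" is upcoming
today = date(2025, 8, 19)          # a tuesday
print(this_weekday(today, 0))      # 2025-08-18 (yesterday's monday)
print(this_weekday(today, 4))      # 2025-08-22 (the coming friday)
print(next_weekday(today, 4))      # 2025-08-29
```

under this convention the dividing line really is the week edge, not the present moment - which matches the "this monday = yesterday" intuition.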
andrewhaun.bsky.social
i have to fixate the yellow and focus really hard on it, and once i get it foregrounded it's hard to hold on to it (it is relatively easier upside down, yeah)
andrewhaun.bsky.social
reading the text is extremely disheartening because it's so familiar
andrewhaun.bsky.social
i mean, i get why these demos are interesting, but you know.. if "the rays are not coloured", then how could the paint be?

(point being, there aren't really 'blue things' or 'grey things', rather just 'blue appearances', etc)
mamassian.bsky.social
A beautiful painting from Georges de la Tour where the coat’s grey paint appears blue because of the dominant yellow colour in the canvas.

It took another two centuries for Eugène Chevreul to describe the laws of simultaneous contrast, used later by Vincent van Gogh, Sonia Delaunay, and others.
arte-mise.bsky.social
The new Georges de la Tour exhibition at the Jacquemart-André Museum has begun! A virtuoso technique used to meditate on the human condition. This coat of Saint Thomas contains no blue, only grey colours; it is an optical illusion created by the adjacent yellow.
andrewhaun.bsky.social
reminds me of the rodney dangerfield joke, how he finally married a good woman who loves him for his money and fame, not for who he is on the inside.
andrewhaun.bsky.social
oh i have certainly asked AIs to explain code to me that was entirely written *by me*
andrewhaun.bsky.social
I'm pretty sure that when I was in college in the late 90s, I had a philosophy professor who regularly joked about Howdy Doody and it was fine, so I think one should be free to cite broadly and deeply.
andrewhaun.bsky.social
but there's no integration of one with the other. feeding chatGPT an image as a "stimulus" is equivalent to "telling" the Anton's pt about an image. neither of them sees anything, yet both have high-level descriptions *of* seeing, so they behave as though they see.
andrewhaun.bsky.social
the difference between chatgpt and the Anton's pt, is that the chatbot has peripheral tools to "tell" it about the image, i.e. translating image features into a language-style description. this might make us think it "sees" the image it describes.
andrewhaun.bsky.social
like, the logic of language and language-like ideas is completely different from the logic (such as it is) of low-level vision. a pt with Anton's can talk very sensibly about any topic, but when they need to talk about the structure of "what is seen" they confabulate.
andrewhaun.bsky.social
These LLM AIs are all suffering from a sort of synthetic Anton's syndrome - they have absolutely no capacity for spatial experience (or simulation of it) and yet are unaware of it - they have all the "high level" stuff you get from seeing, so why not behave as though it's *all* really there?
tomerullman.bsky.social
oh, nice write up in The Register about Illusion Illusions:

www.theregister.com/2025/08/19/v...

original paper here:

arxiv.org/abs/2412.18613
andrewhaun.bsky.social
great thread but this is my favorite part
rahaeli.bsky.social
"No doubt a State possesses legitimate power to protect children from harm, but that does not include a free-floating power to restrict the ideas to which children may be exposed." Brown v. Entertainment Merchants Ass'n
andrewhaun.bsky.social
At first I read this blurb and got all defensive and nitpicky about the obvious misconception of "seeing in N colors" - like "no, color vision is N-d but not in N colors" - but then I thought about it a moment and I think I really like this way of putting it
andrewhaun.bsky.social
importantly, rotation invariance is *not* a "low level" phenomenon in visual processing, i think you have to get all the way to IT cortex to find it. so what is "preserved" in this transformation requires something very high-level to recognize.
andrewhaun.bsky.social
(rotation invariance, especially of whole images, is not something one finds in "low levels" of visual processing, not by a long shot)
andrewhaun.bsky.social
assertions in the paper that "it's the *same image*, just *rotated*" are, i think, badly misguided. one could easily argue that with this manipulation *all the low-level properties have changed*. every contrast has been rotated and translated, content at every position replaced with different content.
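the point that every position's content changes while the image stays recoverable can be sketched in a few lines of numpy - a toy illustration of the argument, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))       # stand-in for an image
rot = np.rot90(img)            # rotate the whole image 90 degrees

# "low level": compared pixel-by-pixel, the content at every
# position is different after rotation
frac_changed = np.mean(img != rot)
print(f"{frac_changed:.0%} of positions hold different content")

# "high level": only by undoing the whole-image rotation do you
# recover something identical to the original
print(np.array_equal(np.rot90(rot, k=-1), img))  # True
```

so whether it's "the same image" depends entirely on having access to an operation (whole-image rotation) that no local, low-level comparison performs.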
andrewhaun.bsky.social
this method is neat and i want to learn to use it; but seems to me that the study is basically a demonstration that orientation is really important for recognition (like.. yeah!?)
talboger.bsky.social
On the left is a rabbit. On the right is an elephant. But guess what: They’re the *same image*, rotated 90°!

In @currentbiology.bsky.social, @chazfirestone.bsky.social & I show how these images—known as “visual anagrams”—can help solve a longstanding problem in cognitive science. bit.ly/45BVnCZ