Derek Arnold
@visnerd.bsky.social
390 followers 170 following 48 posts
Vision Scientist, Aphant
visnerd.bsky.social
I so wish : )

Failing that - I'll raise a glass to your continued good health at that time : )
visnerd.bsky.social
If the DV is RT - it would be important to control for local image contrasts. If the DV is recognition, controlling for ~all image properties is futile, as these are what we recognize. If you want to know which properties we rely on, well, that is a different question (it's some of them)
visnerd.bsky.social
The problem - if there is one - is you didn't control for oriented contrast energy, spatial frequency content, local or long-range curvature, etc. Rotating an image causes big changes in these properties. Deciding whether control is futile or sensible depends on context.
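The rotation point can be made concrete with a small sketch (an editorial illustration, not from the thread): rotating an image by 90 degrees moves its contrast energy to the orthogonal orientation, here measured simply as summed squared horizontal vs. vertical pixel differences on a synthetic grating.

```python
import numpy as np

def oriented_energy(img):
    # Crude oriented-contrast measure: energy of horizontal gradients
    # (vertical structure) vs. vertical gradients (horizontal structure).
    gx = np.diff(img, axis=1)  # differences across columns
    gy = np.diff(img, axis=0)  # differences across rows
    return float((gx ** 2).sum()), float((gy ** 2).sum())

# A horizontal grating: all of its contrast energy lies in vertical gradients.
y = np.arange(64)
grating = np.tile(np.sin(2 * np.pi * y / 8)[:, None], (1, 64))

e_before = oriented_energy(grating)
e_after = oriented_energy(np.rot90(grating))  # rotate by 90 degrees

print(e_before)  # energy concentrated along one orientation
print(e_after)   # the same energy, now along the orthogonal orientation
```

The total contrast energy is unchanged by the rotation, but its distribution across orientations flips completely - which is the sense in which a rotated "anagram image" is not the same low-level stimulus.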
visnerd.bsky.social
Exactly - attention kicks in and re-weights image properties, etc. - but as you say, the images are cool, and I want a coaster : )
visnerd.bsky.social
If you are worried that detection or RTs might be related to contrast diffs etc. - sure, control for that type of thing. But I think claiming to control for ~all image stats is futile if you still want to be able to recognize things in the image
visnerd.bsky.social
Obv it depends on context, but if you control for all image stats, you could not recognize anything - as recognition depends on image stats we have learnt to associate with meaning. So I never understand when papers claim to have controlled for image stats - they haven't, if people can still recognize things
visnerd.bsky.social
Bluesky is not a great platform for nuance : )

I also find it really hard to follow conversations here, and think people should use tildes more often

If there is any disagreement - it is with the idea that controlling for low-level confounds is a sensible goal.
visnerd.bsky.social
The info that is mapped to semantics is correlated image structure, that is changed when the images are reoriented. So it is a super cool demo of anagram images (I want a coaster), but it does not show that 'high-level' effects are driven by identical stimuli. You have to change the stimuli
visnerd.bsky.social
You just taught me a word : )

Will be looking for opportunities to refer to 'elides' : )
visnerd.bsky.social
Replications (Stroop-like interference, aesthetics shaped by knowledge) tapped understanding entrained by recognition of correlated image structure. This doesn't seem much different to 'house' and 'horse' having different meanings. The task that didn't replicate (a visual search) had a detection component.
visnerd.bsky.social
Yes, but even with multi-stable images attention acts to re-weight how we process the different features of the image. But at least that is all happening within the brain/mind, and does not rely on people detecting reliable image features
visnerd.bsky.social
I think attempts to dissociate the cognitive processes entrained by an image from all low-level image properties are misguided, as it is ultimately some minimal set of correlated low-level image properties that allow us to recognize anything (including faces as faces, and rabbits as rabbits)
visnerd.bsky.social
Only images that are multi-stable without any change (in orientation or anything else) could be said to dissociate meaning from image structure, but even there attention kicks in, and our brains re-weight the encoded image structure via selective processing.
visnerd.bsky.social
The words 'house' and 'horse' are similar, but due to subtle image structure differences it is no surprise they trigger very different associations. Similarly, your images are only subtly different when rotated, but they are different, and it is not surprising they can trigger different associations
visnerd.bsky.social
I think you are attempting the impossible. To extract meaning from an image, we detect correlations between image structure and high-level meaning. It is telling that all your tasks that have positive results are high-level, and the only null result comes from a search task that involves detection
visnerd.bsky.social
By changing the orientation of the image, you have changed the low-level image statistics. Once again you have shown that you cannot manipulate high-level visual properties while holding all low-level content constant
visnerd.bsky.social
Obviously I have no idea in this space.
visnerd.bsky.social
When I dream, I am fully immersed and embodied in the scene. At least until I guess I am dreaming, which I can confirm as I don’t have any sense of touch.

People’s descriptions of imagery don’t sound like that? They sound more selective, no?
visnerd.bsky.social
In my mind, if you have to close your eyes you are not a projector. Whatever representation your brain can generate clearly cannot be projected into the world you are seeing.
visnerd.bsky.social
The idea of a projected experience that you can only have if you close your eyes makes no sense to me. The very idea to me is that you can experience an imagined thing as existing in the external world that you are currently seeing. Some people very clearly describe being able to do that.
visnerd.bsky.social
This line of thought inspired this project. If you think that mental rotation (MR) tasks are a reliable metric of people's propensity to visualise, you might draw the conclusion you outline. Instead, our data suggest MR tasks are not a reliable metric of people's propensity to visualise.
visnerd.bsky.social
New Publication: Mental rotation is a weak measure of people’s propensity to visualise

This was a fun project : )

Turns out that the main task used to measure the capacity of people to visualize for many decades isn't really fit for that purpose!

authors.elsevier.com/a/1lS9S_NzVj...
visnerd.bsky.social
New Paper: The vividness of visualisations & autistic trait expression are not strongly associated

Our evidence suggests Autism / Aphantasia links have been overstated by a circular logic (as the most popular Autistic trait measure has questions about imagery).
authors.elsevier.com/c/1kYhW3lcz4...
visnerd.bsky.social
That looks a bit like a drop bear
visnerd.bsky.social
Thanks mate 😀