Lukas Vogelsang
@lukasvogelsang.bsky.social
130 followers 220 following 15 posts
Simons Postdoctoral Fellow in Pawan Sinha's Lab at MIT. Experimental and computational approaches to vision, time, and development. Just joined Bluesky!
Reposted by Lukas Vogelsang
vayzenb.bsky.social
My paper with @stellalourenco.bsky.social is now out in Science Advances!

We found that children have robust object recognition abilities that surpass many ANNs. Models only outperformed kids when their training far exceeded what a child could experience in their lifetime

doi.org/10.1126/scia...
Fast and robust visual object recognition in young children
The visual recognition abilities of preschool children rival those of state-of-the-art artificial intelligence models.
lukasvogelsang.bsky.social
Thank you, Martin! Hope to see you at TRF4!
lukasvogelsang.bsky.social
On Tuesday, I will present on new tests of Gestalt processing capabilities in children treated for blindness late in life (Poster session: 13.30-16.30) and, on Friday, @marinv.bsky.social will present on adaptive initial degradations extended to the time domain (Talk: 12.40; Poster: 14.00-17.00)!
lukasvogelsang.bsky.social
Looking forward to #CCN2025! Please come say hi!

On Monday, as part of the ‘From Child to Machine Learning’ Satellite sites.google.com/view/child2m... (3-6pm), I look forward to giving a talk about our past work probing the hypothesis that visual degradations in early development may be adaptive.
CCN 2025 Satellite Event
Background The human visual system is full of optimisations—mechanisms designed to extract the most useful information from a constant stream of incoming data. The field of neuro-AI has made significa...
lukasvogelsang.bsky.social
5/ Thank you also for sharing your recent preprint, which I am looking forward to reading in greater detail shortly!
lukasvogelsang.bsky.social
4/ Second: indeed, there is a mapping from the 1000 categories to the 16 broader classes, based on Geirhos' 2019 code that you can find here: github.com/rgeirhos/tex....
lukasvogelsang.bsky.social
3/ Also, if I'm understanding it correctly, looking at Supplemental Figure 8E in Jang & Tong (2024) (www.nature.com/articles/s41...), it seems that the baseline AlexNet (red) had a shape bias of 0.4 as well, and strong blur (purple) then brought it up to around 0.7!
lukasvogelsang.bsky.social
2/ First: while absolute values were certainly not the focus of our paper, we did check the baseline and found that it was really quite similar to Geirhos et al. (2019), who had >0.4 for AlexNet. Below is a screenshot from their paper (similar to our Supplementary Fig 14 you shared before).
lukasvogelsang.bsky.social
1/ Thanks, @sushrutthorat.bsky.social and @zejinlu.bsky.social! Great to hear from you, and I look forward to meeting at CCN! Two brief points below that might be helpful:
lukasvogelsang.bsky.social
5/ Moreover, developmentally inspired training also led networks toward a stronger bias for global shape processing, potentially driven by magnocellular-like units. Together, this has implications for neuroscience and the design of more robust and human-like computational vision systems.
lukasvogelsang.bsky.social
4/ While not ruling out the role of phylogenetic dispensation ('nature'), our results demonstrate the possibility of an experience-driven ('nurture') route to at least part of the emergence of this fundamental organizing principle in the mammalian visual system.
lukasvogelsang.bsky.social
3/ Training deep networks with such joint developmental regimens revealed that the temporal confluence in the progression of spatial frequency and color sensitivities significantly shapes some neuronal response properties characteristic of the division of parvo- and magnocellular systems.
lukasvogelsang.bsky.social
2/ Previously, we examined AID (reviewed in doi.org/10.1016/j.dr...) separately for the domains of visual acuity (doi.org/10.1073/pnas...), color sensitivity (doi.org/10.1126/scie...), and even prenatal hearing (doi.org/10.1111/desc...). Here, we consider visual acuity and color sensitivity jointly.