Zejin Lu
@zejinlu.bsky.social
79 followers 92 following 18 posts
PhD Student @FU_Berlin co-supervised by Prof. Radoslaw M. Cichy and Prof. Tim Kietzmann, interested in machine learning and cognitive science.
zejinlu.bsky.social
If you are interested in development and development-inspired NeuroAI and are coming to CCN this year, come join our workshop:
🗓️ Monday, Aug 11
🕒 3:00 – 6:00 pm
📍 Room A2.11
Register here: sites.google.com/view/child2m...
(You can also come by my poster to chat!)
zejinlu.bsky.social
Hi Lukas, very interesting work! Would it be possible to report the shape bias the way Geirhos does? He reports the average shape bias across categories (see his plotting code here: github.com/bethgelab/mo...).
It would be even better if we could also see each model's average shape bias across seeds :)!
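For concreteness, a minimal sketch of the Geirhos-style shape-bias computation, assuming a hypothetical per-trial table of cue-conflict decisions (the column names here are made up for illustration):

```python
import pandas as pd

# Hypothetical cue-conflict trials: the category of the shape cue, the
# category of the texture cue, and the model's predicted category.
trials = pd.DataFrame({
    "shape_category":   ["cat", "cat", "car", "car"],
    "texture_category": ["elephant", "clock", "cat", "clock"],
    "prediction":       ["cat", "clock", "cat", "car"],
})

# Geirhos-style shape bias only counts trials decided by either cue.
shape_hit = trials["prediction"] == trials["shape_category"]
texture_hit = trials["prediction"] == trials["texture_category"]
decided = trials[shape_hit | texture_hit].copy()
decided["shape_decision"] = shape_hit[shape_hit | texture_hit]

# Shape bias per category, then the average across categories
# (the number reported in the model-vs-human plots).
per_category = decided.groupby("shape_category")["shape_decision"].mean()
print(per_category)
print("average shape bias:", per_category.mean())
```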
zejinlu.bsky.social
🚨 Preprint alert! Excited to share my second PhD project: “Adopting a human developmental visual diet yields robust, shape-based AI vision” -- a nice case showing that biology, neuroscience, and psychology can still help AI :)! arxiv.org/abs/2507.03168
zejinlu.bsky.social
In conclusion, All-TNNs are an exciting new class of networks for modelling primate vision, which address questions that are beyond the scope of CNNs and their topographic derivatives. 12/12
zejinlu.bsky.social
Next, we will use All-TNNs to explore which factors allow smooth maps to emerge from model training without the need for a secondary smoothness loss. Possible avenues include wiring length optimization, energy constraints, local inhibition, or top-down connectivity patterns. 11/12
zejinlu.bsky.social
Can All-TNNs be trained with self-supervised objectives? Yes, to a degree. We show that training All-TNNs with SimCLR yields smooth topography and category-independent spatial biases. However, SimCLR training fails to reproduce the structure of human-like category-specific spatial biases. 10/12
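For readers unfamiliar with SimCLR: the objective is a contrastive (NT-Xent) loss over two augmented views of each image. A minimal sketch, independent of the All-TNN architecture (the backbone and projection head are assumed, not shown):

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Minimal NT-Xent (SimCLR) loss for embeddings z1, z2 of two
    augmented views of the same batch of images."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, D)
    sim = z @ z.t() / temperature                            # (2N, 2N)
    # A sample must not be matched with itself.
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    # The positive for view i is the other view of the same image.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Usage with random stand-ins for projected All-TNN embeddings.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent(z1, z2).item())
```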
zejinlu.bsky.social
We show that these behavioural accuracy maps are structured and exhibit category-specific effects. Importantly, All-TNNs reproduce the spatial structure of these human visual biases better than CNNs and other control models. 9/12
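One simple way to quantify such a comparison (a sketch, not necessarily the exact analysis used in the paper): correlate the flattened human and model accuracy maps per category, then average across categories.

```python
import numpy as np

# Hypothetical accuracy maps: (categories, grid_rows, grid_cols),
# i.e. recognition accuracy per category and stimulus location.
rng = np.random.default_rng(0)
human_maps = rng.random((5, 6, 6))
model_maps = rng.random((5, 6, 6))

per_category_r = [np.corrcoef(h.ravel(), m.ravel())[0, 1]
                  for h, m in zip(human_maps, model_maps)]
print("mean human-model map correlation:", np.mean(per_category_r))
```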
zejinlu.bsky.social
To study the impact of topography on behaviour, we conducted a human psychophysical experiment to quantify object recognition performance across spatial locations. This provided us with category-specific spatial accuracy maps for humans. 8/12
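In sketch form, a category-specific spatial accuracy map is just mean accuracy per (category, stimulus location) cell of the presentation grid; assuming a hypothetical trial table:

```python
import pandas as pd

# Hypothetical psychophysics trials: presented category, stimulus
# location on the presentation grid, and response correctness.
trials = pd.DataFrame({
    "category": ["face", "face", "face", "face", "tool", "tool", "tool", "tool"],
    "grid_x":   [0, 1, 0, 1, 0, 1, 0, 1],
    "grid_y":   [0, 0, 1, 1, 0, 0, 1, 1],
    "correct":  [1, 0, 1, 1, 1, 1, 0, 1],
})

# Mean accuracy per (category, location) cell.
acc = trials.groupby(["category", "grid_y", "grid_x"])["correct"].mean()

# One 2D accuracy map per category (rows: grid_y, columns: grid_x).
face_map = acc.loc["face"].unstack("grid_x")
print(face_map)
```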
zejinlu.bsky.social
Similarly, All-TNNs allocate energy expenditure to task-relevant input regions, using an order of magnitude less “metabolic” cost than CNNs! And the smoother the topography, the greater the energy efficiency of the network! Energy efficiency was not explicitly optimised for. 7/12
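The post doesn't define the energy measure; as a hedged illustration only (an assumption, not necessarily the paper's exact metric), a common proxy for "metabolic" cost is mean absolute activation, which also yields a spatial map of where the network spends its resources:

```python
import torch

# Hypothetical layer activations for one batch: (batch, channels, H, W).
acts = torch.randn(8, 32, 64, 64)

# Simple metabolic-cost proxy (an assumption, not necessarily the
# paper's exact measure): mean absolute activation.
total_energy = acts.abs().mean()

# Spatial energy map: average "spend" per location, over images/channels.
energy_map = acts.abs().mean(dim=(0, 1))       # (H, W)
print(total_energy.item(), energy_map.shape)
```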
zejinlu.bsky.social
Interestingly, All-TNNs exhibit a form of foveation, and allocate more processing resources to spatial regions rich in task-relevant information. 6/12
zejinlu.bsky.social
Upon training, topographical features reminiscent of the ventral stream emerge in All-TNNs, including smooth orientation selectivity maps in the first layer, and category-based selectivity clusters for tools, scenes, and faces in the last layer. 6/12
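A rough sketch of how an orientation-selectivity map can be read out from a first layer without weight sharing: probe each unit's kernel with oriented gratings and record the orientation that drives it most strongly (random weights stand in for a trained sheet here; in a trained All-TNN the resulting map is spatially smooth):

```python
import numpy as np

# Hypothetical first-layer weights of a topographic sheet:
# (sheet_h, sheet_w, kernel_h, kernel_w) -- one filter per unit,
# no weight sharing (a stand-in for a trained All-TNN layer).
rng = np.random.default_rng(0)
sheet = rng.standard_normal((20, 20, 9, 9))

# Probe each unit with oriented gratings.
orientations = np.deg2rad(np.arange(0, 180, 15))
yy, xx = np.mgrid[0:9, 0:9]
responses = []
for theta in orientations:
    grating = np.sin(2 * np.pi * 0.2 * (xx * np.cos(theta) + yy * np.sin(theta)))
    responses.append(np.abs((sheet * grating).sum(axis=(2, 3))))

# Preferred orientation per sheet position, in degrees.
preferred = np.take(np.rad2deg(orientations),
                    np.argmax(np.stack(responses), axis=0))
print(preferred.shape)   # (20, 20) orientation-selectivity map
```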
zejinlu.bsky.social
All-TNNs overcome this limitation: 1) each unit has its own local receptive field, 2) units in each layer are arranged on a 2D “cortical sheet” without weight sharing, and 3) feature selectivity is encouraged to vary smoothly across the sheet by nudging neighboring units towards similar selectivity. 5/12
Overall network architecture of All-TNNs
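Purely as illustration, a minimal PyTorch-style sketch of ingredients 1)-3): a locally connected layer (one kernel per sheet position, no weight sharing) plus a smoothness penalty that pulls neighbouring units' weights together. The class, its parameters, and the exact form of the penalty are assumptions made for this sketch, not the authors' implementation (the paper's smoothness term may differ, e.g. it could be cosine-similarity based).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocallyConnected2d(nn.Module):
    """Conv-like layer without weight sharing: every position on the
    output sheet owns its own kernel (sketch, single layer only)."""
    def __init__(self, in_ch, out_ch, in_size, kernel, stride=1):
        super().__init__()
        self.kernel, self.stride = kernel, stride
        self.out_size = (in_size - kernel) // stride + 1
        # (sheet positions, out channels, in_ch * kernel * kernel)
        self.weight = nn.Parameter(
            0.01 * torch.randn(self.out_size ** 2, out_ch, in_ch * kernel * kernel))

    def forward(self, x):                                        # x: (B, in_ch, H, W)
        patches = F.unfold(x, self.kernel, stride=self.stride)   # (B, in_ch*k*k, P)
        out = torch.einsum('bip,poi->bop', patches, self.weight)
        return out.reshape(x.shape[0], -1, self.out_size, self.out_size)

def smoothness_loss(layer):
    """Penalise differences between the kernels of neighbouring sheet
    positions, encouraging selectivity to vary smoothly across space."""
    w = layer.weight.reshape(layer.out_size, layer.out_size, -1)
    return ((w[1:, :] - w[:-1, :]).pow(2).mean()
            + (w[:, 1:] - w[:, :-1]).pow(2).mean())

# Usage: a stand-in task loss plus the spatial smoothness term.
layer = LocallyConnected2d(in_ch=3, out_ch=8, in_size=32, kernel=5)
x = torch.randn(2, 3, 32, 32)
y = layer(x)                                                     # (2, 8, 28, 28)
loss = y.pow(2).mean() + 0.1 * smoothness_loss(layer)
loss.backward()
```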
zejinlu.bsky.social
Yet, their reliance on weight sharing, i.e., detecting identical features across visual space, renders them unable to model central aspects of biological vision, such as the origin of topography and its relation to behaviour. 4/12
zejinlu.bsky.social
Background: CNNs are commonly used to model primate vision, and have been successful at predicting neural activity and at accounting for complex visual behaviour. 3/12
zejinlu.bsky.social
With Adrien Doerig (@adriendoerig.bsky.social), Victoria Bosch (@initself.bsky.social), Daniel Kaiser (@dkaiserlab.bsky.social), Radoslaw Martin Cichy, and Tim C Kietzmann (@timkietzmann.bsky.social). 2/12
zejinlu.bsky.social
In this work, we introduce All-Topographic Neural Networks (All-TNNs)—ANNs that drop weight sharing and learn on a smooth “cortical sheet,” capturing both human-like neural topography and visual biases in behaviour. 2/12
zejinlu.bsky.social
Now out in Nature Human Behaviour @nathumbehav.nature.com: “End-to-end topographic networks as models of cortical map formation and human visual behaviour”. Read the paper here: www.nature.com/articles/s41...