Amr Farahat
@amr-farahat.bsky.social
150 followers 390 following 27 posts
MD/M.Sc/PhD candidate @ESI_Frankfurt and IMPRS for neural circuits @MpiBrain. Medicine, Neuroscience & AI https://amr-farahat.github.io/
Pinned
amr-farahat.bsky.social
🧵 time!
1/15
Why are CNNs so good at predicting neural responses in the primate visual system? Is it their design (architecture) or learning (training)? And does this change along the visual hierarchy?
🧠🤖
🧠📈
https://doi.org/10.6084/m9.figshare.106794.v3
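The underlying comparison can be sketched roughly as follows: extract activations from one layer of a CNN, with either pretrained or randomly initialized weights, and fit a cross-validated ridge regression (a standard encoding-model readout) to predict recorded responses. This is a minimal illustrative sketch, not the exact pipeline from the paper; `images` and `responses` are placeholders.

```python
# Minimal sketch (not the paper's exact pipeline): compare how well features
# from a trained vs. randomly initialized CNN layer predict neural responses
# via a cross-validated ridge regression encoding model.
import numpy as np
import torch
from torchvision.models import vgg16, VGG16_Weights
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def layer_features(model, images, layer_idx):
    """Activations of one conv layer, flattened to (n_images, n_features)."""
    feats = []
    hook = model.features[layer_idx].register_forward_hook(
        lambda m, i, o: feats.append(o.flatten(1).cpu().numpy()))
    with torch.no_grad():
        model(images)
    hook.remove()
    return np.concatenate(feats)

def encoding_score(features, responses):
    """Mean cross-validated R^2 of a ridge model predicting each neuron."""
    ridge = RidgeCV(alphas=np.logspace(-2, 5, 8))
    return np.mean([cross_val_score(ridge, features, responses[:, n], cv=5).mean()
                    for n in range(responses.shape[1])])

# images: (n_stimuli, 3, 224, 224) tensor; responses: (n_stimuli, n_neurons) array
# -- both are placeholders; substitute real stimuli and recordings.
trained = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).eval()
random_net = vgg16(weights=None).eval()  # same architecture, random weights

# score_trained = encoding_score(layer_features(trained, images, 10), responses)
# score_random  = encoding_score(layer_features(random_net, images, 10), responses)
```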
Reposted by Amr Farahat
mariusschneider.bsky.social
🚨Our NeurIPS 2025 competition Mouse vs. AI is LIVE!

We combine a visual navigation task + large-scale mouse neural data to test what makes visual RL agents robust and brain-like.

Top teams: featured at NeurIPS + co-author our summary paper. Join the challenge!

Whitepaper: arxiv.org/abs/2509.14446
Mouse vs. AI: A Neuroethological Benchmark for Visual Robustness and Neural Alignment
Visual robustness under real-world conditions remains a critical bottleneck for modern reinforcement learning agents. In contrast, biological systems such as mice show remarkable resilience to environ...
Reposted by Amr Farahat
seeingwithsound.bsky.social
Visual image reconstruction from brain activity via latent representation www.annualreviews.org/content/jour... by @ykamit.bsky.social et al.; mental imagery, #neuroscience
Psychological measurement of subjective visual experiences through image reconstruction. (a) Mapping of brain, stimulus, and mind. Dots represent instances of visual experience (e.g., an image, perception, and corresponding brain activity). Veridical perception assumes that the mind accurately represents stimuli. The brain–mind mapping is considered fixed, while the brain–stimulus relationship is empirically identified. (b) Nonveridical perception (e.g., mental imagery, attentional modulation, and illusions) occurs when perceived content diverges from physical properties. The fixed brain–mind mapping and decoders trained on brain activity under veridical conditions allow the reconstruction of mental content as an image. (c) Reconstruction of mental imagery is achieved using models trained on brain activity from natural images.
Reposted by Amr Farahat
imagingneurosci.bsky.social
New paper in Imaging Neuroscience by Tom Dupré la Tour, Matteo Visconti di Oleggio Castello, and Jack L. Gallant:

The Voxelwise Encoding Model framework: A tutorial introduction to fitting encoding models to fMRI data

doi.org/10.1162/imag...
Reposted by Amr Farahat
moonliyp.bsky.social
(1/6) Thrilled to share our triple-N dataset (Non-human Primate Neural Responses to Natural Scenes)! It captures thousands of high-level visual neuron responses in macaques to natural scenes using #Neuropixels.
amr-farahat.bsky.social
Yes indeed. It probably has something to do with learning dynamics that favor increasing the complexity gradually. Or it could be that the loss landscape has edges between high- and low-complexity volumes.
amr-farahat.bsky.social
In AlexNet, however, the first layers are the most predictive. That's because its early layers have larger filters (see Miao and Tong 2024).
amr-farahat.bsky.social
V1 is usually predicted better by intermediate layers than by early layers, but it depends on the architecture of the model. In Cadena et al. (2019), block3_conv1 in VGG19 was the most predictive. Early layers in VGG have very small receptive fields, which makes it difficult to capture V1-like features.
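For intuition, the theoretical receptive field of VGG-style layers can be computed with the standard recursion over kernel sizes and strides; the numbers below assume the usual 3x3-conv / 2x2-pool configuration of VGG19 up to block3_conv1.

```python
# Back-of-the-envelope receptive-field sizes for VGG-style layers,
# using the standard recursion: rf += (kernel - 1) * jump; jump *= stride.
layers = [            # (name, kernel, stride) for VGG19 up to block3_conv1
    ("block1_conv1", 3, 1), ("block1_conv2", 3, 1), ("pool1", 2, 2),
    ("block2_conv1", 3, 1), ("block2_conv2", 3, 1), ("pool2", 2, 2),
    ("block3_conv1", 3, 1),
]

rf, jump = 1, 1
for name, kernel, stride in layers:
    rf += (kernel - 1) * jump
    jump *= stride
    print(f"{name:13s} receptive field: {rf:2d} x {rf} px")
# block1_conv1 sees only 3x3 pixels, while block3_conv1 sees 24x24 --
# large enough relative to typical stimuli to express V1-like features.
```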
amr-farahat.bsky.social
This was the most predictive layer for V1 in the VGG16 model. Likewise for IT, where block4_conv2 was the most predictive.
amr-farahat.bsky.social
and then starts increasing again with further training to fit the target function. This is the most likely explanation for the initial drop in V1 prediction.
amr-farahat.bsky.social
We also observed, in separate experiments on the simple CNN models, that the complexity of the models "resets" to a low value (lower than their random-weight complexity) after the first training epoch (likely by using the linear part of the activation function)
amr-farahat.bsky.social
Thanks for your interest! Object recognition performance increases right from the first training epoch, and yet V1 prediction drops considerably, so this drop supports the conclusion that object recognition training is not essential for predicting V1.
amr-farahat.bsky.social
The legend of the left plot was missing!
amr-farahat.bsky.social
15/15
It is also important to assess model strengths and weaknesses in several ways, rather than relying on a single metric like prediction accuracy.
amr-farahat.bsky.social
14/15

Our results also emphasize the importance of rigorous controls when using black-box models like DNNs in neural modeling. Such controls can show what makes a good neural model and help us generate hypotheses about brain computations.
amr-farahat.bsky.social
13/15
Our results suggest that the architectural bias of CNNs is key to predicting neural responses in the early visual cortex, which aligns with results in computer vision showing that random convolutions suffice for several visual tasks.
amr-farahat.bsky.social
12/15
We found that random ReLU networks performed best among random networks and only slightly worse than their fully trained counterparts.
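A common way to run such a comparison is to freeze the convolutional features, random or trained, and fit only a linear readout on the task. The sketch below uses hypothetical names for the backbones and data loader and is not the paper's exact setup.

```python
# Sketch of a frozen-features + linear-readout comparison: train only a
# linear classifier on top of a frozen CNN, once with random weights and
# once with trained weights, and compare task accuracy.
import torch
import torch.nn as nn

def linear_probe(backbone, loader, n_classes, feat_dim, epochs=5):
    backbone.eval()                              # frozen feature extractor
    for p in backbone.parameters():
        p.requires_grad_(False)
    readout = nn.Linear(feat_dim, n_classes)
    opt = torch.optim.Adam(readout.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:                      # y: texture (or digit) labels
            with torch.no_grad():
                feats = backbone(x).flatten(1)
            loss = loss_fn(readout(feats), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return readout

# `texture_loader`, `conv_random`, `conv_trained` are hypothetical names:
# the same convolutional stack with random vs. task-trained weights.
# probe_rand    = linear_probe(conv_random,  texture_loader, n_classes=4, feat_dim=1024)
# probe_trained = linear_probe(conv_trained, texture_loader, n_classes=4, feat_dim=1024)
```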
amr-farahat.bsky.social
11/15
We then tested the ability of random networks to support texture discrimination, a task known to involve the early visual cortex. We created Texture-MNIST, a dataset that allows training on two tasks: object (digit) recognition and texture discrimination.
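One plausible construction for such a dataset, assuming Texture-MNIST overlays MNIST digits on texture backgrounds (the texture generator and blending below are illustrative, not the paper's recipe): each image then carries both a digit label and a texture label.

```python
# Sketch of how a Texture-MNIST-style dataset could be composed (assumption:
# digits are overlaid on texture backgrounds; details here are illustrative).
# Each sample carries two labels: digit class and texture class.
import numpy as np
from torchvision.datasets import MNIST

def make_texture_bank(n_textures=4, size=28):
    """Hypothetical texture generator: oriented gratings at different angles."""
    ys, xs = np.mgrid[0:size, 0:size]
    angles = np.linspace(0, np.pi, n_textures, endpoint=False)
    return [0.5 + 0.5 * np.sin(0.8 * (xs * np.cos(a) + ys * np.sin(a)))
            for a in angles]

def composite(digit_img, texture, alpha=0.6):
    """Blend a digit (values in [0, 1]) over a texture background."""
    return np.clip(alpha * digit_img + (1 - alpha) * texture, 0.0, 1.0)

mnist = MNIST(root="data", download=True)
textures = make_texture_bank()
rng = np.random.default_rng(0)

samples = []
for img, digit_label in list(mnist)[:1000]:
    tex_label = int(rng.integers(len(textures)))
    x = composite(np.asarray(img, dtype=np.float32) / 255.0, textures[tex_label])
    samples.append((x, digit_label, tex_label))   # two supervised targets
```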
amr-farahat.bsky.social
10/15
We found that trained ReLU networks are the most V1-like in terms of OS. Moreover, random ReLU networks were the most V1-like among random networks, and even on par with other fully trained networks.
amr-farahat.bsky.social
9/15
We quantified the orientation selectivity (OS) of artificial neurons using circular variance and calculated how its distribution deviates from that of an independent dataset of experimentally recorded V1 neurons.
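Circular variance is computed from a unit's mean responses to gratings at several orientations; the sketch below uses the standard definition, and the distribution comparison shown (Wasserstein distance) is an illustrative choice, not necessarily the measure used in the paper.

```python
# Circular variance (CV) of orientation tuning: 0 = perfectly selective,
# 1 = untuned. Standard definition for orientation (period pi), hence 2*theta.
import numpy as np
from scipy.stats import wasserstein_distance

def circular_variance(responses, orientations_deg):
    """responses: (n_orientations,) non-negative mean responses; orientations in degrees."""
    theta = np.deg2rad(orientations_deg)
    resultant = np.abs(np.sum(responses * np.exp(2j * theta)))
    return 1.0 - resultant / np.sum(responses)

# Example: CV for every model unit from its tuning curve over 8 orientations,
# then compare the CV distribution to recorded V1 neurons.
orientations = np.arange(0, 180, 22.5)
# tuning_model: (n_units, 8) responses of model units to oriented gratings
# cv_v1: circular variances of recorded V1 neurons (independent dataset)
# cv_model  = np.array([circular_variance(t, orientations) for t in tuning_model])
# deviation = wasserstein_distance(cv_model, cv_v1)
```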
amr-farahat.bsky.social
8/15
ReLU was introduced to DNN models inspired by the sparsity of biological neural systems and the input/output function of biological neurons.
To test its biological relevance, we looked for two characteristics of early visual processing: orientation selectivity and the capacity to support texture discrimination.
amr-farahat.bsky.social
7/15
Importantly, these findings hold for both monkey firing rates and human fMRI data, suggesting that they generalize.