ammar i marvi
@aimarvi.bsky.social
59 followers 160 following 13 posts
vision lab @ harvard [i am] unbearably naive
Reposted by ammar i marvi
kosakowski.bsky.social
My lab at USC is recruiting!
1) research coordinator: perfect for a recent graduate looking for research experience before applying to PhD programs: usccareers.usc.edu REQ20167829
2) PhD students: see FAQs on lab website dornsife.usc.edu/hklab/faq/
Reposted by ammar i marvi
nblauch.bsky.social
What shapes the topography of high-level visual cortex?

Excited to share a new pre-print addressing this question with connectivity-constrained interactive topographic networks, titled "Retinotopic scaffolding of high-level vision", w/ Marlene Behrmann & David Plaut.

🧵 ↓ 1/n
Reposted by ammar i marvi
jfeather.bsky.social
Super excited for our #VSS2025 symposium tomorrow, "Model-optimized stimuli: more than just pretty pictures".
Join us to talk about designing and using synthetic stimuli for testing properties of visual perception!

May 16th @ 1-3PM in Talk Room #2

More info: www.visionsciences.org/symposia/?sy...
Reposted by ammar i marvi
ebonawitz.bsky.social
Whelp. See you later, $1.8 million in NSF research funds -- all designed to better understand learning mechanisms in early childhood so we can develop effective early childhood educational interventions.

Proud of Harvard for standing up to fascism, though.

We will persist.
Alt: a man in a blue shirt and tie is pointing at a woman and saying "too legit to quit".
Reposted by ammar i marvi
jfeather.bsky.social
We are presenting our work “Discriminating image representations with principal distortions” at #ICLR2025 today (4/24) at 3pm! If you are interested in comparing model representations with other models or human perception, stop by poster #63. Highlights in 🧵
openreview.net/forum?id=ugX...
Discriminating image representations with principal distortions
Image representations (artificial or biological) are often compared in terms of their global geometric structure; however, representations with similar global structure can have strikingly...
aimarvi.bsky.social
in sum, we used dominant components of the neural response to get an **axis-sensitive** measure of similarity.

this work fits into a broader look at (R)epresentational alignment (cf. work by @taliakonkle.bsky.social, @itsneuronal.bsky.social, @sucholutsky.bsky.social, & others)

12/n
aimarvi.bsky.social
we also used connectivity matrices to capture behaviorally relevant information. a lot like rsa! but with a sparse coding structure

11/n
aimarvi.bsky.social
using sca and a few different pre-trained models, we found markedly higher alignment to the ventral stream than to the other pathways.

rotationally invariant methods were less sensitive to this difference, providing an answer to question 2: DNNs are more similar to the ventral stream along a native axis of neural tuning

10/n
aimarvi.bsky.social
the resulting matrices represent the activity of sparse sub-populations of neurons/units and, unlike some methods, are quite sensitive to rotations in neural space

you can thus interpret sca as measuring similarity along a specific set of tuning axes

9/n
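(a toy illustration of what "sensitive to rotations" means here, entirely my own sketch and not from the paper: a distance-based rdm is unchanged by any orthogonal rotation of the unit axes, while a per-axis tuning comparison is not)

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 8))          # 40 images x 8 units

# random orthogonal rotation of the unit axes
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
X_rot = X @ Q

def rdm(a):
    """Rotation-invariant view: pairwise Euclidean distances between images."""
    d = a[:, None, :] - a[None, :, :]
    return np.sqrt((d ** 2).sum(-1))

def axis_alignment(a, b):
    """Axis-sensitive view: mean correlation between matched unit tuning curves."""
    return np.mean([np.corrcoef(a[:, i], b[:, i])[0, 1] for i in range(a.shape[1])])

print(np.allclose(rdm(X), rdm(X_rot)))    # distances survive the rotation
print(axis_alignment(X, X))               # matched axes: perfect alignment
print(axis_alignment(X, X_rot))           # rotation scrambles the axis match
```

the rdm is blind to the rotation, so any metric built on it can't tell a representation from a rotated copy; an axis-sensitive measure can.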
aimarvi.bsky.social
we applied the same decomposition to DNN activations and used them in a method we call **sparse component alignment** (sca). sca compares representations at the population level using image x image connectivity matrices.

(see the paper for complete derivation)

8/n
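(the full derivation is in the paper; the sketch below is only my own toy version of the general idea. the function names, the correlation-based connectivity matrix, and the second-order comparison are assumptions, not the authors' exact method)

```python
import numpy as np

rng = np.random.default_rng(0)

def connectivity_matrix(components):
    """Image x image matrix: correlation between the sparse-component
    response profiles of each pair of images."""
    return np.corrcoef(components)

def sca_score(rep_a, rep_b):
    """Second-order (RSA-style) comparison of two representations via the
    upper-triangle correlation of their connectivity matrices."""
    ca, cb = connectivity_matrix(rep_a), connectivity_matrix(rep_b)
    iu = np.triu_indices_from(ca, k=1)
    return np.corrcoef(ca[iu], cb[iu])[0, 1]

# toy data: 50 images x 10 sparse component responses
brain = rng.standard_normal((50, 10))
model = brain + 0.5 * rng.standard_normal((50, 10))   # noisy copy of "brain"
print(sca_score(brain, model))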
aimarvi.bsky.social
these response profiles gave an answer to question 1: there are interpretable, functionally distinct representations across the brain's three visual pathways.

nice to see! but it also heightens the mystery of question 2: why aren't these differences picked up by standard similarity metrics?

7/n
aimarvi.bsky.social
we reproduced well-known category selectivity in the ventral stream (for faces, scenes, bodies, etc).

new in this paper: we also found components in the lateral stream (groups of people, implied motion, hand actions, scenes, & reachspaces) and the dorsal stream (implied motion & scenes)

6/n
aimarvi.bsky.social
to find out, we used a data-driven method to identify dominant components of the neural response to natural images.

some of the most consistent components had pretty clear selectivities, which we cross-validated with behavioral saliency ratings

5/n
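(the post doesn't name the decomposition, so as a stand-in, a plain nonnegative matrix factorization illustrates the general recipe of pulling dominant, interpretable components out of a voxel x image response matrix. the multiplicative-update nmf and all names here are my assumptions, not the paper's method)

```python
import numpy as np

rng = np.random.default_rng(2)

# toy voxel x image response matrix (nonnegative, like fMRI betas)
R = rng.random((100, 60))

def nmf(R, k, n_iter=200):
    """Lee-Seung multiplicative updates: R ~ W @ H with W, H >= 0."""
    n_vox, n_img = R.shape
    W = rng.random((n_vox, k)) + 1e-3
    H = rng.random((k, n_img)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ R) / (W.T @ W @ H + 1e-9)
        W *= (R @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

W, H = nmf(R, k=5)
# rows of H are component response profiles over images;
# columns of W show how strongly each voxel loads on each component.
print(np.linalg.norm(R - W @ H) / np.linalg.norm(R))
```

inspecting which images drive each row of H is what yields the "pretty clear selectivities" described above.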
aimarvi.bsky.social
this left us with two big questions:

1. what distinguishes visual representations in the dorsal, ventral, & lateral streams?
2. why does alignment to DNNs often fail to reflect these differences?

4/n
aimarvi.bsky.social
yet neural networks trained to perform a single task seem to model all three pathways pretty well. so perhaps the representations in these streams are not so different after all?

3/n
aimarvi.bsky.social
it's commonly thought that the brain processes distinct visual information along separate functional pathways (the dorsal, ventral, & lateral streams)

2/n
Reposted by ammar i marvi
A 19-year-old university student from Gaza I know, who is brilliant and charming and usually upbeat, just wrote back: "we are not OK. The bombing is continuous and non-stop. Please pray for us. Happy Eid to you."
Reposted by ammar i marvi
I’m hiring a full-time lab tech for two years starting May/June. Strong coding skills required, ML a plus. Our research on the human brain uses fMRI, ANNs, intracranial recording, and behavior. A great stepping stone to grad school. Apply here:
careers.peopleclick.com/careerscp/cl...
Technical Associate I, Kanwisher Lab (MIT, Cambridge MA 02139)