Fatemeh_Hadaeghi
@fatemehhadaeghi.bsky.social
100 followers 180 following 23 posts
NeuroAI Researcher @ICNS_Hamburg, PhD in Biomedical Engineering
fatemehhadaeghi.bsky.social
I would also like to thank prominent figures in the field, including Sara Solla, Petra Vertes, @kenmiller.bsky.social, @bendfulcher.bsky.social, @jlizier.bsky.social, @danakarca.bsky.social, Marcus Kaiser, Gorka Zamora-López, and Patrick Desrosiers, who provided feedback during lab visits and conferences.
fatemehhadaeghi.bsky.social
This work has been in progress for 3+ years.
Grateful to my co-authors Claus Hilgetag, @kayson.bsky.social, and Moein Khajehnejad for their invaluable contributions.
fatemehhadaeghi.bsky.social
Implications:
🧠 Neuroscience — functional rationale for the evolutionary suppression of strong reciprocal motifs.
🤖 NeuroAI — reciprocity as a tunable design parameter in recurrent & neuromorphic networks.
fatemehhadaeghi.bsky.social
So why does the brain avoid strong loops?
Because reciprocity systematically hurts computation.

Suppressing strong loops preserves:
working memory
representational diversity
stable but flexible dynamics
fatemehhadaeghi.bsky.social
We validated this on empirical connectomes (macaque long-distance, macaque visual cortex, marmoset).

Result: the same pattern.
Strong reciprocity consistently undermines memory and representational richness.
fatemehhadaeghi.bsky.social
Spectral analysis explains why:

- Higher reciprocity → larger spectral radius (instability risk).
- Narrower spectral gap → less dynamical diversity.
- Lower non-normality → weaker transient amplification.

Together → compressed dynamical range.
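For readers who want to poke at these quantities themselves, here is a minimal numpy sketch (my own illustration, not code from the preprint) of the three spectral measures named above, computed for a toy random recurrent weight matrix W:

import numpy as np

rng = np.random.default_rng(0)
n = 128
W = rng.normal(0, 1 / np.sqrt(n), size=(n, n))   # toy recurrent weight matrix

eigvals = np.linalg.eigvals(W)
mags = np.sort(np.abs(eigvals))[::-1]

spectral_radius = mags[0]             # > 1 signals instability risk
spectral_gap = mags[0] - mags[1]      # narrower gap -> less dynamical diversity
# Henrici's departure from normality: 0 for a normal matrix, larger = more non-normal
non_normality = np.sqrt(max(np.linalg.norm(W, "fro")**2 - np.sum(np.abs(eigvals)**2), 0.0))

print(spectral_radius, spectral_gap, non_normality)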
fatemehhadaeghi.bsky.social
Interestingly, hierarchical modular networks consistently outperformed random counterparts, but only when reciprocity was low. However, the comparative advantages of different network topologies shift with reciprocity, sparsity, and weight distribution.
fatemehhadaeghi.bsky.social
Findings (robust across sizes, densities, architectures):
- Increasing reciprocity (both link and strength reciprocity; see the sketch after this list) reduces memory capacity.
- Representation becomes less diverse (lower kernel rank).
- Effects are strongest in ultra-sparse networks.
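For concreteness, here is a hedged sketch of the two reciprocity measures as they are commonly defined (the preprint's exact definitions may differ): link reciprocity as the fraction of directed links whose reverse link also exists, strength reciprocity as the fraction of total weight matched by the reverse direction.

import numpy as np

def link_reciprocity(W):
    # Fraction of directed links whose reverse link also exists
    A = (W != 0).astype(int)
    np.fill_diagonal(A, 0)
    links = A.sum()
    return (A * A.T).sum() / links if links else 0.0

def strength_reciprocity(W):
    # Fraction of total weight that is matched by the reverse direction
    Wp = np.abs(np.asarray(W, dtype=float))
    np.fill_diagonal(Wp, 0)
    total = Wp.sum()
    return np.minimum(Wp, Wp.T).sum() / total if total else 0.0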
fatemehhadaeghi.bsky.social
Methods:
- Reservoir computing to isolate structure from learning.
- Networks of 64–256 nodes, in both ultra-sparse and sparse regimes.
- Topologies: small-world, hierarchical modular, core–periphery, hybrid, and nulls.
- Metrics: memory capacity, kernel rank, spectral analysis.
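As a rough illustration of how the first two metrics are typically obtained in a reservoir-computing setup, here is a toy echo state network with illustrative parameters (an assumption on my part, not the preprint's exact pipeline): memory capacity as the summed delayed-reconstruction accuracy, kernel rank as the rank of the collected state matrix.

import numpy as np

rng = np.random.default_rng(1)
n, T, max_delay = 128, 2000, 40
W = rng.normal(0, 1, (n, n)) * (rng.random((n, n)) < 0.1)   # sparse random reservoir
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))              # rescale spectral radius to 0.9
w_in = rng.uniform(-1, 1, n)

u = rng.uniform(-1, 1, T)                                    # i.i.d. input stream
x = np.zeros(n)
states = np.zeros((T, n))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

washout = 100
X, U = states[washout:], u[washout:]
kernel_rank = np.linalg.matrix_rank(X)                       # proxy for representational diversity

memory_capacity = 0.0
for k in range(1, max_delay + 1):
    y, Xk = U[:-k], X[k:]                                    # reconstruct u(t - k) from x(t)
    w_out, *_ = np.linalg.lstsq(Xk, y, rcond=None)
    memory_capacity += np.corrcoef(Xk @ w_out, y)[0, 1] ** 2
print(kernel_rank, memory_capacity)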
fatemehhadaeghi.bsky.social
In earlier work (www.biorxiv.org/content/10.1...), we developed Network Reciprocity Control (NRC): algorithms that adjust reciprocity (link + strength) while preserving network structure.

In this study, we apply NRC to systematically test how reciprocity shapes computation in recurrent networks.
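The NRC algorithms themselves are defined in the earlier preprint; purely as a toy illustration of the general idea (not the authors' method), one could nudge strength reciprocity upward by blending each reciprocated weight pair toward its symmetric average, keeping the binary topology fixed:

import numpy as np

def symmetrize_pairs(W, alpha):
    # alpha in [0, 1]: 0 leaves W unchanged, 1 makes every reciprocated
    # weight pair fully symmetric; unidirectional links are untouched.
    W = np.asarray(W, dtype=float).copy()
    both = (W != 0) & (W.T != 0)      # mask of reciprocated (two-way) links
    sym = 0.5 * (W + W.T)             # symmetric part of the weight matrix
    W[both] = (1 - alpha) * W[both] + alpha * sym[both]
    return W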
fatemehhadaeghi.bsky.social
The no-strong-loops principle:
Across species (macaque, marmoset, mouse), strong reciprocal (symmetric) connections are rare.

This asymmetry is well known anatomically.
But what are its computational consequences?
fatemehhadaeghi.bsky.social
🚨 New preprint!
“A Computational Perspective on the No-Strong-Loops Principle in Brain Networks”
www.biorxiv.org/content/10.1...

Over the past 3 years, we’ve been investigating why cortical networks avoid strong reciprocal loops — and what this means for computation.
Reposted by Fatemeh_Hadaeghi
alexfornito.bsky.social
Interested in network hubs, cortical hierarchies, and gradients? Ever wonder where they come from? Check out our latest review, where we cover different approaches to mapping hubs, models for their evolution, and mechanisms for how they develop:

osf.io/preprints/os...
Reposted by Fatemeh_Hadaeghi
dionnecargy.bsky.social
Have you ever been transfixed by the colour palette of the Melbourne tram network?

Well I have... so I made a package called {melbourne}! So far it includes a colour palette called "melb_trams()". More to come, stay tuned!

🔗 Check it out: github.com/dionnecargy/...

#RStats #DataScience #Rcoding
Reposted by Fatemeh_Hadaeghi
kayson.bsky.social
Good morning folks. If you’re around #OCNS2025, maybe come by today for a chat about optimal communication in brain networks? ✨
fatemehhadaeghi.bsky.social
Wonderful work by Shrey and Kayson in developing an XAI method for exploring the contribution of any computational unit (e.g., nodes, experts, communities, filters) within neural networks. It enables analysis of both their influence on each other and their overall impact on task performance.
Reposted by Fatemeh_Hadaeghi
tatiabu.bsky.social
✨Excited to share that our new paper is now out in iScience!✨

🧠 We show that people can coordinate surprisingly well in novel interactions by violating others' expectations - without requiring deep, recursive reasoning about others’ beliefs.

📄 Read the full paper here: www.cell.com/iscience/ful...
Expectation violations as an effective alternative to complex mentalizing in novel communication
Reposted by Fatemeh_Hadaeghi
marcusghosh.bsky.social
How should #multisensory signals be combined when they are structured in time?

To explore this, we:
* Introduce a new multisensory task
* Compare several models

TLDR:
* Prior models perform suboptimally
* Our new model performs about as well as an RNN, while using less than a tenth as many parameters.
neural-reckoning.org
Fusing multisensory signals across channels and time.

Now published at PLOS Comp Biol! 🎉 With @swathianil.bsky.social and @marcusghosh.bsky.social.

journals.plos.org/ploscompbiol...

TLDR, when multisensory signals vary over time, neural architecture becomes important. Biggest not always best.
Reposted by Fatemeh_Hadaeghi
matteosaponati.bsky.social
1/ I am very excited to announce that our paper "The underlying structures of self-attention: symmetry, directionality, and emergent dynamics in Transformer training" is available on arXiv 💜

arxiv.org/abs/2502.10927

How is information encoded in self-attention matrices? And how can we interpret it?

⬇️
Reposted by Fatemeh_Hadaeghi
danielemarinazzo.bsky.social
PhD fellowship to work with me and Benedetta Franceschiello on the analysis and modelling of fast-sampled fMRI data!

www.ugent.be/en/work/scie...
Reposted by Fatemeh_Hadaeghi
mareikegann.bsky.social
Excited to share our latest preprint on connectivity modulation with dual-site tACS! We show that in-phase tACS at 20 Hz can disrupt fMRI connectivity between the primary motor cortices, but also affects connectivity with other motor regions.

www.biorxiv.org/content/10.1...
fatemehhadaeghi.bsky.social
Found a #bifurcation (bifurcated?) #tree outside the Royal Botanic Garden, #Sydney 😬
Wodyetia bifurcata tree