Chongwen Wang
@chongwenwang.bsky.social
Theoretical Neuroscience Researcher @ UWashington Computational Neuroscience Center
Looking for Fall 2026 Neurotheory PhD positions
Interests: statistical mechanics of real NNs, zebrafish, ephaptic coupling
Searching for mesoscale structure in the brain.
Reposted by Chongwen Wang
What is the physical basis of death? Can we manipulate cellular rules to avoid it? What does death mean for a machine? Is immortality possible? What does it mean for language, thought, or information to die? My new book @princetonupress.bsky.social is coming soon. press.princeton.edu/books/hardco...
October 15, 2025 at 12:30 AM
Reposted by Chongwen Wang
It's finally out!

Visual experience orthogonalizes visual cortical responses

Training in a visual task changes V1 tuning curves in odd ways. This effect is explained by a simple convex transformation. It orthogonalizes the population, making it easier to decode.

10.1016/j.celrep.2025.115235
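A minimal numerical sketch of the orthogonalization idea (my toy illustration, not the paper's analysis; the Gaussian tuning bank, its width, and the squaring nonlinearity are arbitrary choices): a convex pointwise transform sharpens overlapping tuning curves, so population response vectors to nearby stimuli become closer to orthogonal.

import numpy as np

# Toy illustration (not the paper's model): Gaussian tuning curves over a 1D
# stimulus, before and after a convex pointwise transform of the responses.
stim = np.linspace(-1, 1, 201)                    # stimulus values
centers = np.linspace(-1, 1, 50)                  # preferred stimuli of 50 neurons
tuning = np.exp(-(stim[None, :] - centers[:, None])**2 / (2 * 0.3**2))  # neurons x stimuli

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

s1, s2 = 80, 120                                  # indices of two nearby stimuli
before = cosine(tuning[:, s1], tuning[:, s2])

# A convex transform (here simply squaring the responses) narrows the curves,
# so the population responses to the two stimuli become closer to orthogonal.
transformed = tuning**2
after = cosine(transformed[:, s1], transformed[:, s2])

print(f"cosine similarity before: {before:.3f}, after: {after:.3f}")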
February 2, 2025 at 9:59 AM
Reposted by Chongwen Wang
(1/30) New preprint! "Symmetries and continuous attractors in disordered neural circuits" with Larry Abbott and Haim Sompolinsky
bioRxiv: www.biorxiv.org/content/10.1...
Symmetries and Continuous Attractors in Disordered Neural Circuits
A major challenge in neuroscience is reconciling idealized theoretical models with complex, heterogeneous experimental data. We address this challenge through the lens of continuous-attractor networks...
www.biorxiv.org
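For readers new to the topic, here is a minimal textbook-style ring-attractor sketch, the kind of idealized model the preprint contrasts with disordered circuits; the parameters, ReLU rate dynamics, and constant drive are my arbitrary choices, not the preprint's.

import numpy as np

# Minimal ring attractor: N rate units on a ring with cosine connectivity.
N, dt, T = 128, 0.01, 5000
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
J0, J1 = -2.0, 3.0                                 # uniform inhibition + cosine excitation
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

r = 0.1 * np.random.rand(N)                        # small random initial state
for _ in range(T):
    r += dt * (-r + np.maximum(W @ r + 1.0, 0))    # rate dynamics with ReLU and constant drive

# The steady state is a localized "bump"; its peak position is the continuous
# attractor coordinate, free to sit anywhere on the ring.
print("bump peak at angle:", theta[np.argmax(r)])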
January 29, 2025 at 6:26 PM
Reposted by Chongwen Wang
Neuroscientists, @erictopol.bsky.social has just declared with certainty that we know the number of “parameters” in the brain.

It seems to me that we don't really even know what a parameter is in the brain, what the relevance of cell types is, or how they line up with AI, etc.

Surprised by his certainty.
These are facts. Period.
December 23, 2024 at 2:31 AM
Reposted by Chongwen Wang
Big paper from @paigel.bsky.social in our lab:

Sensory feedback is always crucial for proper development, right? Wrong!

Crazier still, the motor system is the slowest part of a developing reflex circuit!

Surprises abound in this bluetorial, c'mon along…

www.science.org/doi/10.1126/...

1/19
Sensation is dispensable for the maturation of the vestibulo-ocular reflex
Vertebrates stabilize gaze using a neural circuit that transforms sensed instability into compensatory counterrotation of the eyes. Sensory feedback tunes this vestibulo-ocular reflex throughout life....
www.science.org
January 2, 2025 at 7:16 PM
Reposted by Chongwen Wang
Some wisdom from Milner & @action-brain.bsky.social:
December 28, 2024 at 8:29 PM
Reposted by Chongwen Wang
DAY 22: Advent of Comp Neuro 🎄🤖🧠🧪

Memories in a network can drift at the single neuron level, but persist at the population level! 🍎🍏

“Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation”
www.pnas.org/doi/full/10....
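A toy numerical illustration of that distinction (my own sketch under arbitrary assumptions, not the drifting-assembly model of the paper): let each neuron's loading on a 2D latent variable rotate slowly within a fixed population subspace, so single-cell tuning decorrelates across days while the population-level code is preserved.

import numpy as np

# Toy sketch (not the paper's model): single-neuron tuning drifts day to day,
# while the population-level coding subspace stays exactly the same.
rng = np.random.default_rng(0)
N, days = 100, 30
U, _ = np.linalg.qr(rng.standard_normal((N, 2)))      # fixed 2D coding subspace

def loadings(day):
    # Rows = neurons. Each day the loadings rotate within span(U), so
    # individual neurons change their tuning even though span(U) does not.
    phi = 0.05 * day
    G = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    return U @ G

W_first, W_last = loadings(0), loadings(days - 1)

# Single-neuron level: tuning vectors of individual cells decorrelate.
single = np.mean([W_first[i] @ W_last[i] /
                  (np.linalg.norm(W_first[i]) * np.linalg.norm(W_last[i]) + 1e-12)
                  for i in range(N)])

# Population level: principal-angle cosines between the two subspaces stay at 1.
subspace = np.linalg.svd(W_first.T @ W_last, compute_uv=False)

print(f"mean single-neuron tuning correlation across days: {single:.2f}")
print("subspace principal-angle cosines:", np.round(subspace, 3))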
December 22, 2024 at 3:07 PM
Reposted by Chongwen Wang
What is or isn't a "BCI"?

We argue in Nature BME today that if the tech stimulates or records brain activity AND does computation, it's a BCI.

www.nature.com/articles/s41...

But we still need a way to discuss different types of BCI separately... 🧵

#BCI #BrainComputerInterface
An application-based taxonomy for brain–computer interfaces - Nature Biomedical Engineering
Naming brain–computer interfaces according to their intended application will assist stakeholders in the evaluation of the benefits and risks of neurotechnologies.
www.nature.com
December 23, 2024 at 4:04 PM
Reposted by Chongwen Wang
What papers do you like that demonstrate the use of 'orthogonal subspaces' for encoding information in neural populations? #neuroskyence #compneuro #neuroAI
December 23, 2024 at 5:37 PM
Reposted by Chongwen Wang
"Computational models of learning and synaptic plasticity"
Nice review by Danil Tyulmankov
arxiv.org/abs/2412.05501
Computational models of learning and synaptic plasticity
Many mathematical models of synaptic plasticity have been proposed to explain the diversity of plasticity phenomena observed in biological organisms. These models range from simple interpretations of ...
arxiv.org
December 13, 2024 at 4:25 PM
Reposted by Chongwen Wang
I'm struck by how grad students in psych and neuro programs aren't generally expected to know basic facts about animal learning. For the psychologists, it has an odor of behaviorism, and for the neuroscientists it has an odor of psychology. Yet it's so fundamental (in my view).
December 13, 2024 at 9:49 PM
Reposted by Chongwen Wang
Comp #Neuroskyence hivemind: I've noticed multiple equations for RNN dynamics that vary in interpretation and implications depending on where the nonlinearity is placed. There's a similar mixture in theory papers... @loradrian.bsky.social @jbarbosa.org @bio-emergent.bsky.social @matthijspals.bsky.social @kenmiller.bsky.social
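For reference, the two most common continuous-time conventions differ only in where the nonlinearity φ sits (my paraphrase of the standard forms, not a quote from the thread):

\tau \dot{x} = -x + W\,\phi(x) + I   % "voltage"/current form: the state x is filtered, rates are \phi(x)
\tau \dot{r} = -r + \phi(W r + I)    % "rate" form: the nonlinearity acts on the summed input

The two can often be mapped onto each other by a change of variables, but their state variables, fixed points, and responses to time-varying input are read differently, which is presumably the mixture the post is pointing at.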
December 11, 2024 at 6:42 PM
Reposted by Chongwen Wang
Computational/theoretical neuroscientists, how important is it to you to be in the same building with experimentalists who are interested in conducting new studies to directly test your ideas/predictions?

Or do you think outside collaborators plus analyzing existing datasets work just as well?
December 12, 2024 at 1:49 AM
Reposted by Chongwen Wang
How to find all fixed points in piecewise-linear recurrent neural networks (RNNs)?
A short thread 🧵

In RNNs with N units with ReLU(x - b) activations, the phase space is partitioned into 2^N regions by the hyperplanes at x = b. 1/7
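A brute-force sketch of that enumeration (feasible only for small N; W, b, and the input here are hypothetical example values, and the thread's actual method may be smarter than exhaustive search): within each of the 2^N activation patterns the dynamics are affine, so each region contributes at most one candidate fixed point, kept only if it lies in its own region.

import itertools
import numpy as np

# Brute-force sketch (small N only): with ReLU(x - b) units, the dynamics
# dx/dt = -x + W @ relu(x - b) + I_ext are affine within each of the 2^N
# activation patterns, so every region holds at most one candidate fixed point.
def fixed_points(W, b, I_ext):
    N = len(b)
    fps = []
    for pattern in itertools.product([False, True], repeat=N):   # which units are above threshold
        D = np.diag(np.array(pattern, dtype=float))               # indicator of active units
        # In this region: 0 = -x + W D (x - b) + I_ext  =>  (Id - W D) x = I_ext - W D b
        A = np.eye(N) - W @ D
        try:
            x = np.linalg.solve(A, I_ext - W @ D @ b)
        except np.linalg.LinAlgError:
            continue                                              # singular: no isolated fixed point here
        if np.all((x - b >= 0) == np.array(pattern)):             # candidate must lie in its own region
            fps.append(x)
    return fps

# Hypothetical example: a small random network.
rng = np.random.default_rng(1)
N = 4
W = rng.standard_normal((N, N)) / np.sqrt(N)
b = np.zeros(N)
I_ext = 0.5 * rng.standard_normal(N)
print(len(fixed_points(W, b, I_ext)), "fixed point(s) found")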
December 11, 2024 at 1:32 AM
Reposted by Chongwen Wang
This honor would not have been possible without my incredible Ph.D. advisor, Prof. Wulfram Gerstner, and my amazing collaborators. Thank you all! An additional huge thanks must go to my dream Ph.D. committee!

Link to thesis: infoscience.epfl.ch/entities/pub...
Seeking the new, learning from the unexpected: Computational models of surprise and novelty in the brain
Human babies have a natural desire to interact with new toys and objects, through which they learn how the world around them works, e.g., that glass shatters when dropped, but a rubber ball does not. ...
infoscience.epfl.ch
December 11, 2024 at 11:26 AM
Reposted by Chongwen Wang
Help bluesky neurohivemind!

If you know of any computational / theoretical work modelling neuromodulators please share it 🙏 if you don't, please retweet!
December 7, 2024 at 6:06 PM
Reposted by Chongwen Wang
DAY 6: Advent of Comp Neuro 🎄🤖🧠🧪

This nifty theory predicts how intrinsic timescales depend on network structure, neuron properties and input. A nontrivial task!

“Microscopic theory of intrinsic timescales in spiking neural networks”
tinyurl.com/43esfrrk
By @avm.bsky.social and @albada.bsky.social
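As background on what is being predicted: a unit's intrinsic timescale is usually read off from the decay of its autocorrelation. A generic estimator on a surrogate signal (my sketch, with an Ornstein-Uhlenbeck process standing in for a unit's activity; not the theory in the paper):

import numpy as np

# Simulate an activity trace with a known timescale, then recover tau from the
# exponential decay of its autocorrelation.
rng = np.random.default_rng(0)
dt, T, tau_true = 1.0, 200_000, 50.0               # ms
x = np.zeros(T)
for t in range(1, T):                              # Ornstein-Uhlenbeck surrogate activity
    x[t] = x[t-1] + dt * (-x[t-1] / tau_true) + np.sqrt(dt) * rng.standard_normal()

lags = np.arange(1, 200)
ac = np.array([np.corrcoef(x[:-k], x[k:])[0, 1] for k in lags])
# For an OU process the autocorrelation is exp(-lag/tau), so a linear fit to
# log(ac) over the early lags estimates the intrinsic timescale.
mask = ac > 0.05
tau_hat = -1.0 / np.polyfit(lags[mask] * dt, np.log(ac[mask]), 1)[0]
print(f"true tau = {tau_true} ms, estimated tau = {tau_hat:.1f} ms")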
December 6, 2024 at 7:35 PM
Reposted by Chongwen Wang
This is super cool and useful.

But I’m struggling to see the leap from whole-brain connectomics to "building human-like AI systems." Help me here.

I’ve heard this idea a lot, but every time, I’m just not sure how it’s supposed to work.

#NeuroAI #neuroscience
🧪 E11 Bio is excited to share a major step towards brain mapping at 100x lower cost, making whole-brain connectomics at human & mouse scale feasible (🧠→🔬→💻). Critical for curing brain disorders, building human-like AI systems, and even simulating human brains.

Read more: e11.bio/news/roadmap
December 3, 2024 at 4:24 PM
Reposted by Chongwen Wang
Amazing brand-new visualization by @deanbuono.bsky.social of our Nat Neurosci 2013 work.
Visualizations from previous work with @rodlaje.bsky.social. The two output units in the top panel are driven by a chaotic RNN, and a small perturbation is injected at 0.16. In the lower panel the RNN is trained to "tame chaos", creating a "dynamic attractor", and a larger perturbation is injected.
December 2, 2024 at 8:05 PM
Finally I have the chance to be quiet and start understanding the theoretical calculations I'm interested in, and maybe work out some generalizations (at least until the 20th; it seems some schools will be announcing interviews here).
December 2, 2024 at 1:06 AM
Reposted by Chongwen Wang
Here are some things that neuroscientists don't like to call "correlation":

1990s: "reverse correlation"
2000s: "granger causality"
2010s: "functional connectivity"
2020s: "subspace communication"
December 1, 2024 at 9:54 AM
Reposted by Chongwen Wang
OK, if we are moving to Bluesky I am rescuing my favourite ever Twitter thread (Jan 2019).

Now renamed:

Bluesky-sized history of neuroscience (biased by my interests)
December 1, 2024 at 8:29 PM
Finished all my PhD applications except the two I want to go to the most. Ever since I entered college five years ago I have dreamed of doing my PhD at my favored place; a long time has passed and my interests have changed a lot, but I am still nervous and anxious.
December 1, 2024 at 8:41 AM
Reposted by Chongwen Wang
Alex Pouget and I wrote a perspective a few years ago on Major Sources of Computational Complexity in Complex Decision-Making 🧠. We never got around to publishing it, so we have now uploaded it to OSF Preprints: doi.org/10.31219/osf.... I hope some of you find it useful.
OSF
doi.org
November 25, 2024 at 6:19 PM