Ido Aizenbud
@idoai.bsky.social
390 followers 520 following 37 posts
Computational Neuroscience PhD Student
Pinned
idoai.bsky.social
Human cortical pyramidal neurons are larger, more elaborately branched, and have distinct nonlinear biophysical properties compared with rat cortical pyramidal neurons.

Are they more functionally complex? Could that boost the human brain’s computational power? And is that what makes us human? (1/11)
Reposted by Ido Aizenbud
neural-reckoning.org
Is anarchist science possible? As an experiment, we got together a large group of computational neuroscientists from around the world to work on a single project without top-down direction. Read on to find out what happened. 🤖🧠🧪
[Image: diagram of how the “collaborative modelling of the brain” (COMOB) project started. Starting material led to group or solo research, coming together in monthly online workshops in an iterative cycle, and finishing with writing up together. Illustrated with colourful cartoon blob characters.]
idoai.bsky.social
Looking ahead, we reckon the next leap will come from combining high-resolution EM reconstructions of entire human neurons with voltage-imaging tools. Hybrid biophysical/AI models promise to clarify how single-cell properties scale up to network dynamics, and ultimately to the circuits underlying language, creativity, and memory.
idoai.bsky.social
Human neurons are more functionally complex: their richer morphology and stronger synaptic nonlinearities give them extra computational power. Fitting deep neural nets to match the input-output dynamics of human neurons consistently required greater network depth than for rodent models.
idoai.bsky.social
Single human neurons are wired to perform nontrivial logical computations! Thanks to extensive dendritic branching and specialized voltage-gated currents, human L2/3 pyramidal neurons support ~25 independent NMDA-spike compartments, almost twice as many as rat neurons, and can implement XOR-like operations via dendritic Ca²⁺ spikes.
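For intuition, here is a minimal Python sketch of that XOR trick (a toy, not the paper's model): a dendritic activation that, like the reported Ca²⁺ spike, is strongest just above threshold and attenuates with stronger drive.

```python
import numpy as np

def dca_spike(x, threshold=1.0, width=1.0):
    """Toy dendritic Ca2+-spike activation: maximal just above
    threshold, attenuated for stronger drive (non-monotonic)."""
    return np.where(x < threshold, 0.0, np.exp(-(x - threshold) / width))

# Two binary inputs, each adding unit drive to the dendrite.
for a in (0, 1):
    for b in (0, 1):
        out = dca_spike(a + b)
        print(a, b, "->", "spike" if out > 0.5 else "no spike")  # XOR
```

Because the activation peaks at threshold and decays beyond it, one active input fires the compartment while two together do not, which is exactly XOR.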
idoai.bsky.social
The load imposed on human dendrites accelerates EPSP propagation along the dendrites toward the soma, while the high density of dendritic h-channels enables faithful transfer of theta-band signals (associated with various learning and memory processes) from dendrites to soma.
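As a back-of-the-envelope illustration of that theta-band transfer (standard membrane-resonance theory, with the h-current treated as a phenomenological inductive branch; all parameter values below are made up for illustration):

```python
import numpy as np

# Toy membrane impedance: capacitance and leak in parallel with an
# inductive-like branch standing in for the h-current.
f = np.linspace(0.5, 50, 2000)        # frequency, Hz
w = 2 * np.pi * f
C, R = 200e-12, 100e6                 # 200 pF, 100 MOhm (illustrative)
L, R_L = 3e6, 50e6                    # phenomenological h-current branch
Z = 1 / (1j * w * C + 1 / R + 1 / (R_L + 1j * w * L))
print(f"impedance peaks near {f[np.abs(Z).argmax()]:.1f} Hz")  # theta band
```

The inductive-like branch cancels the capacitive phase lag in a narrow band, so the impedance peaks at a few hertz instead of falling off monotonically with frequency.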
idoai.bsky.social
Load is all you need. The extensive membrane surface area of the dendrites “loads” the soma with additional capacitance and conductance. This load on the axon initial segment (AIS) makes action potentials at the soma remarkably “kinky”: a steeper voltage rise at onset, together with heightened sensitivity to rapid input fluctuations.
idoai.bsky.social
We bring together decades of human tissue recordings, detailed biophysical models, and machine-learning techniques to try to answer these questions.

Here are some key insights:
idoai.bsky.social
What makes human pyramidal neurons uniquely suited for complex information processing? How can human neurons’ distinct properties contribute to our advanced cognitive abilities?
idoai.bsky.social
Thanks! What kind of complexity measures are you referring to?
idoai.bsky.social
Proud to contribute to this large-scale, multi-lab, open-source collaboration led by @LecoqJerome and @AllenInstitute OpenScope to study predictive processing in the brain. Explore our review and planned studies on arXiv: arxiv.org/abs/2504.09614
idoai.bsky.social
Just got back from the GRC Dendrites meeting in Ventura, California! I presented my research on how single neurons can implement complex nonlinear functions — amazing discussions and brilliant minds all around. #Neuroscience #GRCdendrites #ComputationalNeuroscience
Reposted by Ido Aizenbud
tmoldwin.bsky.social
Now out in PLOS CB!

We propose a simple, perceptron-like neuron model, the calcitron, that has four sources of [Ca2+]... We demonstrate that by modulating the plasticity thresholds and calcium influx from each calcium source, we can reproduce a wide range of learning and plasticity protocols.
The calcitron: A simple neuron model that implements many learning rules via the calcium control hypothesis (journals.plos.org)
idoai.bsky.social
Great stuff! Can you add me as well?
idoai.bsky.social
Indeed, the human synapse has a strong effect on complexity: even a rat morphology with a human synapse will be much more complex than the same morphology with a rat synapse. As you can see in panels L and M, the effect is more pronounced within rat morphologies (meaning, smaller morphologies).
idoai.bsky.social
In this case, estimating the complexity of the neurons using the entropy of the weights is interesting; it is similar to checking whether there is a simpler DNN that would give the same perfect fit.
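A sketch of what that might look like in practice (histogram-based entropy over the fitted network's pooled weights; the binning is an arbitrary choice, not something from the paper):

```python
import numpy as np

def weight_entropy(weight_arrays, bins=64):
    """Shannon entropy (bits) of the pooled weight histogram of a
    fitted DNN; lower entropy suggests a more compressible fit,
    i.e. a simpler network might achieve the same result."""
    w = np.concatenate([np.ravel(a) for a in weight_arrays])
    counts, _ = np.histogram(w, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)  # toy "layers" for demonstration
print(weight_entropy([rng.normal(size=(128, 64)), rng.normal(size=64)]))
```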
idoai.bsky.social
In principle, if you take a DNN that is expressive enough, it should perfectly fit the function of all neurons, so in this case the FCI would be 0 for all of the neurons.
idoai.bsky.social
2. We have not tried encoder-decoder architectures, but it is certainly possible and may be relevant.
idoai.bsky.social
1. We followed the architecture introduced in Beniaguev et al., 2021, and chose a depth of 3, which was enough to capture the variability in our model dataset. We also repeated some of the experiments with alternative depths of 2 and 7, and the results stayed similar (see the discussion for more details).
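For readers who want the gist, here is a minimal PyTorch sketch in the spirit of the Beniaguev et al. (2021) temporal convolutional architecture; the synapse count, layer width, and kernel size below are placeholders rather than the paper's values:

```python
import torch
import torch.nn as nn

class NeuronTCN(nn.Module):
    """Causal temporal-convolution net mapping presynaptic spike
    trains (synapses x time) to a somatic output; `depth` sets the
    number of stacked conv layers (e.g. 2, 3, or 7)."""
    def __init__(self, n_synapses=1024, width=128, depth=3, kernel=35):
        super().__init__()
        convs, in_ch = [], n_synapses
        for _ in range(depth):
            convs.append(nn.Conv1d(in_ch, width, kernel, padding=kernel - 1))
            in_ch = width
        self.convs = nn.ModuleList(convs)
        self.head = nn.Conv1d(width, 1, 1)  # e.g. spike probability per time bin

    def forward(self, x):                   # x: (batch, synapses, time)
        h = x
        for conv in self.convs:
            h = torch.relu(conv(h))[..., :x.shape[-1]]  # trim right pad -> causal
        return self.head(h)

model = NeuronTCN(depth=3)
print(model(torch.zeros(1, 1024, 500)).shape)  # torch.Size([1, 1, 500])
```

Sweeping `depth` and asking when the fit stops improving is the sense in which network depth can serve as a proxy for a neuron's functional complexity.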
idoai.bsky.social
Conceptually, the same approach could be used to measure the functional complexity of biological neural networks, and indeed of any model that we can feed with random inputs and simulate to get the outputs.
idoai.bsky.social
We argue this is a scalable approach to efficiently approximate the mutual information between the inputs and the outputs of a function (in this case, the function of a single neuron), thereby serving as a useful measure of functional complexity.
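A toy illustration of the principle (a simple histogram plug-in estimator on scalar input/output summaries; the estimator actually needed for high-dimensional spike trains would require more machinery):

```python
import numpy as np

def plugin_mi(x, y, bins=16):
    """Histogram plug-in estimate of I(X;Y) in bits for 1-D samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal P(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal P(y)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)                      # random inputs
y = np.tanh(x) + 0.1 * rng.normal(size=10_000)   # a toy "neuron"
print(f"I(X;Y) ~ {plugin_mi(x, y):.2f} bits")
```

The more information the outputs carry about the random inputs, the higher the estimate; that is the sense in which mutual information tracks functional complexity.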
idoai.bsky.social
I'm not sure I understand your question; could you rephrase it?
idoai.bsky.social
Size is indeed one of the important factors, but it is not the only one.