Mashbayar Tugsbayar
@tmshbr.bsky.social
530 followers 150 following 19 posts
PhD student in NeuroAI @Mila & McGill w/ Blake Richards. Top-down feedback and brainlike connectivity in ANNs.
Reposted by Mashbayar Tugsbayar
charlottevolk.bsky.social
🚨 New preprint alert!

🧠🤖
We propose a theory of how learning curriculum affects generalization through neural population dimensionality. Learning curriculum is a determining factor of neural dimensionality: where you start from determines where you end up.
🧠📈

A 🧵:

tinyurl.com/yr8tawj3
The curriculum effect in visual learning: the role of readout dimensionality
Generalization of visual perceptual learning (VPL) to unseen conditions varies across tasks. Previous work suggests that training curriculum may be integral to generalization, yet a theoretical explan...
Reposted by Mashbayar Tugsbayar
lisaschmors.bsky.social
🧠🤖 Computational Neuroscience summer school IMBIZO in Cape Town is open for applications again!
 
💻🧬 3 weeks of intense coursework & projects with support from expert tutors and faculty
 
📈Apply until July 1st!

🔗 https://imbizo.africa/
Reposted by Mashbayar Tugsbayar
imbizo.bsky.social
Want to spend 3 weeks in South Africa for an unforgettable summer school experience? Imbizo 2026 (imbizo.africa) student applications are OPEN! Lectures, new friends, and Noordhoek beach await. Apply by July 1!

More info and apply: imbizo.africa/apply/

#Imbizo2026 #CompNeuro
tmshbr.bsky.social
I love ResNet too, but I'm floored they're cited more than transformers, CNNs, and the DSM-5!
tmshbr.bsky.social
The model uses ReLU activation like standard DNNs and doesn’t spike. The way we modeled it, feedback would provide a very small amount of driving input but otherwise just gain-modulate neurons already activated by feedforward input.
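The mechanism described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual equations: the function name, the `alpha` parameter, and the exact form of the gain term are assumptions made here for clarity, chosen only to show feedback acting mostly as a multiplicative gain on feedforward-driven units, with a small additive driving component.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def modulated_unit(ff_input, fb_input, alpha=0.1):
    """Hypothetical gain-modulated ReLU unit.

    Top-down feedback mostly scales neurons that are already
    activated by the feedforward pathway (multiplicative gain),
    plus a small additive term so feedback alone provides only
    a weak driving input. `alpha` sets that driving strength.
    """
    drive = relu(ff_input)            # standard feedforward activation
    gain = 1.0 + relu(fb_input)       # feedback as multiplicative modulation
    return drive * gain + alpha * relu(fb_input)  # small driving component
```

With `ff_input = 0`, feedback alone yields only the small `alpha`-scaled response, while the same feedback doubles the response of a unit that is already feedforward-driven, matching the "modulate, don't drive" intuition.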
tmshbr.bsky.social
We'd like to thank @elife.bsky.social and the reviewers for a very constructive review experience. We'd also like to thank our funders, in particular HIBALL, CIFAR, and NSERC. This work was supported with computational resources by @mila-quebec.bsky.social and the Digital Research Alliance of Canada.
tmshbr.bsky.social
These results show that modulatory top-down feedback has unique computational implications. As such, we believe that top-down feedback should be incorporated into DNN models of the brain more often. Our code base makes that easy!
tmshbr.bsky.social
We found that top-down feedback, as implemented in our models, helps to determine the set of solutions available to the networks and the regional specializations that they develop.
tmshbr.bsky.social
To summarize, we built a codebase for creating DNNs with top-down feedback, and we used it to examine the impact of top-down feedback on audio-visual integration tasks.
tmshbr.bsky.social
The models were then trained to identify either the auditory or visual stimuli based on an attention cue. The visual bias not only persisted, but helped the brainlike model learn to ignore distracting audio more quickly than other models.
tmshbr.bsky.social
We found that the brain-based model still had a visual bias even after being trained on auditory tasks. But this bias didn't hamper the model's overall performance, and it mimics a consistently observed human visual bias (Posner et al., 1974).
tmshbr.bsky.social
Conversely, when trained on a similar set of auditory categorization tasks, the human brain-based model was the best at integrating helpful visual information to resolve auditory ambiguity.
tmshbr.bsky.social
Interestingly, compared to other models, the human brain-based model was particularly proficient at ignoring irrelevant audio stimuli that didn’t help to resolve ambiguities.
tmshbr.bsky.social
To test the impact of different anatomies of modulatory feedback, we compared the performance of a model based on human anatomy with identically sized models with different configurations of feedback/feedforward connectivity.
tmshbr.bsky.social
As an initial test, we wanted to see how using modulatory feedback could impact computation. To do this, we built an audio-visual model, based on human anatomy from the BigBrain and MICA-MICs datasets, and trained it to classify ambiguous stimuli.
tmshbr.bsky.social
Each brain region is a recurrent convolutional network, and can receive two different types of input: driving feedforward and modulatory feedback. With this code, users can input macroscopic connectivity to build anatomically constrained DNNs.
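The construction step described above can be sketched roughly as follows. This is a toy illustration of the idea, not the actual API of the codebase: the region names, the `Region` class, and `build_model` are hypothetical stand-ins, and each `Region` here is a placeholder for a recurrent convolutional block.

```python
# Hypothetical macroscopic connectivity: (src, dst) pairs tagged as
# driving feedforward vs modulatory feedback connections.
regions = ["V1", "A1", "assoc"]
feedforward = [("V1", "assoc"), ("A1", "assoc")]   # driving inputs
feedback = [("assoc", "V1"), ("assoc", "A1")]      # modulatory inputs

class Region:
    """Stand-in for one recurrent conv block with two input types."""
    def __init__(self, name):
        self.name = name
        self.ff_sources = []   # regions providing driving input
        self.fb_sources = []   # regions providing modulatory input

def build_model(regions, feedforward, feedback):
    """Wire regions according to a macroscopic connectivity spec."""
    blocks = {name: Region(name) for name in regions}
    for src, dst in feedforward:
        blocks[dst].ff_sources.append(src)   # wired as driving input
    for src, dst in feedback:
        blocks[dst].fb_sources.append(src)   # wired as modulatory input
    return blocks
```

The point is only that a single connectivity table determines, per region, which inputs arrive through the driving feedforward pathway and which through the modulatory feedback pathway.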
tmshbr.bsky.social
To model top-down feedback in neocortex, we built a freely available codebase that can be used to construct multi-input, topological, top-down and laterally recurrent DNNs that mimic neural anatomy. (github.com/masht18/conn... )
tmshbr.bsky.social
What does it mean to have “biologically-inspired top-down feedback”? In the brain, feedback does not drive pyramidal neurons directly, but instead modulates the feedforward signal (both multiplicatively and additively), as described by Larkum et al. (2004).
tmshbr.bsky.social
Top-down feedback is ubiquitous in the brain and computationally distinct, but rarely modeled in deep neural networks. What happens when a DNN has biologically-inspired top-down feedback? 🧠📈

Our new paper explores this: elifesciences.org/reviewed-pre...
Top-down feedback matters: Functional impact of brainlike connectivity motifs on audiovisual integration
Reposted by Mashbayar Tugsbayar
ninelk.bsky.social
Excited to share our new pre-print on bioRxiv, in which we reveal that feedback-driven motor corrections are encoded in small, previously missed neural signals.
Reposted by Mashbayar Tugsbayar
arnaghosh.bsky.social
Are you training self-supervised/foundation models, and worried if they are learning good representations? We got you covered! 💪
🦖Introducing Reptrix, a #Python library to evaluate representation quality metrics for neural nets: github.com/BARL-SSL/rep...
🧵👇[1/6]
#DeepLearning
Reposted by Mashbayar Tugsbayar
dlevenstein.bsky.social
At #Cosyne2025? Come by my poster today (3-047) to hear how sequential predictive learning produces a continuous neural manifold with the ability to generate replay during sleep, and spatial representations that "sweep" ahead to future positions. All from sensory information alone!
Reposted by Mashbayar Tugsbayar
oliviercodol.bsky.social
Very excited for the upcoming Cosyne in Montreal! I’ll be presenting my poster [2-126] Brain-like neural dynamics for behavioral control develop through reinforcement learning, on the Friday session at 13:15.

Feel free to drop by! The related pre-print is also out:
www.biorxiv.org/content/10.1...
Brain-like neural dynamics for behavioral control develop through reinforcement learning
During development, neural circuits are shaped continuously as we learn to control our bodies. The ultimate goal of this process is to produce neural dynamics that enable the rich repertoire of behavi...
Reposted by Mashbayar Tugsbayar
shahabbakht.bsky.social
📢 We have a new #NeuroAI postdoctoral position in the lab!

If you have a strong background in #NeuroAI or computational neuroscience, I’d love to hear from you.

(Repost please)

🧠📈🤖