Viet Anh Khoa Tran
@ktran.de
PhD student working on Dendritic Learning/NeuroAI with Willem Wybo, in Emre Neftci's lab (@fz-juelich.de). ktran.de
ktran.de
Our preprint has been accepted at #NeurIPS2025! 🎉

I will be presenting TMCL in just two weeks at the #BernsteinConference. Hope to see some of you there! @bernsteinneuro.bsky.social

Many thanks to my advisor Willem Wybo, and to Emre Neftci for the great support.
ktran.de
New #NeuroAI preprint on #ContinualLearning!

Continual learning methods struggle in mostly unsupervised environments with only sparse labels (e.g., a parent occasionally telling their child that an object is an 'apple').
We propose that in the cortex, predictive coding of high-level top-down modulations solves this! (1/6)
Reposted by Viet Anh Khoa Tran
ktran.de
This research opens up an exciting possibility: predictive coding as a fundamental cortical learning mechanism, guided by area-specific modulations that act as high-level control over the learning process. (5/6)
ktran.de
Furthermore, we can dynamically adjust the stability-plasticity trade-off by adapting the strength of the modulation invariance term. (4/6)
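A minimal sketch of that knob, under my own assumptions about how the losses are combined (a weighted sum of the view-invariance and modulation-invariance terms; the names and schedule below are illustrative, not the paper's recipe):

```python
# Hedged sketch: the stability-plasticity trade-off as a single scalar weight beta
# on the modulation-invariance (consolidation) term. Names/schedule are assumptions.
def total_loss(view_invariance_loss: float, modulation_invariance_loss: float,
               beta: float) -> float:
    # Larger beta -> stronger consolidation of the top-down modulations (more stability);
    # smaller beta -> feedforward weights track new data more freely (more plasticity).
    return view_invariance_loss + beta * modulation_invariance_loss

# Example: ramp beta up as more classes have been consolidated (hypothetical schedule).
num_consolidated_classes = 5
beta = min(1.0, 0.1 * num_consolidated_classes)
print(total_loss(0.8, 0.3, beta))
```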
ktran.de
Key finding: with only 1% of labels, our method outperforms comparable continual learning algorithms both on the continual task itself and when transferred to other tasks.
We therefore continually learn generalizable representations, unlike conventional class-collapsing methods (e.g. cross-entropy). (3/6)
ktran.de
Feedforward weights learn via view-invariant self-supervised learning, mimicking predictive coding. Top-down class modulations, informed by new labels, orthogonalize same-class representations. These are then consolidated into the feedforward pathway through modulation invariance. (2/6)
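To make the mechanism concrete, here is a hedged, self-contained sketch of one training step as I read it (not the authors' code; the encoder, the per-class modulation table, the sigmoid gating, and the loss weights are all my assumptions):

```python
# Hedged sketch (PyTorch): view-invariant self-supervised learning plus
# top-down class modulations consolidated via a modulation-invariance term.
# All module names, shapes, and weights are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_classes, in_dim, feat_dim, batch = 10, 128, 64, 32

encoder = torch.nn.Sequential(                       # feedforward pathway
    torch.nn.Linear(in_dim, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, feat_dim))
class_mod = torch.nn.Embedding(n_classes, feat_dim)  # one top-down modulation per class
opt = torch.optim.Adam(list(encoder.parameters()) + list(class_mod.parameters()), lr=1e-3)

def invariance(a, b):
    # Pull two representations together (used for both view- and modulation-invariance).
    return 1.0 - F.cosine_similarity(a, b, dim=-1).mean()

# Two augmented views of the same inputs; labels are sparse (-1 = unlabeled).
x1, x2 = torch.randn(batch, in_dim), torch.randn(batch, in_dim)
labels = torch.full((batch,), -1)
labels[:3] = torch.tensor([0, 1, 0])   # only a few labeled samples

z1, z2 = encoder(x1), encoder(x2)

# (i) View-invariant self-supervised term shapes the feedforward weights.
loss_view = invariance(z1, z2)

# (ii) For labeled samples, apply the class-specific top-down modulation and
#      consolidate it into the feedforward pathway via modulation invariance.
#      (The modulated target is detached here; how the modulations themselves are
#      trained from the labels is not sketched.)
mask = labels >= 0
loss_mod = torch.tensor(0.0)
if mask.any():
    z_lab = z1[mask]
    z_modulated = z_lab * torch.sigmoid(class_mod(labels[mask]))
    loss_mod = invariance(z_lab, z_modulated.detach())

beta = 1.0                              # strength of the modulation-invariance term
loss = loss_view + beta * loss_mod
opt.zero_grad(); loss.backward(); opt.step()
print(f"view-invariance={loss_view.item():.3f}  modulation-invariance={loss_mod.item():.3f}")
```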