Adeel Razi
@adeelrazi.bsky.social
2.7K followers 1.4K following 86 posts
Computational Neuroscientist, NeuroAI, Causality. Monash, UCL, CIFAR. Lab: https://comp-neuro.github.io/
Pinned
adeelrazi.bsky.social
🧵 How do psychedelics shape brain activity?

Our new paper presents the largest neuroimaging study of psilocybin to date—revealing how context structures psychedelic brain states.

Title: Psychedelics Align Brain Activity with Context
Paper: lnkd.in/gt-kMR6d

A thread 🧵👇 1/n
adeelrazi.bsky.social
This is BIG. Nobody was expecting this treatment to work, or even that treating HD would ever be possible!
adeelrazi.bsky.social
Congratulations and looking forward to seeing what you do there!
adeelrazi.bsky.social
We are hiring for our psilocybin clinical trial.

1. Research Officer (testing and screening), 2 positions
careers.pageuppeople.com/513/cw/en/jo...

2. Research Officer (Peer Navigator)
careers.pageuppeople.com/513/cw/en/jo...

DM if you have questions.

@wellcomeleap.bsky.social
adeelrazi.bsky.social
Thank you very much @prelights.bsky.social and uMontreal Neuro team for choosing and covering our preprint.
prelights.bsky.social
Psilocybin restructures brain activity depending on the sensory context in which it is taken

Read this new #preLight by uMontreal Neuro preLighters Loïk Holdrinet, Nour Eltaani and Emma Clini talking about the #preprint of Devon Stoliker, @adeelrazi.bsky.social and the team.
Psychedelics Align Brain Activity with Context - preLights
Psilocybin restructures brain activity depending on the sensory context in which it is taken.
prelights.biologists.com
Reposted by Adeel Razi
misicbata.bsky.social
Integrating and interpreting brain maps | doi.org/10.1016/j.ti...

Imaging and recording technologies make it possible to map multiple biological features of the brain. How can these features be conceptually integrated into a coherent understanding of brain structure and function? ⤵️
adeelrazi.bsky.social
We are hiring for our brand new psilocybin clinical trial. Please DM if you have questions!

1. Senior Research Fellow (Clinical Trial Lead)

careers.pageuppeople.com/513/cw/en/jo...

2. Senior Clinical Psychotherapist

careers.pageuppeople.com/513/cw/en/jo...
adeelrazi.bsky.social
Very much looking forward to this @ohbmofficial.bsky.social
ohbmofficial.bsky.social
⭐ Speaker Spotlight: Adeel Razi
🗓️ June 25 | 🕥 10:30 – 11:15
We’re excited to feature Adeel Razi at OHBM 2025.
More information about Adeel in comments!
#OHBM2025
Reposted by Adeel Razi
stuartoldham.bsky.social
Do you like brain network hubs?🧠🌐✳️Do you like genes?🧬What about neurodevelopment?👶What if I told you the latest work by @garedaba.bsky.social and myself combined all of these?🤯🤯🤯

See Gareth's thread for a primer of our findings, then read the paper for the details!
www.biorxiv.org/content/10.1...
Reposted by Adeel Razi
anayebi.bsky.social
Can a Universal Basic Income (UBI) become feasible—even if AI fully automates existing jobs and creates no new ones?

We derive a closed-form UBI threshold tied to AI capabilities that suggests it's potentially achievable by mid-century even under moderate AI growth assumptions:
adeelrazi.bsky.social
Re batchnorm: it's effective in many settings, but can be brittle in others, like when used with small batch sizes, non-i.i.d. data, or models with stochasticity in the forward pass. In these cases, the running estimates of mean/variance can drift or misalign with test-time behaviour.

2/2
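To see the drift concretely, here is a toy numpy sketch (illustrative only, not from the thread) of batch-norm-style running statistics computed as an exponential moving average over tiny, non-i.i.d. batches:

```python
# Toy illustration (assumption: a simple EMA of batch stats, batch-norm style),
# showing how running mean/variance can drift with tiny, non-i.i.d. batches.
import numpy as np

rng = np.random.default_rng(0)
activations = rng.normal(loc=2.0, scale=3.0, size=10_000)  # "true" activation stream

def running_stats(stream, batch_size, momentum=0.1):
    mean, var = 0.0, 1.0  # typical batch-norm initialisation
    for i in range(0, len(stream), batch_size):
        batch = stream[i:i + batch_size]
        mean = (1 - momentum) * mean + momentum * batch.mean()
        var = (1 - momentum) * var + momentum * batch.var()
    return mean, var

print("true stats          :", activations.mean(), activations.var())
print("i.i.d. tiny batches :", running_stats(activations, batch_size=4))
print("sorted (non-i.i.d.) :", running_stats(np.sort(activations), batch_size=4))
```

With i.i.d. batches the running estimates hover near the true statistics; with sorted (non-i.i.d.) batches the EMA is dominated by the last few batches and both estimates drift badly away from the stats seen at test time.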
adeelrazi.bsky.social
Yes, absolutely, "noisy" was shorthand & it does depend on the surrogate. What I meant is that common surrogates can have high gradient variance, especially when their outputs saturate. That variance can hurt learning, particularly in deeper networks or those with binary/stochastic activations.
1/2
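As an illustration of the saturation point, assuming a sigmoid surrogate (a common choice, not necessarily the one being discussed): steeper surrogates concentrate all of the gradient near the threshold and give essentially zero elsewhere, which shows up as high gradient variance.

```python
# Hypothetical illustration: gradient of a sigmoid surrogate for a hard threshold.
# Steep surrogates give large gradients near the threshold and ~0 elsewhere,
# i.e. higher gradient variance across units.
import numpy as np

def surrogate_grad(x, beta):
    """d/dx sigmoid(beta * x), used in place of the (zero a.e.) step derivative."""
    s = 1.0 / (1.0 + np.exp(-beta * x))
    return beta * s * (1.0 - s)

pre_activations = np.linspace(-3, 3, 7)
for beta in (1.0, 10.0):
    grads = surrogate_grad(pre_activations, beta)
    print(f"beta={beta:4.1f}  grads={np.round(grads, 3)}  variance={grads.var():.3f}")
```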
adeelrazi.bsky.social
of course, whenever you could!
adeelrazi.bsky.social
Why does KL divergence show up everywhere in machine learning?

Because it's more than a way to compare distributions: it's the cost of believing your own model too much.

Minimizing KL = reducing surprise = optimizing variational free energy.

A silent principle behind robust inference.

5/6
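For reference, the standard identity behind "minimizing KL = reducing surprise = optimizing variational free energy", in textbook notation (not necessarily the paper's):

```latex
% Standard variational identity: minimising the free energy F tightens a
% bound on surprise, -log p(o).
\begin{align}
D_{\mathrm{KL}}\!\left[q(z)\,\|\,p(z)\right]
  &= \mathbb{E}_{q(z)}\!\left[\log q(z) - \log p(z)\right] \\
F &= \mathbb{E}_{q(z)}\!\left[\log q(z) - \log p(o, z)\right]
   = D_{\mathrm{KL}}\!\left[q(z)\,\|\,p(z \mid o)\right] - \log p(o)
   \;\ge\; -\log p(o)
\end{align}
```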
adeelrazi.bsky.social
Our key innovation:

- A family of importance-weighted straight-through estimators (IW-ST) that unifies and generalizes previous methods.
- No need for backprop-through-noise tricks.
- No batch norm.

Just clean, effective training.

4/6
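For context, the vanilla straight-through estimator that the post says IW-ST unifies and generalizes looks like this (standard textbook version in PyTorch, not the paper's importance-weighted variant):

```python
# Plain straight-through (ST) estimator: hard binarisation forward,
# identity gradient backward. The baseline that IW-ST generalises.
import torch

class StraightThrough(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Hard binarisation in the forward pass (non-differentiable step).
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Pretend the step was the identity: pass the gradient straight through.
        return grad_output

x = torch.randn(5, requires_grad=True)
y = StraightThrough.apply(x)
y.sum().backward()
print(x.grad)  # all ones: the gradient ignored the non-differentiable step
```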
adeelrazi.bsky.social
We view training as Bayesian inference, minimizing KL divergence between a posterior and an amortized prior.

This lets us derive a principled loss from first principles—grounded in variational free energy, not heuristics.

3/6
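The generic decomposition behind such a variational loss, shown here only in its standard form (the paper's exact objective may differ):

```latex
% Generic variational free-energy decomposition: complexity (KL to the prior)
% minus accuracy (expected log-likelihood). Standard form, not the paper's.
\begin{equation}
F = \underbrace{D_{\mathrm{KL}}\!\left[q_\phi(z \mid o)\,\|\,p(z)\right]}_{\text{complexity}}
  \;-\; \underbrace{\mathbb{E}_{q_\phi(z \mid o)}\!\left[\log p(o \mid z)\right]}_{\text{accuracy}}
\end{equation}
```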
adeelrazi.bsky.social
Binary/spiking neural networks are efficient and brain-inspired—but notoriously difficult to train.

Why? Discrete activations → non-differentiable.

Most current methods either approximate gradients or add noisy surrogates.

We do something different.

2/6
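A toy illustration of the non-differentiability (not from the paper): a finite-difference "gradient" through a hard threshold is zero almost everywhere, so no learning signal reaches the weights below it.

```python
# Why discrete activations break gradient-based training: the step function's
# derivative is zero almost everywhere, so perturbing the weight slightly
# never changes the loss (toy numpy illustration).
import numpy as np

def step(x):
    # Hard spike / binary activation: 1 if the input crosses threshold, else 0.
    return (np.asarray(x) > 0).astype(float)

def loss(w, x=1.5, target=0.0):
    return float((step(w * x) - target) ** 2)

w, eps = 0.3, 1e-4
finite_diff_grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)
print("loss:", loss(w), " finite-difference grad:", finite_diff_grad)
# loss is 1.0, but the estimated gradient is 0.0 -- no signal to update w
```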
adeelrazi.bsky.social
If brains infer control by predicting their own actions,
should future AI do the same?

Instead of optimizing over actions,
let’s build agents that explain their sensations.

Intelligence may not be about control—but coherence.

#AgencyByInference
adeelrazi.bsky.social
Maybe intelligence isn’t about maximizing reward…
but minimizing surprise in a world we predictively model.

What if agency is not learned—but inferred?
adeelrazi.bsky.social
Why do brains rely on inference, uncertainty, and structure…

while AI systems chase rewards in unstructured worlds?

Are we missing something fundamental about how intelligence emerges?

#NeuroAI #InferenceOverOptimization
adeelrazi.bsky.social
Yes, but he told us how to quantify it. This was the game changer 😀