Ken Shirakawa
@kencan7749.bsky.social
16 followers 38 following 15 posts
Ph.D. candidate at Kyoto University and ATR / Brain decoding / fMRI / neuroAI / neuroscience
kencan7749.bsky.social
And here’s an experimental podcast-style summary of the paper, generated with NotebookLM under my direction!
Link: notebooklm.google.com/notebook/9c8...
kencan7749.bsky.social
This project wouldn’t have been possible without the support of all our lab members.
Huge thanks to co-authors, and especially to Prof. Kamitani ( @ykamit.bsky.social), for their invaluable support throughout this work!
kencan7749.bsky.social
Our paper goes further with formal analyses, including mathematical analysis, simulations, analysis of AI model representations, evaluation pitfalls, and meta-level insights into “realistic” reconstruction.

If this thread sparked your interest, please take a look at our paper!
kencan7749.bsky.social
So, how should we interpret these reconstruction methods? We argue they’re better understood as visualizations of decoded content, not true reconstructions.
Visualization itself also has value, but it’s crucial to recognize the huge gap between visualization and reconstruction.
kencan7749.bsky.social
Taken together, our results suggest recent diffusion-based reconstructions are a mix of classification into trained categories and hallucination by generative AIs.
This deviates fundamentally from genuine visual reconstruction, which aims to recover arbitrary visual experiences.
kencan7749.bsky.social
What about the Generator (diffusion model)?
We fed it true image features instead of predicted ones.
The outputs were semantically similar to the true images, but perceptually quite different.
It seems the Generator relies mainly on semantic features, with less focus on perceptual fidelity.
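For a concrete sense of this gap, here is a minimal sketch (my illustration, not the paper's code) that scores a generated image against the true stimulus in two ways: semantic agreement in CLIP embedding space versus low-level pixel correlation. The arrays are random placeholders standing in for real images and precomputed CLIP features.

```python
# Minimal sketch: "semantic vs. perceptual" similarity for one image pair.
# Assumes CLIP image embeddings were extracted elsewhere (e.g., openai/CLIP).
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_similarity(clip_true, clip_gen):
    # High-level ("semantic") agreement in CLIP embedding space.
    return cosine(clip_true, clip_gen)

def perceptual_similarity(img_true, img_gen):
    # Low-level ("perceptual") agreement: pixel-wise Pearson correlation.
    x, y = img_true.ravel().astype(float), img_gen.ravel().astype(float)
    return float(np.corrcoef(x, y)[0, 1])

# Placeholder data standing in for a real stimulus / generated image pair.
rng = np.random.default_rng(0)
img_true, img_gen = rng.random((224, 224, 3)), rng.random((224, 224, 3))
clip_true, clip_gen = rng.standard_normal(512), rng.standard_normal(512)

print("semantic:", semantic_similarity(clip_true, clip_gen))
print("perceptual:", perceptual_similarity(img_true, img_gen))
```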
kencan7749.bsky.social
Given the overlap between training/test sets, can the Translator predict test stimuli effectively?

Careful identification analyses revealed a fundamental limitation in generalizing beyond the training distribution.

The Translator, though a regressor, behaves more like a classifier.
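As a rough illustration of what an identification analysis looks like (a hypothetical sketch, not the paper's code): for each test sample, the decoded feature vector "identifies" the true feature if it correlates more strongly with it than with a distractor candidate.

```python
# Pairwise identification: fraction of distractors beaten, averaged over samples.
import numpy as np

def pairwise_identification(pred, true):
    """pred, true: (n_samples, n_features). Returns mean identification accuracy."""
    n = len(pred)
    # Correlation of every predicted feature vector with every candidate true one.
    corr = np.corrcoef(pred, true)[:n, n:]            # (n, n)
    correct = np.diag(corr)[:, None]                  # correlation with the true target
    wins = (correct > corr).sum(axis=1) / (n - 1)     # fraction of distractors beaten
    return float(wins.mean())

rng = np.random.default_rng(0)
true = rng.standard_normal((100, 512))
pred = true + 2.0 * rng.standard_normal((100, 512))   # noisy "Translator" output
print("identification accuracy:", pairwise_identification(pred, true))
```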
kencan7749.bsky.social
We first checked the Latent features. UMAP visualization of NSD’s CLIP features revealed (A):

- distinct clusters (~40)
- substantial overlap between training and test sets

NSD test images were also perceptually similar to training images (B), unlike in carefully curated Deeprecon (C).
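A minimal sketch of this kind of check, assuming CLIP features for the training/test images have already been extracted (random placeholders here) and umap-learn is installed:

```python
# Embed train/test CLIP features together with UMAP and eyeball the overlap.
import numpy as np
import umap
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
train_feats = rng.standard_normal((1000, 512))   # placeholder CLIP features (train)
test_feats = rng.standard_normal((100, 512))     # placeholder CLIP features (test)

reducer = umap.UMAP(n_components=2, random_state=0)
emb = reducer.fit_transform(np.vstack([train_feats, test_feats]))

plt.scatter(emb[:1000, 0], emb[:1000, 1], s=5, label="train")
plt.scatter(emb[1000:, 0], emb[1000:, 1], s=5, label="test")
plt.legend()
plt.title("UMAP of CLIP features (train/test overlap check)")
plt.show()
```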
kencan7749.bsky.social
To better understand what was happening, we decomposed these methods into a Translator–Generator pipeline.

The Translator maps brain activity to the Latent features, and the Generator converts those features into images.

We analyzed each component in detail.
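As a rough sketch of the decomposition (hypothetical shapes, not the paper's code), the Translator can be thought of as a regression from voxel patterns to latent features; the Generator (a diffusion model) would then turn those features into images:

```python
# "Translator" stage only: ridge regression from fMRI voxels to latent features.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_train = rng.standard_normal((800, 5000))   # voxel patterns (training trials)
Y_train = rng.standard_normal((800, 512))    # latent (e.g., CLIP) features of seen images
X_test = rng.standard_normal((100, 5000))    # voxel patterns (test trials)

translator = Ridge(alpha=100.0)
translator.fit(X_train, Y_train)
Y_pred = translator.predict(X_test)          # decoded latent features

# generator(Y_pred) -> images would be the second stage (not shown here).
print(Y_pred.shape)
```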
kencan7749.bsky.social
We tested whether these methods generalize beyond NSD.
They worked well on NSD (A), but performance dropped severely on Deeprecon (B).
The latest MindEye2 even generated training-set categories unrelated to test stimuli.
So what’s behind this generalization failure?
kencan7749.bsky.social
“Reconstruction” is often seen as recovering any instance from a space of interest.

Prior works (e.g., Miyawaki+ 2008, Shen+ 2019) pursued this goal.

Recent studies report realistic reconstructions from NSD using CLIP + diffusion models.

But—do they truly achieve this goal?
kencan7749.bsky.social
Our paper is now accepted at Neural Networks!

This work builds on our previous threads on X, updated with deeper analyses.

We revisit brain-to-image reconstruction using NSD + diffusion models—and ask: do they really reconstruct what we perceive?

Paper: doi.org/10.1016/j.ne...
🧵1/12
Reposted by Ken Shirakawa
arxiv-cs-cv.bsky.social
Yukiyasu Kamitani, Misato Tanaka, Ken Shirakawa
Visual Image Reconstruction from Brain Activity via Latent Representation
https://arxiv.org/abs/2505.08429
Reposted by Ken Shirakawa
martinhebart.bsky.social
One big issue with some of the previous claims is that NSD, the massive 7T fMRI dataset of 1000s of images, might not be the right dataset to test these hypotheses. The reason is that it is built on MS COCO and has too much similarity between training and test sets. arxiv.org/abs/2405.10078 16/n
kencan7749.bsky.social
I’m currently concerned about what the brain’s encoding model predicts. Given that the target brain states are collected under naturalistic conditions and the inputs to the encoding model are derived from a deep neural network, I am not sure what the predictions actually represent.
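To make the worry concrete, here is a minimal, hypothetical sketch of such an encoding model: a voxel-wise regression from DNN features of naturalistic stimuli to brain responses, scored by prediction correlation; it is the interpretation of that correlation that seems unclear.

```python
# Voxel-wise encoding model: DNN features -> brain responses, ridge regression.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
feats_train = rng.standard_normal((800, 1000))   # DNN features of training stimuli
vox_train = rng.standard_normal((800, 200))      # voxel responses (training)
feats_test = rng.standard_normal((100, 1000))    # DNN features of test stimuli
vox_test = rng.standard_normal((100, 200))       # voxel responses (test)

enc = Ridge(alpha=1.0).fit(feats_train, vox_train)
pred = enc.predict(feats_test)

# Per-voxel prediction accuracy (Pearson r): what exactly does a high r mean
# when both the stimuli and the features come from broad naturalistic sets?
r = [np.corrcoef(pred[:, v], vox_test[:, v])[0, 1] for v in range(vox_test.shape[1])]
print("mean prediction r:", float(np.mean(r)))
```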