Faruk Gulban
@ofgulban.bsky.social
390 followers 310 following 37 posts
High Resolution Magnetic Resonance Imaging | Github → http://github.com/ofgulban Youtube → http://youtube.com/@ofgulban Blog → http://thingsonthings.org Art → http://behance.net/ofgulban
Pinned
ofgulban.bsky.social
I've always wanted to explore fMRI time series as a smooth, real-time movie. Now I can with LayNii IDA. Plus, with voxel-wise correlations on the fly.

Fast, intuitive, and surprisingly insightful.

40 ms TR fMRI data from @practicalfmri.bsky.social !

@layerfmri.bsky.social @afni-pt.bsky.social
Reposted by Faruk Gulban
mrprudence.bsky.social
'In 1953, while working a hotel switchboard, a college graduate named Shea Zellweger began a journey of wonder and obsession that would eventually lead to the invention of a radically new notation for logic' @cabinetmagazine.bsky.social

www.cabinetmagazine.org/issues/18/we...
Reposted by Faruk Gulban
mrprudence.bsky.social
Mapping the dynamics of phyllotaxis in Palms.

Illustration from Carl Friedrich Philipp von Martius’s Historia naturalis palmarum, issued in 10 parts, 1823-50
ofgulban.bsky.social
More on the blood motion artifact, and capturing it on 7 T MRI, see my 2022 paper: doi.org/10.1016/j.ne...

For brevity, I call this the "blood motion artifact," but in the past it was called "spatial misregistration of the vascular flow." See Larson et al., 1990, doi.org/10.2214/ajr....
ofgulban.bsky.social
One reason I developed LayNii IDA was to more easily explore my 0.35 mm multi-echo human brain data. Here I'm observing blood motion artifacts across echoes. The arterial signal *appears* to move across several millimeters. It is best captured with short readout windows (e.g., ~3 ms readout windows in GRE).
Reposted by Faruk Gulban
mrprudence.bsky.social
The most primal generative experiences may be ones created by the visual cortex alone, or at least those involving the visual cortex in close collaboration with entheogenic triggers or light stimulation.

Subjective Visual Phenomena – Johann Purkinje, 1819
Reposted by Faruk Gulban
sitek.bsky.social
One more: Marshall Xu is presenting our latest updates in mapping brainstem vasculature at poster #1759, where he did some nifty image transformations + VesselBoost segmentation to get our best results yet. Take a look!
@sbollmann.bsky.social @ofgulban.bsky.social
ww6.aievolution.com/hbm2501/Abst...
ofgulban.bsky.social
Very cool! Looking forward to the blog post and the data.
ofgulban.bsky.social
Cool, would you mind sharing this data? I would like to explore the voxel-wise correlations in it, if possible (using LayNii IDA: youtu.be/ZFsBljNOcyw?...)
LayNii IDA Devlog #001 – A Meso-MRI GUI Is Born (Sort Of)
YouTube video by ofgulban
youtu.be
ofgulban.bsky.social
Demonstrating anatomical–functional data registration of 11.7 T (!) partial-coverage human fMRI data at 0.7 × 0.7 × 0.8 mm resolution to help my colleague Alejandro Monreal-Madrigal, who did great work with this spiral readout acquisition. The data quality looks quite good 👏

youtu.be/Cgf8i-Lqrac
Let's Analyze E006 - Register 11.7 T human fMRI data
YouTube video by ofgulban
youtu.be
ofgulban.bsky.social
Devlog #1 for LayNii IDA, a GUI for meso-(f)MRI data

Featuring whole human brain MRI datasets at:
• 0.8 mm functional data
• 0.35 mm in vivo multi-echo anatomical data
• 0.075 mm ex vivo anatomical data

Chronicling the development journey of neuroimaging software.

youtu.be/ZFsBljNOcyw?...
LayNii IDA Devlog #001 – A Meso-MRI GUI Is Born (Sort Of)
YouTube video by ofgulban
youtu.be
Reposted by Faruk Gulban
xkcd-titletext.bsky.social
Title text: "If you think curiosity without rigor is bad, you should see rigor without curiosity."

Alt text: https://explainxkcd.com/3101#Transcript
ofgulban.bsky.social
Grateful to these friends and collaborators who show up in the personal stories behind this post. Science communication isn’t just about clarity, it’s about connection: @sitek.bsky.social @k4tj4.bsky.social @r3rt0.bsky.social @mholla.bsky.social @layerfmri.bsky.social
ofgulban.bsky.social
In this blog post I am exploring the strange tension between rigor, reach, and recognition in modern science.

What we gain (and lose) by just publishing the PDFs:
thingsonthings.org/just-publish...
Throughout history, people have shared ideas through the dominant communication tools of their time: cave paintings, handwritten letters, printed pamphlets, digital journals, blogs... What once seemed informal became formal. What once didn't "count" eventually did. Today's citation-worthy PDF may be tomorrow's quaint artifact.
ofgulban.bsky.social
The real-time voxel-wise correlations are inspired by AFNI's InstaCorr and BrainVoyager's time-course interaction interface, while being a completely independent implementation of the technique.

@layerfmri.bsky.social @afni-pt.bsky.social @bobcoxauthor.bsky.social @rainergoebel.bsky.social
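(For the curious: this is not LayNii IDA's actual source, just a minimal NumPy sketch of the underlying idea — seed-based voxel-wise correlation, where every voxel's time course is correlated against a clicked seed voxel's time course. The function name and array layout are my own assumptions for illustration.)

```python
import numpy as np

def seed_correlation(data, seed_idx):
    """Correlate every voxel's time course with one seed voxel's time course.

    data     : array of shape (n_voxels, n_timepoints)
    seed_idx : integer index of the seed voxel
    returns  : array of shape (n_voxels,) with Pearson r per voxel
    """
    seed = data[seed_idx]
    # Demean along time so the dot products below become covariances
    data_c = data - data.mean(axis=1, keepdims=True)
    seed_c = seed - seed.mean()
    num = data_c @ seed_c
    denom = np.sqrt((data_c ** 2).sum(axis=1) * (seed_c ** 2).sum())
    with np.errstate(invalid="ignore", divide="ignore"):
        r = num / denom
    # Flat (zero-variance) voxels yield NaN; map them to r = 0
    return np.nan_to_num(r)

# Example: 1000 voxels, 200 timepoints; the seed correlates with itself at r = 1
rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 200))
r_map = seed_correlation(data, seed_idx=42)
```

The single matrix–vector product is what makes this cheap enough to recompute on every mouse move, which is presumably how "on the fly" interaction stays smooth even for large volumes.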
ofgulban.bsky.social
New in LayNii v2.9.0: Introducing *LayNii IDA*, a high-performance tool for real-time interaction with ultra–high resolution MRI data. Built for speed, designed for discovery. Still early, but a big step toward the next era of 7T fMRI: higher resolution, larger datasets.

github.com/layerfMRI/La...
ofgulban.bsky.social
In neuroimaging, we do skull stripping, but not brain stripping. Feels like favoritism.
ofgulban.bsky.social
[Field Note, Entry 05]
- Refrain from repeating k-space experiment [Explosion_05]. What began as signal amplification now resembles an awakening. Frequencies folded in on themselves—revealing geometries that should not exist.
- Proceeding further may breach more than spatial coherence...
Reposted by Faruk Gulban
afni-pt.bsky.social
The result of a large (42 authors!) collaboration:
"Go Figure: Transparency in neuroscience images preserves context and clarifies interpretation"
arxiv.org/abs/2504.07824
TL;DR: The FMRI world can (and should) improve results interpretation and reproducibility *today*, via transparent thresholding.
ofgulban.bsky.social
I think ultra-high field MRI will be shifting toward ~0.35 mm voxels, even in routine use. This isn't just sharper images. We now directly see cortical layers, veins...

It's a leap, not a tweak. A few groups have already crossed this threshold, and now it's becoming practical.

More: doi.org/10.1101/2025...
Reposted by Faruk Gulban
layerfmri.bsky.social
New layer-fMRI paper discussing an omnipresent artifact in layer-fMRI EPI data: Fuzzy Ripples.
This artifact comes from short-term gradient imperfections and represents the biggest limitation of layer-fMRI acquisition (with respect to TRs, resolution, and lower brain areas).

doi.org/10.1002/mrm....