David Haydock
@davidghaydock.bsky.social
42 followers 220 following 20 posts
Post-doc doing Neuroimaging @ucl.ac.uk Interested in Neurophenomenology, and how we can develop analysis methods that benefit it https://linktr.ee/davidghaydock
Reposted by David Haydock
davidghaydock.bsky.social
Our review provides a roadmap for researchers working on EEG microstate syntax. We hope to make results more comparable and useful for future studies, and call on researchers in the field to associate microstates with a continuous signal.
davidghaydock.bsky.social
Beyond this, we point out that existing methods which try to associate EEG microstates with fMRI patterns make sweeping assumptions when using GLMs, averaging the EEG time series to a single value per TR and heavily simplifying the EEG signal.
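For concreteness, a hedged illustration of the kind of per-TR reduction the post criticises (names and parameters are assumptions, not any specific study's code): microstate coverage inside each fMRI TR is collapsed to a single number per map before being used as a GLM regressor.

```python
import numpy as np

def per_tr_coverage(labels, sfreq, tr, k):
    """Fraction of each fMRI TR spent in each microstate.

    labels: per-sample microstate labels; sfreq: EEG sampling rate in Hz;
    tr: repetition time in seconds; k: number of maps. All assumed inputs.
    """
    labels = np.asarray(labels)
    samples_per_tr = int(round(sfreq * tr))
    n_tr = len(labels) // samples_per_tr
    cov = np.zeros((n_tr, k))
    for t in range(n_tr):
        seg = labels[t * samples_per_tr:(t + 1) * samples_per_tr]
        for m in range(k):
            cov[t, m] = np.mean(seg == m)    # one value per map per TR
    return cov                               # rich within-TR dynamics are discarded
```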
davidghaydock.bsky.social
We argue that this common criticism can be investigated without throwing away microstate analysis by studying microstates in a continuous space (such as a t-SNE embedding, or similar). Note the information lost in the microstate representation!
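One way to read "studying microstates in a continuous space", sketched with assumed inputs rather than the paper's actual method: embed the GFP-peak topographies with t-SNE and inspect where the hard cluster labels fall in the embedding, which makes visible the structure the symbol sequence throws away.

```python
from sklearn.manifold import TSNE

def embed_topographies(topos, perplexity=30, random_state=0):
    """Project EEG topographies into a 2-D continuous space.

    topos: (n_peaks, n_channels) array of GFP-peak topographies (assumed input).
    Returns (n_peaks, 2) coordinates that can be coloured by microstate label.
    """
    return TSNE(n_components=2, perplexity=perplexity,
                random_state=random_state).fit_transform(topos)
```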
davidghaydock.bsky.social
A key issue highlighted (among others) is how microstate sequences are generated in the first place: a “winner-takes-all” approach, in which the complexity of the continuous EEG signal is reduced to a sequence of symbols.
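A minimal sketch of the "winner-takes-all" step described above, under assumed array layouts: each sample is assigned to whichever map has the highest absolute spatial correlation, so the continuous signal becomes a symbol sequence.

```python
import numpy as np

def label_sequence(eeg, maps):
    """Winner-takes-all labelling: one microstate symbol per EEG sample.

    eeg: (n_channels, n_samples); maps: (k, n_channels). Assumed layouts.
    """
    X = eeg - eeg.mean(axis=0)                     # re-reference each sample to its channel mean
    M = maps - maps.mean(axis=1, keepdims=True)
    corr = M @ X / (np.linalg.norm(M, axis=1)[:, None] * np.linalg.norm(X, axis=0))
    return np.argmax(np.abs(corr), axis=0)         # hard labels; everything else is discarded
```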
davidghaydock.bsky.social
In our new review, we organise existing methods into clear categories and define how different studies construct microstates, define microstate sequences, and how they go about investigating a sequence once they have it.
davidghaydock.bsky.social
Studies on microstate syntax use a lot of different methods, and don’t always use the same process for defining the microstate sequence. Different terms are used for the same concepts, and documentation of the specifics of preprocessing and analysis steps can be lacking.
davidghaydock.bsky.social
Ever heard of EEG microstates? They are usually defined as cluster centres of EEG topographies, and the dynamics of microstate sequences are referred to as their "syntax". Our new review discusses syntax methods and shows how they could be better associated with the underlying EEG signal: 🧵👇 www.sciencedirect.com/science/arti...
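A hedged, minimal sketch (not from the paper) of the definition in the post above: microstate maps as cluster centres of EEG topographies taken at global field power (GFP) peaks. Plain k-means stands in for the polarity-invariant "modified k-means" most published pipelines use, and the array shapes and k=4 are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

def microstate_maps(eeg, k=4, random_state=0):
    """Cluster GFP-peak topographies into k microstate maps.

    eeg: (n_channels, n_samples) array (an assumed layout, not from the paper).
    """
    gfp = eeg.std(axis=0)                # global field power at each sample
    peaks, _ = find_peaks(gfp)           # keep topographies at GFP peaks only
    topos = eeg[:, peaks].T              # (n_peaks, n_channels)
    # Plain k-means for brevity; real pipelines are usually polarity-invariant.
    km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(topos)
    return km.cluster_centers_           # (k, n_channels) microstate maps
```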
davidghaydock.bsky.social
Further than that actually - mind and world, perception and world, are just as inseparable.
davidghaydock.bsky.social
Our perception and the world aren't two separate things where one is guessing about the other - they're unified aspects of a single lived experience that co-emerge and define each other.
davidghaydock.bsky.social
No disrespect but saying that perception of the world is a brain-based best guess is like saying that a wave is the ocean's best guess about what water should do. Brain and world aren't separate things making predictions about each other - they're inseparable aspects of the same dynamic process.
davidghaydock.bsky.social
If you'd written an original piece on the matter and given your own input and opinions, there would have been a healthy discussion of the subject.

Instead you've just created an argument about fair use and consent. Both of which you seem to be on the wrong side of.
davidghaydock.bsky.social
This isn't audience reaction, it's *author* reaction. That discrepancy is why you can't act as if use of LLMs is the same as writing an original piece.

In any other context where you wrote about the paper, the content of the paper is what would be discussed, which is what journalism should be for.
davidghaydock.bsky.social
It's one thing to summarise it for yourself with a language model to get the gist of an article, but it's another thing entirely to re-present the article publicly using that summary.
davidghaydock.bsky.social
It makes more sense to me to identify the parts you'd want to run yourself. Relying on a language model's summary instead of taking the time to absorb the source material means you will inevitably miss the finer detail. It's like reading the abstract of a paper and thinking you understand the whole thing.
Reposted by David Haydock
lune-bellec.bsky.social
Picking a journal to publish our work sometimes felt like a headache. High fees and private, for-profit governance remain the (terrifying) norm. But this is changing: Aperture is a recent neuroimaging journal that is community-driven, low cost, and strives for high quality. See ⬇️
apertureohbm.bsky.social
⏱️ Speed up your path to publication!

At Aperture Neuro:
📜 Submission ➡️ Final decision: 100 days (median)
📚 Submission ➡️ Publication: 144 days (median)

🌟We are open access, low cost, and welcome innovative research formats.

🔗 Submit today and make an impact faster: apertureneuro.org/about
davidghaydock.bsky.social
Open science includes cool animations then
davidghaydock.bsky.social
Also, this needs some Dorian Concept as background music
davidghaydock.bsky.social
You need to put a watermark on these before someone nicks them and starts using them in TikToks that have nothing to do with brain science
leechbrain.bsky.social
Reposting from Twitter #5
Reposted by David Haydock
micahgallen.com
IMO the big breakthroughs won’t just come from advancing temporal and spatial resolution, but from achieving naturalistic recordings during dynamic, embodied interactions. Until we can move away from highly restrictive and overly reductive passive stimulation we won’t fully grasp mechanism.
dickretired.bsky.social
Challenge: does fMRI have a future (apart from studies of development and ageing)? We want to know HOW the brain works, and for that we need millisecond temporal resolution: Neuropixels, MEG, OPMs. After nearly 30 years of fMRI we know basically WHERE things happen.
Reposted by David Haydock
leechbrain.bsky.social
Brain video 25b: #blender #blender
Fight the scanner: part 2