Johannes Fahrenfort
@fahrenfort.bsky.social
1.6K followers 940 following 390 posts
Assistant Prof at VU Amsterdam. Neuroscience of consciousness, decision making. Computational modeling. Pet method: EEG. Critical of subjective measures. Co-PI in the http://consciousbrainlab.com with @svangaal.bsky.social and @timostein.bsky.social.
fahrenfort.bsky.social
There is no eye tracking in this paper, just pupil size, and the code is specifically about how to provide the biofeedback on pupil size. This is not difficult to set up once you have an eye tracker. I don't think they have "built" an eye tracker, but OK. Also, where are the analysis scripts?
fahrenfort.bsky.social
Even if this is true, I think it's stupid. It's not a high-tech thing to provide feedback on pupil size, everybody can do this. The work is not the code here, the work is doing something useful/marketable with it. No mention of the analysis scripts either.
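To illustrate the point that providing feedback on pupil size is simple once a pupil signal is available, here is a minimal Python sketch of a generic biofeedback loop. This is not the ETH Zurich algorithm from the paper; read_pupil_diameter() and show_feedback() are hypothetical placeholders for an eye tracker API and a display routine, and the baseline window and scaling are assumptions.

import random
import time
from collections import deque

def read_pupil_diameter():
    """Hypothetical placeholder: replace with your eye tracker's pupil-size call.
    Here we just simulate a pupil diameter of ~4 mm with some noise."""
    return 4.0 + random.gauss(0, 0.2)

def show_feedback(value):
    """Hypothetical placeholder: draw a feedback bar; in a real experiment this
    would update an on-screen stimulus instead of printing to the console."""
    print("feedback:", "#" * int(value * 40))

baseline = deque(maxlen=300)      # rolling ~5 s baseline at an assumed 60 Hz rate
for _ in range(600):              # ~10 s demo loop
    d = read_pupil_diameter()
    baseline.append(d)
    mean = sum(baseline) / len(baseline)
    # map deviation from the rolling baseline onto a 0-1 feedback value (+/-20% range, clipped)
    value = min(max((d - mean) / (0.2 * mean) + 0.5, 0.0), 1.0)
    show_feedback(value)
    time.sleep(1 / 60)

In practice the feedback value would drive a visual or auditory stimulus rather than a console bar, but the loop structure stays the same.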
fahrenfort.bsky.social
Also, where is the analysis code? Surely there is a lot more to this paper than just the biofeedback algorithm?
fahrenfort.bsky.social
Who thinks this is an acceptable statement about Code Availability given the move towards Open Science? Are you out of your mind @ethz.ch? Today is 2025, not 2005. I'm also surprised that @natcomms.nature.com accepts such a statement. It is ridiculous really. Paper here: doi.org/10.1038/s414...
Code availability: The code of the pupil-based biofeedback algorithm cannot be made publicly available since it is proprietary software of ETH Zurich and cannot be shared beyond the detailed description of the algorithm given in the methods section. However, researchers interested in verifying and reproducing our results can do so on location in a secured environment at the Neural Control of Movement Laboratory, ETH Zurich, upon signing a confidentiality agreement.
Reposted by Johannes Fahrenfort
dieworkwear.bsky.social
australian street style, 1973
fahrenfort.bsky.social
That would be a positive outcome
fahrenfort.bsky.social
Same. This was actually quite good.
fahrenfort.bsky.social
Just sign the damn petition Edwin!
fahrenfort.bsky.social
It's looking increasingly less likely that AI will take over the world and replace humanity in the process. Maybe GPT-6?
fahrenfort.bsky.social
The EU wants to ban words like "burger" for veggie burgers. What are we supposed to call it then? Veggie thing formerly known as burger? Go do something useful with your time, idiots; this is why the UK left. Please sign the petition and stop the meat lobby.
weplanet.yourmovement.org/p/noconfusio...
Stop the EU’s Ban on “Meaty” Words for Plant-Based Foods
weplanet.yourmovement.org
fahrenfort.bsky.social
Sure, but I've also seen cases like this in published work, so still a bit tricky. But I agree it's understandable in pre-print stage.
Reposted by Johannes Fahrenfort
matthieu-mx.bsky.social
1/ Why are we so easily distracted? 🧠 In our new EEG preprint w/ Henry Jones, @monicarosenb.bsky.social and @edvogel.bsky.social we show that distractibility is associated w/ reduced neural connectivity — and can be predicted from EEG with ~80% accuracy using machine learning.
fahrenfort.bsky.social
This is a dumb take. Students already know how to use AI. Regardless, we need to teach them *about* AI, and they need to learn fundamental skills (find sources, reason, write, program) *without* AI. Only once such core skills are acquired can they even evaluate AI output.
matt94250.bsky.social
If you don’t teach your students how to use AI, you’re doing them a huge disservice because they won’t have jobs in the future.
Reposted by Johannes Fahrenfort
tedmccormick.bsky.social
If you don’t teach your students your subject, you’re not doing your job.

If you teach your students to ask AI first, you’re ensuring they’ll never be *needed* for any job.

You’re also guaranteeing that knowledge of your subject slowly dies.

Asking ChatGPT is gaining neither knowledge nor skills.
matt94250.bsky.social
If you don’t teach your students how to use AI, you’re doing them a huge disservice because they won’t have jobs in the future.
fahrenfort.bsky.social
I'm most surprised by the fact that this 55% doesn't think he's already back, this time wearing a red baseball cap?
Reposted by Johannes Fahrenfort
mariusz.cc
Another example of my favorite recent quote I saw somewhere: “the dumbest person you know is being told by AI they’re absolutely right”
Reposted by Johannes Fahrenfort
studenova.bsky.social
Simulations are fun! Especially with the right tools😉.
@willenjoy.bsky.social and I (with support from Mina Jamshidi) made a toolbox for simulating EEG/MEG data
meegsim.readthedocs.io
I put together a quick simulation using it for this short clip. Took me 10 minutes (no, really!)
#brainmovies
fahrenfort.bsky.social
This all sounds great, but I think it starts with requiring that people power their studies to detect null effects. Pick a SESOI and do a power analysis for a TOST rather than NHST. Publication will be much easier for such results whichever way the chips fall. Or go Bayes. (Sketch below.)
dx.plos.org/10.1371/jour...
Ending publication bias: A values-based approach to surface null and negative results
Sharing knowledge is a fundamental principle within the scientific community, yet null and negative results are still being underreported. This Consensus View discusses the problem of such publication...
dx.plos.org
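As a rough illustration of the TOST power analysis suggested above, here is a minimal simulation-based sketch in Python. It assumes a two-sample design with a true effect of zero and a SESOI expressed in Cohen's d units; the function name tost_power and the specific SESOI, alpha, and sample sizes are illustrative, not taken from the linked paper.

import numpy as np
from scipy import stats

def tost_power(n_per_group, sesoi_d=0.3, alpha=0.05, n_sims=5000, seed=None):
    """Estimate power to declare equivalence (|effect| < SESOI) when the true effect is zero."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        a = rng.standard_normal(n_per_group)   # group 1: true mean 0, sd 1
        b = rng.standard_normal(n_per_group)   # group 2: true mean 0, sd 1
        diff = a.mean() - b.mean()
        df = 2 * n_per_group - 2
        sp = np.sqrt(((n_per_group - 1) * a.var(ddof=1) + (n_per_group - 1) * b.var(ddof=1)) / df)
        se = sp * np.sqrt(2 / n_per_group)
        # two one-sided t-tests against the equivalence bounds -SESOI and +SESOI
        p_lower = 1 - stats.t.cdf((diff + sesoi_d) / se, df)   # H0: diff <= -SESOI
        p_upper = stats.t.cdf((diff - sesoi_d) / se, df)       # H0: diff >= +SESOI
        if max(p_lower, p_upper) < alpha:                      # both must reject
            hits += 1
    return hits / n_sims

# crude sample-size search: print estimated equivalence power at a few candidate n's
for n in (50, 100, 150, 200):
    print(n, round(tost_power(n, seed=0), 3))

The same simulation can be rerun with the SESOI and alpha your field considers defensible; the point is simply that the sample size is chosen to make the null-supporting outcome publishable, not just the effect-detecting one.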