Furl Lab
@furllab.bsky.social
240 followers · 610 following · 30 posts
We study the mind's mechanisms that give rise to decision making, social behaviour and face perception, guided by behavioural experiments, fMRI, EEG, computational models. Why do people make bad decisions? @rhulpsychology.bsky.social
Pinned
furllab.bsky.social
💡 When making choices—like picking a flat, a job, or a romantic partner—when should we stop looking and commit? Our new study from @rhulpsychology.bsky.social published in
Communications Psychology explores how biased expectations about future options shape our decisions. 🧵👇
Reposted by Furl Lab
kevinjkircher.com
Sometimes I think about how from 1935-1975ish, Bell Labs produced an insane amount of revolutionary science and technology, including 11 Nobel Prizes, the transistor, UNIX, C, the laser, the solar cell, information theory, etc. The secret? Provide scientists with ample, steady, no-strings funding.
sites.stat.columbia.edu
Reposted by Furl Lab
telescoper.bsky.social
A meme for the modern university...
Meme showing a worker labelled "academic staff" digging a hole in the ground while 10 others, labelled with management titles such as "Director of Human Resources", look on. The caption underneath reads "The only way we can cut costs is to reduce the number of academic staff..."
Reposted by Furl Lab
francoisstock.bsky.social
Our #sEEG study is now published in Nature Communications: rdcu.be/eIkoG! 🧠
Key finding: We discovered neural evidence accumulation for visual perception that's independent of report preparation—recorded from >3000 channels across 3 experiments!
#Neuroscience #Consciousness #OpenAccess
Reposted by Furl Lab
modirshanechi.bsky.social
New in @pnas.org: doi.org/10.1073/pnas...

We study how humans explore a 61-state environment with a stochastic region that mimics a “noisy-TV.”

Results: Participants keep exploring the stochastic part even when it’s unhelpful, and novelty-seeking best explains this behavior.

#cogsci #neuroskyence
Reposted by Furl Lab
hakwan.bsky.social
detection d' is generally overestimated, coz we tend to be too lazy to collect the necessary data to correct for the unequal variance between target-present vs target-absent distributions. turns out we can do this for free - using reaction time data. so, let's do it~

osf.io/preprints/ps...

🧠📈
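A minimal sketch of the kind of unequal-variance correction this post is pointing at (not the preprint's method): conventional d' assumes equal variance for the target-present and target-absent distributions, while d_a rescales sensitivity by a zROC slope s. The hit rate, false-alarm rate, and slope below are hypothetical, and estimating s from reaction times, the preprint's contribution, is not reproduced here.

```python
from scipy.stats import norm

def equal_variance_dprime(hit_rate, fa_rate):
    """Conventional d' = z(H) - z(FA); assumes equal target-present/absent variance."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def unequal_variance_da(hit_rate, fa_rate, s):
    """d_a: sensitivity given an assumed zROC slope s = sigma_noise / sigma_signal."""
    return (2.0 / (1.0 + s ** 2)) ** 0.5 * (norm.ppf(hit_rate) - s * norm.ppf(fa_rate))

# Hypothetical numbers for illustration only.
H, FA, s = 0.80, 0.20, 0.7  # s < 1: target-present distribution wider than target-absent
print(round(equal_variance_dprime(H, FA), 3))   # ~1.683
print(round(unequal_variance_da(H, FA, s), 3))  # ~1.658, diverges once unequal variance is modelled
```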
Reposted by Furl Lab
thomas-zhihao-luo.bsky.social
How does the brain decide? 🧠

Our new @nature.com paper shows that neural activity switches from an 'evidence gathering' to a 'commitment' state at a precise moment we call nTc.

After nTc, new evidence is ignored, revealing a neural marker for the instant when the mind is made up.

rdcu.be/eGUrv
Transitions in dynamical regime and neural mode during perceptual decisions - Nature
Simultaneous recordings were made of hundreds of neurons in the rat frontal cortex and striatum, showing that decision commitment involves a rapid, coordinated transition in dynamical regime and neural mode.
www.nature.com
furllab.bsky.social
Our new EEG + modeling work using the beads task is now available as a preprint: osf.io/preprints/ps..., led by Christina Dimitriadou @rhulpsychology.bsky.social
Reposted by Furl Lab
meikeramon.bsky.social
Interested in #EEG #FPVS #FaceLearning?

*An Ecological and Objective Neural Marker of Implicit Learning of Unfamiliar Identities*

Preprint 👉 osf.io/preprints/ps...

@bfh-ch.bsky.social University of Malta @snsf.ch
Reposted by Furl Lab
dengpan.bsky.social
🚨We believe this is a major step forward in how we study hippocampus function in healthy humans.

Using novel behavioral tasks, fMRI, RL & RNN modeling, and transcranial ultrasound stimulation (TUS), we demonstrate the causal role of the hippocampus in relational structure learning.
furllab.bsky.social
statistician: But aren't you assuming normality?
Reposted by Furl Lab
gsoh31.bsky.social
This is good on the extraordinary alienation that has now grown up in academia. The gap between the values of lecturers and the universities that employ them is now a huge, unbridgeable chasm. One stands for education, the other for institutional self-interest. blog.matthewbarnard.phd/a-world-with...
A world without experts: alienation in academia
Thoughts on the level of alienation in academic work, and how as much as we cannot exit the economy, we must recognise the inherent value of academia.
blog.matthewbarnard.phd
Reposted by Furl Lab
iainnd.bsky.social
Look what they did to Notepad. Shut the fuck up. This is Notepad. You are not welcome here. Oh yeah "Let me use Copilot for Notepad". "I'm going to sign into my account for Notepad". What the fuck are you talking about. It's Notepad.
Windows Notepad, the native simple text editor, now has formatting options and a Copilot button.
Reposted by Furl Lab
tmarvan.bsky.social
It's a pity we have to write conclusions for papers and can't just fade them out, like musicians do.
Reposted by Furl Lab
ninamarkl.bsky.social
really fun to be reading critiques of "AI" from the 90s, 80s, 70s, and 60s that perfectly identify all the core questions the field still hasn't resolved and anticipate exactly where we're at now
furllab.bsky.social
In UK undergraduate education, students take exactly one subject, chosen when they are 18. One more reason that's not ideal: as a US undergrad I had to show evidence of foreign-language learning to get my psychology degree. I used my high school French classes, but otherwise students in Illinois needed to take a foreign language.
Reposted by Furl Lab
jbyoder.org
I'm going to finish this manuscript by the end of the week, I told myself
Distracted Boyfriend meme: ME looking away from WRITING THE DISCUSSION SECTION to ogle MAKING NEW RESULTS FIGURES
Reposted by Furl Lab
p-hunermund.com
Academia is basically a collection of people who got lucky early on and mistook it for genius. doi.org/10.1073/pnas...
Reposted by Furl Lab
lhuntneuro.bsky.social
Our new paper is out! When navigating through an environment, how do we combine our general sense of direction with known landmark states? To explore this, @denislan.bsky.social used a task that allowed subjects (or neural networks) to choose either their next action or next state at each step.
plosbiology.org
How do humans navigate unfamiliar environments? @denislan.bsky.social @lhuntneuro.bsky.social @summerfieldlab.bsky.social show that humans & deep meta-learning networks combine ‘vector-based’ & ‘transition-based’ strategies for flexible navigation in similar ways @plosbiology.org 🧪 plos.io/45uSwNm
Task design and experimental set-up. Top left: underlying structure of the 8 × 8 grid, unseen by participants. Every state is represented by an image of an object, and these objects and their positions change on every trial. Top right: schematic diagram of the ‘map reading’ phase of each trial. Participants see a top–down view of the grid with objects obscured and successively click on blue squares to reveal ‘landmark’ objects at the location. After 16 clicks have been completed, a yellow square appears. Clicking on the yellow square reveals the ‘goal’ object for the trial. Bottom: schematic diagram of the navigation phase of each trial. Participants start in a random, previously unobserved location and are tasked with navigating to the ‘goal’ object they had just learnt about (displayed at the top). They can navigate in two ways. First, they could choose a direction to travel in by clicking on the corresponding arrow (highlighted yellow). This is analogous to using a ‘vector-based’ strategy. Alternatively, they could choose an adjacent state to travel to by clicking on one of the associated images (displayed in a random order; highlighted blue). This corresponds to using a ‘transition-based’ navigation strategy.
Reposted by Furl Lab
plosbiology.org
Reconstructing sounds from #fMRI data is limited by its temporal resolution. @ykamit.bsky.social &co develop a DNN-based method that aids reconstruction of perceptually accurate sound from fMRI data, offering insights into internal #auditory representations @plosbiology.org 🧪 plos.io/4fhNw1Z
Schematic overview of the proposed sound reconstruction pipeline. Left:  DNN feature extraction from sound. A deep neural network (DNN) extracts auditory features at multiple levels of complexity using a hierarchical framework. Right: Sound reconstruction. The reconstruction pipeline starts with decoding DNN features from fMRI responses using trained brain decoders. The audio generator then transforms these decoded features into the reconstructed sound.
Reposted by Furl Lab
lastpositivist.bsky.social
Thing I am an absolute complete total reactionary about: there has not actually been invented a better model of conveying information in a learning environment than the basic structure of a traditional lecture. A speaker standing in some sort of unique focal point for the attention of listeners...
Reposted by Furl Lab
tyrellturing.bsky.social
1/3) This may be a very important paper; it suggests that there are no prediction-error-encoding neurons in sensory areas of cortex:

www.biorxiv.org/content/10.1...

I personally am a big fan of the idea that cortical regions (allo and neo) are doing sequence prediction.

But...

🧠📈 🧪
Sensory responses of visual cortical neurons are not prediction errors
Predictive coding is theorized to be a ubiquitous cortical process to explain sensory responses. It asserts that the brain continuously predicts sensory information and imposes those predictions on lo...
www.biorxiv.org