Matthew O’Donohue
@matthewod.bsky.social
1.1K followers 1.8K following 58 posts
Postdoc - auditory neuroscience, neurodivergence, multisensory integration, musicianship. Arsenal and Animal Collective fan. He/him
matthewod.bsky.social
Speech-in-noise perception gets better as listeners adapt to environment/room reverberation statistics! Another W for statistical learning. 😁

New paper from folks in our lab (led by Heivet Hernandez-Perez): elifesciences.org/reviewed-pre...
Listening to the room: disrupting activity of dorsolateral prefrontal cortex impairs learning of room acoustics in human listeners
Reposted by Matthew O’Donohue
matthewod.bsky.social
Ooh thanks, I haven’t read those other two! I look forward to reading them today instead of doing the research I’m paid for 😁
matthewod.bsky.social
Cool to see this! I only came across some of the single-trial memory stuff (e.g. Standing et al) recently, so this is timely for me!
matthewod.bsky.social
Great article Lewis, love to read you calling out the fascism going on in the US!
matthewod.bsky.social
You’re related to Petey? Nice, I’ve liked his music for a couple years now!
Reposted by Matthew O’Donohue
We have a PhD position for an upcoming project in metacognition research in Potsdam / Berlin. A great opportunity for those interested in cognitive modeling of confidence and EEG.

More information at coconeuro.github.io/phd2025

Kindly share this opportunity with potential candidates - Thanks!
Computation and Cognition @ HMU Potsdam
coconeuro.github.io
Reposted by Matthew O’Donohue
jephpp.bsky.social
It’s all about attention in June's 50th anniversary articles tinyurl.com/3ywt8zec!

‪@chris-olivers.bsky.social‬
‪@sauter.bsky.social‬
Reposted by Matthew O’Donohue
manikyaalister.bsky.social
Happy to share some new theoretical work with @andyperfors.bsky.social
that will appear at CogSci this year! We argue (and demonstrate through simulations) that people can determine how much to trust other agents by thinking about how those agents have acquired their knowledge. 1/3
Figure showing an overview of the framework proposed in the paper. People decide how much they can trust other agents by taking into account 1) features that indicate how the agents acquired their knowledge (their search process); 2) how many other agents agree, and what the features of those agents are; and 3) indications of how knowable the topic is or how easy the space is to search.
Reposted by Matthew O’Donohue
andyperfors.bsky.social
Australian election called against Dutton and the Coalition. I almost want to cry a little - didn't realise how badly I needed this to happen. There was a lot at stake in this election but I know a lot of marginalised people are breathing a sigh of relief.

#auspol
Reposted by Matthew O’Donohue
jephpp.bsky.social
In May, our 50th Anniversary series features an invited review from Valenza et al, who trace the legacy of their seminal 1996 article, "Face preference at birth” and a readers’ perspective on perception of relations.
tinyurl.com/k2jmtbhw
matthewod.bsky.social
Two of the reviewers were fantastic - insightful yet fair. Thank you! 10/10 (end of thread but also me rating reviewers)
matthewod.bsky.social
Takeaways: musicians are better in SJ tasks (but this may be response bias), yet show greater multisensory integration in a more objective RT task (but this may just be better sustained attention, motivation, etc.). SJs show rapid recalibration but RTs don't, so RR is probably not due to early sensory latency changes 9/10
matthewod.bsky.social
Multisensory gain (the redundant target effect; RTE) across the RT distribution is well predicted by Raab's (1962) race model, except for the fastest RTs 8/10
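(For anyone unfamiliar with the race-model logic in these posts, here's a minimal sketch. This is a generic illustration of Raab's independent race prediction and the standard race-model upper bound, not the paper's actual analysis code; the function names and simulated data are made up.)

```python
import random

def ecdf(sample, t):
    """Empirical CDF of a reaction-time sample at time t."""
    return sum(1 for x in sample if x <= t) / len(sample)

def race_model_prediction(rt_a, rt_v, t):
    """Raab's (1962) independent race model: the predicted CDF of the
    faster of two independent unimodal processes,
    P(min(A, V) <= t) = F_A(t) + F_V(t) - F_A(t) * F_V(t)."""
    fa, fv = ecdf(rt_a, t), ecdf(rt_v, t)
    return fa + fv - fa * fv

def race_model_bound(rt_a, rt_v, t):
    """Upper bound implied by ANY race model (independent or not):
    F_AV(t) <= min(F_A(t) + F_V(t), 1). Multisensory RTs that exceed
    this bound at fast t are 'race model violations', taken as
    evidence of multisensory integration rather than statistical
    facilitation."""
    return min(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)

# Hypothetical unimodal RT data (ms) just to exercise the functions:
random.seed(1)
rt_auditory = [random.gauss(300, 30) for _ in range(5000)]
rt_visual = [random.gauss(320, 30) for _ in range(5000)]
t = 300.0
print(race_model_prediction(rt_auditory, rt_visual, t))
print(race_model_bound(rt_auditory, rt_visual, t))
```

In practice the observed audiovisual CDF is compared against the bound at a range of quantiles; the bound is loosest (and violations hardest to find) in the slow tail, which is why the action is at the fastest RTs.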
matthewod.bsky.social
Multisensory integration (race model violations in a simple RT task) does NOT correlate with simultaneity perception (Fig. 7). Yet the difference between unimodal auditory and visual RTs STRONGLY predicts the SOA of largest multisensory gain 7/10
matthewod.bsky.social
We get nice/sexy modality shift effects (previous trial modality affects simple RT on current trial). Paper has more details about novel modality shift effect analyses 6/10
matthewod.bsky.social
Yet simple RTs do not show such rapid recalibration (see paper for race model analysis showing same thing) 5/10
matthewod.bsky.social
Simultaneity judgements (SJs) are recalibrated according to the modality order of the previous stimulus 4/10
matthewod.bsky.social
And yet, musicians exhibited greater multisensory gains for simple RTs 3/10
matthewod.bsky.social
First, musicians were far less likely than non-musicians to report simultaneity between asynchronous flash-tone stimuli 2/10
matthewod.bsky.social
⚠️New paper! If you like multisensory temporal perception, this one has HEAPS for you. We looked at simultaneity perception, simple RTs, race model inequality, serial dependence (recalibration), modality shift costs, musicians, and more! Findings summarised below 1/10

psycnet.apa.org/fulltext/202...
matthewod.bsky.social
Cool! Must have been so satisfying to solve that mystery - I had a similar experience a few years ago but had to wade through Google search for it