Josh McDermott
@joshhmcdermott.bsky.social
630 followers 180 following 28 posts
Working to understand how humans and machines hear. Prof at MIT; director of Lab for Computational Audition. https://mcdermottlab.mit.edu/
Reposted by Josh McDermott
lexidecker.bsky.social
Excited to share that I'm joining WashU in January as an Assistant Prof in Psych & Brain Sciences! 🧠✨

I'm also recruiting grad students to start next September - come hang out with us! Details about our lab here: www.deckerlab.com

Reposts are very welcome! 🙌 Please help spread the word!
DeckerLab
www.deckerlab.com
Reposted by Josh McDermott
thomasserre.bsky.social
Brown’s Department of Cognitive & Psychological Sciences is hiring a tenure-track Assistant Professor, working in the area of AI and the Mind (start July 1, 2026). Apply by Nov 8, 2025 👉 apply.interfolio.com/173939

#AI #CognitiveScience #AcademicJobs #BrownUniversity
Apply - Interfolio
apply.interfolio.com
Reposted by Josh McDermott
diedrichsenjorn.bsky.social
Variance partitioning is used to quantify the overlap of two models. Over the years, I have found that this can be a very confusing and misleading concept. So we finally decided to write a short blog to explain why.
@martinhebart.bsky.social @gallantlab.org
diedrichsenlab.org/BrainDataSci...
Reposted by Josh McDermott
dpwe.bsky.social
🔊New paper! Recomposer allows editing sound events within complex scenes based on textual descriptions and event roll representations. And we discuss the details that matter!

Work by the Sound Understanding folks
@GoogleDeepMind

arxiv.org/abs/2509.05256
Recomposer: Event-roll-guided generative audio editing
Editing complex real-world sound scenes is difficult because individual sound sources overlap in time. Generative models can fill-in missing or corrupted details based on their strong prior understand...
arxiv.org
joshhmcdermott.bsky.social
If you are attending the Kempner symposium I encourage you to check out @gelbanna.bsky.social 's poster on models and benchmarks of continuous speech perception. He has many interesting results.
gelbanna.bsky.social
At Frontiers in NeuroAI symposium @kempnerinstitute.bsky.social, I will be presenting a poster entitled "A Model of Continuous Phoneme Recognition Reveals the Role of Context in Human Speech Perception" (Poster #17).

Work done with @joshhmcdermott.bsky.social.

#NeuroAI2025

🧵1/4
Reposted by Josh McDermott
joshuasweitz.bsky.social
How bad will it be? Catastrophic.

Proposed cuts to #NSF, #NIH, and #NASA will set the US R&D landscape back 25 yrs+, cause economic and job loss now, and undermine innovations to come.

But, this is the WH's *proposed* budget.

Speak up now before it is too late.

(inflation adjusted $-s below)
NSF, NASA and NIH budgets per year, inflation adjusted from 2000-2025 along with the proposed cuts. NSF includes research component only. Massive cuts across all sectors, well below support spanning 25 years.
Reposted by Josh McDermott
jfeather.bsky.social
We are presenting our work “Discriminating image representations with principal distortions” at #ICLR2025 today (4/24) at 3pm! If you are interested in comparing model representations with other models or human perception, stop by poster #63. Highlights in 🧵
openreview.net/forum?id=ugX...
Discriminating image representations with principal distortions
Image representations (artificial or biological) are often compared in terms of their global geometric structure; however, representations with similar global structure can have strikingly...
openreview.net
Reposted by Josh McDermott
I’m hiring a full-time lab tech for two years starting May/June. Strong coding skills required, ML a plus. Our research on the human brain uses fMRI, ANNs, intracranial recording, and behavior. A great stepping stone to grad school. Apply here:
careers.peopleclick.com/careerscp/cl...
Technical Associate I, Kanwisher Lab
MIT - Technical Associate I, Kanwisher Lab - Cambridge MA 02139
careers.peopleclick.com
Reposted by Josh McDermott
lauerlab.bsky.social
Now accepting applications for the summer 2025 cohort: STEMM opportunities for college students with Hearing loss to Engage in Auditory Research (STEMM-HEAR)

www.stemm-hear.bme.jhu.edu
Home - STEMM-HEAR
www.stemm-hear.bme.jhu.edu
Reposted by Josh McDermott
jfeather.bsky.social
Applications are open for the 2025 Flatiron Institute Junior Theoretical Neuroscience Workshop! A two-day workshop 7/10-7/11 in NYC for PhD students and postdocs. All travel paid. Apply by April 14th.🧠🗽🧑‍🔬http://jtnworkshop2025.flatironinstitute.org/
@flatironinstitute.org @simonsfoundation.org
JTN - 2025
JTN - 2025
jtnworkshop2025.flatironinstitute.org
Reposted by Josh McDermott
hankgreen.bsky.social
We are experiencing an assault on science unparalleled by anything I’ve seen in my life. It’s not one issue or another anymore, the entire institution is under attack by the most powerful individuals in the country.

This Friday, where will you be?

standupforscience2025.org
joshhmcdermott.bsky.social
If you are here at the last day of ARO, don’t miss Sagarika Alavilli’s talk on “Measuring and Modeling Multi-Source Environmental Sound Recognition”, happening at 9:45 in Ocean Ballroom 9 - 12.
joshhmcdermott.bsky.social
Two posters from our lab on deep auditory models, presented today at ARO:

T107 - “Modeling Normal and Impaired Hearing With Deep Neural Networks Optimized for Ecological Tasks” by Mark Saddler et al.

T138 - “Modeling Continuous Speech Perception Using Artificial Neural Networks” by Gasser Elbanna
joshhmcdermott.bsky.social
Two more posters from our lab are being presented today at ARO:

M133 - “Neural Network Models of Hearing Clarify Factors Limiting Cochlear Implant Outcomes” by Annesya Banerjee et al.

M166 - “Preferences for Loudness and Pitch Vary Across Cultures” by Malinda McPherson et al.
joshhmcdermott.bsky.social
And a talk by Ian Griffith at 4:15pm: “Human-Like Feature Attention Emerges in Task-Optimized Models of the Cocktail Party Problem”
joshhmcdermott.bsky.social
SU191 - “Deep Neural Network Models of Human Sound Localization Indicate Which Aspects of Localization Are Mediated by Explicit Binaural Processing” by Mathias Dietz et al.
joshhmcdermott.bsky.social
SU185 - “Cross-Culturally Shared Sensitivity to Harmonic Structure Underlies Aspects of Pitch Discrimination” by Malinda McPherson et al.
joshhmcdermott.bsky.social
If you are at ARO today, lots of stuff to see from our lab.

Posters:
SU181 - “Optimization Under Ecological Realism Reproduces Signatures of Human Speech Recognition” by Annika Magaro et al.

SU184 - “Texture Streaming in Auditory Scenes” by Jarrod Hicks
joshhmcdermott.bsky.social
If at ARO today, check out Lakshmi Govindarajan's poster on "Confidence in Sound Localization Reflects Calibrated Uncertainty Estimation". Number S147 if you are at the meeting.
Reposted by Josh McDermott
cantlonlab.bsky.social
Join us! Science Homecoming helps scientists reconnect with communities by writing about the importance of science funding in their hometown newspapers. We’ve mapped every small newspaper in the U.S. and provide resources to get you started. Help science get back home 🧪🔬🧬 🏠

sciencehomecoming.com
Science Homecoming
sciencehomecoming.com
joshhmcdermott.bsky.social
Pleased to announce the successful thesis defense of Dr. Jarrod Hicks! His thesis provides the first thorough exploration of auditory scene analysis with environmental sounds. I’m excited to see what he does next.