Human Language Processing Lab
@hlplab.bsky.social
OSF with all stimuli, data, & code as well as detailed supplementary information: osf.io/2asgw/overview. Linked GitHub repo: github.com/hlplab/Causa...
December 12, 2025 at 5:51 PM
Congrats to @brainnotonyet.bsky.social alumni Shawn Cummings, @gekagrob.bsky.social & Menghan Yan. Out in JEP:LMC @apajournals.bsky.social: listeners compensate their perception of spectral (acoustic) cues based on the visually evident consequences of a pen in the speaker's mouth! dx.doi.org/10.1037/xlm0...
December 12, 2025 at 5:46 PM
Very cool new accent-relatedness visualization, examples, and some insightful observations: accent-explorer.boldvoice.com
How AI Hears Accents
October 17, 2025 at 8:32 PM
Looking for researchers in computational neuroscience and cognition (incl. language, learning, development, decision-making) to join our faculty!
We’re hiring a tenure-track Assistant Prof in Computational Neuroscience/Cognition at
@uor-braincogsci.bsky.social! Join a Simons-supported cluster across Math/Physics/Biology/BCS. Apply by Nov 1, 2025: www.sas.rochester.edu/bcs/jobs/fac... #ComputationalNeuroscience #Cognition #FacultyJobs
October 1, 2025 at 7:58 PM
Reposted by Human Language Processing Lab
Review starts 11/1: Asst. prof. (tenure track), human cognition, Brain and CogSci, U Rochester www.sas.rochester.edu/bcs/jobs/fac...
September 12, 2024 at 10:03 PM
New R library STM (github.com/santiagobarr...) by Santiago Barreda implements Nearey & Assmann's PST model of vowel perception, plus a fully Bayesian extension (the BSTM). Easy to use and to apply to your own data. It's also what we used in our recent paper www.degruyterbrill.com/document/doi...
GitHub - santiagobarreda/STM: The 'STM' (Sliding Template Model) R Package
September 30, 2025 at 8:50 PM
As we write, Nearey & Assmann's PSTM presents a "groundbreaking idea [...], with far-reaching consequences for research from typology to sociolinguistics to speech perception … and few seem to know of it." We hope this paper can help change that! OSF osf.io/tpwmv/ 3/3
September 30, 2025 at 8:46 PM
Nearey & Assmann's PSTM (2007, www.google.com/books/editio...) remains the only fully incremental model of formant normalization, conducting joint inference over both the talker's normalization parameters (*who*'s talking) and the vowel category (*what* they are saying). 2/3
Experimental Approaches to Phonology
September 30, 2025 at 8:42 PM
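The joint inference described in the post above (inferring the talker's normalization shift and the vowel category together) can be illustrated with a toy grid approximation. Everything below (the templates, the noise value, the shift grid) is a hypothetical sketch of the idea, not the fitted PSTM:

```python
import numpy as np

# Toy sliding-template sketch: each vowel category is a template of
# mean log-formants; a talker shifts all templates by one parameter g.
# All numbers here are illustrative, not the PSTM's actual parameters.

TEMPLATES = {                     # hypothetical log-Hz means for (F1, F2)
    "i": np.array([5.6, 7.7]),
    "a": np.array([6.5, 7.2]),
    "u": np.array([5.7, 6.8]),
}
SIGMA = 0.1                       # assumed perceptual noise (log-Hz)
G_GRID = np.linspace(-0.5, 0.5, 101)  # candidate talker shifts

def joint_posterior(obs):
    """p(vowel, shift | observed log-formants), flat priors over both."""
    scores = {}
    for vowel, mu in TEMPLATES.items():
        # squared distance of the observation to each shifted template
        d = obs[None, :] - (mu[None, :] + G_GRID[:, None])
        scores[vowel] = np.exp(-0.5 * (d ** 2).sum(axis=1) / SIGMA ** 2)
    z = sum(s.sum() for s in scores.values())
    return {vowel: s / z for vowel, s in scores.items()}

# A token near a shifted /i/ template: marginalizing over shifts gives
# the category decision; the posterior over g estimates who is talking.
post = joint_posterior(np.array([5.75, 7.85]))
best_vowel = max(post, key=lambda v: post[v].sum())
```

Marginalizing the joint posterior over shifts yields p(vowel | formants); marginalizing over vowels yields the talker estimate — the "who's talking" and "what they are saying" inferences happen in one step.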
New work w/ Santiago Barreda: www.degruyterbrill.com/document/doi... We reintroduce Nearey & Assmann's seminal probabilistic sliding template model (PSTM), visualize its workings, & find that it predicts human vowel perception with high accuracy, far outperforming other normalization models 1/3
Reintroducing and testing the Probabilistic Sliding Template Model of vowel perception
September 30, 2025 at 8:40 PM
DL captures human speech perception both *qualitatively* & *quantitatively* (R² > 96%) for over 400 combinations of exposure and test items. Yet previous DL models fail to capture important limitations. Specifically, we find that DL seems to proceed by remixing previous experience 2/2
September 30, 2025 at 8:23 PM
Very excited about this: putting distributional learning (DL) models of adaptive speech perception to a strong, informative test sciencedirect.com/science/arti... by Maryann Tan. We use Bayesian ideal observers & adapters to assess whether DL predicts rapid changes in speech perception 1/2
September 30, 2025 at 8:23 PM
This has been a really eye-opening collaboration that made me realize how little I knew about the auditory system, the normalization of spectral information, & the consequences of making problematic assumptions about the perceptual basis of speech perception when building (psycho)linguistic models!
February 25, 2025 at 3:04 PM
This is the final paper from Anna Persson's thesis (www.researchgate.net/profile/Anna...) w/ Santiago Barreda (linguistics.ucdavis.edu/people/santi...).

Article & SI fully written in #rmarkdown. All data, experiment code, & analyses available on OSF osf.io/zemwn/ #reproducibility
February 25, 2025 at 3:02 PM
Excited to see this out in JASA @asa-news.bsky.social: doi.org/10.1121/10.0... It provides a large-scale evaluation of formant normalization accounts as models of vowel perception. @uor-braincogsci.bsky.social
February 25, 2025 at 3:02 PM
Together w/ @wbushong.bsky.social's recent paper bsky.app/profile/wbus..., this lays out the road ahead for careful research on information maintenance during speech perception. The discussion in Wednesday's paper identifies strong assumptions made in this line of work that might not be warranted.
Excited to share my new paper with @hlplab.bsky.social on the role of contextual informativity in spoken word recognition! Check it out in Journal of Experimental Psychology: Learning, Memory and Cognition here: psycnet.apa.org/fulltext/202...
February 25, 2025 at 2:47 PM
Data and code available on OSF osf.io/cypg3/
Bushong & Jaeger. Changes in informativity of sentential context affects its integration with subcategorical information about preceding speech
February 25, 2025 at 2:41 PM
By comparing against ideal observer baselines, we identify a reliable, previously unrecognized pattern in listeners' responses that is unexpected under any existing theory. We present simulations that suggest that this pattern can emerge under ideal information maintenance w/ attentional lapses. 3/n
February 25, 2025 at 2:38 PM
We present Bayesian GLMMs, ideal observer analyses, two re-analyses of previous studies and two new experiments. All data clearly reject the idea that uncertainty maintenance during speech perception is limited to ambiguous inputs or short-lived. 2/n
Bicknell, Bushong, Tanenhaus, & Jaeger (2024). Maintenance of subcategorical information during speech perception: revisiting misunderstood limitations.
February 25, 2025 at 2:36 PM
Now out: exciting work w/ Klinton Bicknell, @wbushong.bsky.social, & Mike Tanenhaus www.sciencedirect.com/science/arti.... It's a massive tour-de-force, revisiting several misunderstood 'limitations' of information maintenance during spoken language understanding. @uor-braincogsci.bsky.social
Maintenance of subcategorical information during speech perception: Revisiting misunderstood limitations
February 25, 2025 at 2:34 PM
We also revisit long-held assumptions about how we study the maintenance of perceptual information during spoken language understanding. We discuss why most evidence for such maintenance is actually compatible with simpler explanations. 2/2
February 18, 2025 at 3:35 PM
New work by @wbushong.bsky.social out in JEP:LMC: listeners might strategically moderate maintenance of perceptual information during spoken language understanding based on the expected informativity of subsequent context. 1/2
Excited to share my new paper with @hlplab.bsky.social on the role of contextual informativity in spoken word recognition! Check it out in Journal of Experimental Psychology: Learning, Memory and Cognition here: psycnet.apa.org/fulltext/202...
February 18, 2025 at 3:35 PM
SFB-funded Collaborative Research Centre “Prominence in Language” at U Cologne, Germany offers junior & senior research fellowships for 1-6 months between 04-12/2025 (1800-2500 Euro/month) sfb1252.uni-koeln.de/en/ (20 projects in prosody, morphosyntax & semantics, text & discourse structure)
December 16, 2024 at 3:41 PM
Select Institute for Collaborative Innovation as your application unit. Apply by 1/28/25 career.admo.um.edu.mo
December 16, 2024 at 3:35 PM