Thomas Hikaru Clark
@thomashikaru.bsky.social
MIT Brain and Cognitive Sciences
5/7 The model also returns incremental surprisals (quantified via the mean particle weight), which can be compared against a baseline LM; here we test on sentences from Ryskin et al., 2021 (@ryskin.bsky.social). "Explainable errors" tend to be less surprising under our model than under the baseline.
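In case the quantity in 5/7 is unfamiliar: in a standard SMC filter, the mean of the incremental (unnormalized) particle weights estimates the predictive probability of the current word, so its negative log can serve as a surprisal. Below is a minimal Python sketch of that identity; it assumes this is how the "mean particle weight" surprisal in the post is defined, and the function name is made up rather than taken from the paper.

import math

def incremental_surprisal(weights):
    # weights: unnormalized incremental particle weights after one SMC update.
    # Their mean estimates P(word_t | preceding words); the negative log of
    # that estimate is the word's surprisal in nats.
    return -math.log(sum(weights) / len(weights))

# e.g. incremental_surprisal([0.02, 0.15, 0.01, 0.08]) is roughly 2.7 nats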
4/7 The model's rich, interpretable output includes posteriors over inferred errors at each word and over the intended (latent) sentence. The model makes inferences consistent with the human noisy-channel inferences implied by Gibson et al., 2013.
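For intuition about how such posteriors can be read off a particle approximation, here is a small Python sketch. It assumes each particle carries a hypothesized intended sentence, a per-word error tag, and a normalized weight; the field layout, tag names, and toy example are hypothetical, not taken from the paper.

from collections import Counter

def posteriors(particles):
    # particles: (intended_sentence, per_word_error_tags, weight) triples,
    # with weights normalized to sum to 1.
    over_sentences, over_errors = Counter(), {}
    for intended, errors, w in particles:
        over_sentences[tuple(intended)] += w             # posterior over latent sentences
        for i, err in enumerate(errors):                 # posterior over errors per word
            over_errors.setdefault(i, Counter())[err] += w
    return over_sentences, over_errors

# Toy example: two particles after reading "the dog was bitten by man".
parts = [
    (["the", "dog", "was", "bitten", "by", "a", "man"], ["none"] * 5 + ["deleted-a-before"], 0.7),
    (["the", "dog", "was", "bitten", "by", "man"],      ["none"] * 6,                        0.3),
]
sentence_posterior, error_posteriors = posteriors(parts)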
3/7 We combine a generative model of noisy production (LM prior + symbolic error model) with approximate, incremental Sequential Monte Carlo inference. This allows fine-grained control over both the error types under consideration and the dynamics of inference.
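To make the propose-weight-resample recipe in 3/7 concrete, here is a minimal Python sketch. It is not the authors' implementation: the toy uniform LM, the error inventory and probabilities, and all names (lm_logprob, propose, smc_step) are hypothetical placeholders, and deletion errors are omitted for brevity.

import math, random

random.seed(0)

VOCAB = ["the", "a", "dog", "man", "was", "bitten", "by"]

def lm_logprob(word, context):
    # Stand-in for the LM prior over intended words; a real system would
    # query a language model's conditional log-probability here.
    return math.log(1.0 / len(VOCAB)) if word in VOCAB else math.log(1e-9)

# Toy symbolic error model: prior probability of each error type at a word.
ERRORS = {"none": 0.94, "substitution": 0.04, "insertion": 0.02}

def propose(intended, observed):
    # Extend one particle's hypothesized intended sentence by one word and
    # return (new hypothesis, unnormalized incremental importance weight).
    err = random.choice(list(ERRORS))            # simple uniform proposal over error types
    q = 1.0 / len(ERRORS)                        # proposal probability
    if err == "none":                            # the observed word was intended
        word = observed
    elif err == "substitution":                  # some other word was intended
        word = random.choice(VOCAB)
        q *= 1.0 / len(VOCAB)
    else:                                        # "insertion": the word was never intended
        word = None
    prior = ERRORS[err] * (math.exp(lm_logprob(word, intended)) if word else 1.0)
    hyp = intended + [word] if word else list(intended)
    return hyp, prior / q                        # importance weight = target / proposal

def smc_step(particles, observed):
    # One incremental SMC update: propose, weight, and multinomially resample.
    hyps, weights = zip(*(propose(p, observed) for p in particles))
    return random.choices(hyps, weights=weights, k=len(particles)), weights

particles = [[] for _ in range(500)]
for w in "the dog was bitten by man".split():    # a sentence with a plausibly missing "a"
    particles, weights = smc_step(particles, w)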
2/7 According to noisy-channel theory, humans interpret utterances non-literally, drawing on both linguistic priors and error likelihoods. However, how this works at the algorithmic level remains an open question, and one that implemented computational models can help us explore.
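For readers who want the one-line version of that claim, the noisy-channel comprehender's posterior over intended sentences combines exactly those two ingredients via Bayes' rule (this is the standard textbook formulation, not anything specific to the new model):

P(s_intended | s_perceived) proportional to P_prior(s_intended) x P_error(s_perceived | s_intended)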
1/7 If you're at CogSci 2025, I'd love to see you at my talk on Friday at 1pm PDT in Nob Hill A! I'll be talking about our work toward an implemented computational model of noisy-channel comprehension (with @postylem.bsky.social, Ted Gibson, and @rplevy.bsky.social).