Landry Bulls
@landrybulls.bsky.social
410 followers 380 following 19 posts
I study the signals people use to make sense of each other during social interactions landrybulls.github.io
Reposted by Landry Bulls
gabefajardo.bsky.social
I’m excited to share my 1st first-authored paper, “Distinct portions of superior temporal sulcus combine auditory representations with different visual streams” (with @mtfang.bsky.social and @steanze.bsky.social ), now out in The Journal of Neuroscience!
www.jneurosci.org/content/earl...
Fig. 1. a. Visual and auditory regions of interest (ROIs). b. Responses in a combination of visual (e.g., early dorsal visual stream; Fig. 1a, middle panel) and auditory regions were used to predict responses in the rest of the brain using MVPN. c. To identify brain regions that combine auditory and visual responses, we identified voxels where predictions generated from the combined patterns of auditory regions and one set of visual regions jointly (as in Fig. 1b) are significantly more accurate than predictions generated from only auditory regions or only that set of visual regions.
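A minimal sketch of the comparison logic in panel c, for intuition only: the data arrays, the ridge regression, and the correlation-based accuracy measure are all assumptions here, not the paper's actual MVPN implementation.

```python
# Sketch of the joint-vs-single-modality prediction comparison (NOT the
# paper's code; ridge regression and correlation accuracy are assumptions).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_timepoints = 200
aud = rng.normal(size=(n_timepoints, 50))   # auditory ROI pattern (hypothetical)
vis = rng.normal(size=(n_timepoints, 80))   # one visual-stream ROI pattern (hypothetical)
target = rng.normal(size=n_timepoints)      # response of one target voxel

def prediction_accuracy(X, y):
    """Cross-validated correlation between predicted and observed responses."""
    y_hat = cross_val_predict(Ridge(alpha=1.0), X, y, cv=5)
    return np.corrcoef(y_hat, y)[0, 1]

acc_aud = prediction_accuracy(aud, target)
acc_vis = prediction_accuracy(vis, target)
acc_joint = prediction_accuracy(np.hstack([aud, vis]), target)

# A voxel "combines" auditory and visual information if the joint model
# beats both single-modality models (significance testing omitted here).
print(acc_aud, acc_vis, acc_joint, acc_joint > max(acc_aud, acc_vis))
```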
Reposted by Landry Bulls
markthornton.bsky.social
Very excited to share @landrybulls.bsky.social's 1st lead-author preprint in my lab! Using datasets from MySocialBrain.org we measured people's beliefs about how mental states change in intensity over time, the dimensional structure of those beliefs, and their correlates: osf.io/preprints/ps... 🧵👇
landrybulls.bsky.social
Of course, none of this would have been at all possible without the amazing @dianatamir.bsky.social and my super-advisor @markthornton.bsky.social - thank you both!
landrybulls.bsky.social
These findings indicate that people’s beliefs about mental state intensity dynamics are incorporated into a wide variety of domains and generalize across cultures and beyond the lab. Our findings may lay the groundwork for future research on how people acquire and use mental state concepts 😎
landrybulls.bsky.social
We then show that the structure of people's beliefs about intra-state dynamics reflects how people make other mental state judgments (namely, their conceptual similarity and transition probability) and how different mental state words are used in written text across a variety of cultures.
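For intuition, one common way to relate a belief-derived representation to other mental state judgments is an RSA-style comparison of the two matrices' off-diagonal entries. The sketch below is an assumption about that general approach, using random placeholder data, not the paper's exact analysis.

```python
# RSA-style comparison sketch: correlate the upper triangles of a
# belief-derived distance matrix and a conceptual-similarity matrix.
# All data here are random placeholders (the real matrices would come
# from the curve-drawing task and from similarity judgments).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_states = 60                                     # number of mental states (hypothetical)
belief_dist = rng.random((n_states, n_states))
belief_dist = (belief_dist + belief_dist.T) / 2   # symmetrize

concept_sim = rng.random((n_states, n_states))
concept_sim = (concept_sim + concept_sim.T) / 2

iu = np.triu_indices(n_states, k=1)               # off-diagonal upper triangle
# Negate similarity so a positive rho means distance tracks dissimilarity.
rho, p = spearmanr(belief_dist[iu], -concept_sim[iu])
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")
```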
landrybulls.bsky.social
We go on to characterize how these mental states’ intensity profiles interrelate using a curve similarity metric called Fréchet distance, which captures the similarity of two curves’ overall shape while abstracting away from the specific time indices at which curve features occur.
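For readers unfamiliar with the metric, here is a standard dynamic-programming implementation of the discrete Fréchet distance between two 1-D curves (the paper may use a different variant or an existing library; the toy profiles below are hypothetical):

```python
# Discrete Fréchet distance via dynamic programming (textbook version,
# for intuition only; not necessarily the paper's implementation).
import numpy as np

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two 1-D curves p and q."""
    n, m = len(p), len(q)
    ca = np.empty((n, m))
    ca[0, 0] = abs(p[0] - q[0])
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], abs(p[i] - q[0]))
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], abs(p[0] - q[j]))
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           abs(p[i] - q[j]))
    return ca[-1, -1]

t = np.linspace(0, 1, 100)
shock = np.exp(-((t - 0.1) ** 2) / 0.005)   # toy "shock": fast spike, quick fade
flow = np.clip(np.sin(np.pi * t), 0, None)  # toy "flow": gradual rise and fall
print(discrete_frechet(shock, flow))
```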
landrybulls.bsky.social
These temporal motifs map clearly onto interpretable psychological dimensions: high/low arousal, duration/ending abruptness, and perceptibility/traitlikeness. We discuss the relationship between the shape of each component's loading and its psychological correlates.
landrybulls.bsky.social
Using PCA, we found that three temporal motifs explained a large majority of the variance in people's drawn intensity profiles, with overall intensity, slope, and variability emerging as the three dimensions of people's beliefs about intra-state dynamics.
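A minimal sketch of this decomposition step, assuming a curves-by-timepoints matrix and scikit-learn's PCA; the data here are random placeholders rather than participants' drawings:

```python
# PCA sketch: decompose drawn intensity profiles into temporal motifs
# and inspect explained variance. Array shapes are illustrative.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
profiles = rng.random((500, 100))   # 500 drawn curves x 100 time points (hypothetical)

pca = PCA(n_components=3)
scores = pca.fit_transform(profiles)      # each curve's loading on each motif
print(pca.explained_variance_ratio_)      # variance explained by each motif
motifs = pca.components_                  # each row is a temporal motif (shape over time)
```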
landrybulls.bsky.social
Using data collected in a curve-drawing task, we measured people's beliefs about these dynamics for individual mental states; these are called mental state intensity profiles. This low-dimensional UMAP embedding shows the variability in average curves for different mental states.
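A sketch of how such an embedding can be computed with the umap-learn package; the array shape and UMAP parameters are assumptions, not necessarily those used in the paper:

```python
# UMAP sketch: project average intensity profiles into 2-D for
# visualization. Data are random placeholders.
import numpy as np
import umap  # pip install umap-learn

rng = np.random.default_rng(3)
avg_profiles = rng.random((60, 100))   # 60 mental states x 100 time points (hypothetical)

embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(avg_profiles)
print(embedding.shape)                 # (60, 2): one 2-D point per mental state
```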
landrybulls.bsky.social
Experiencing a mental state like joy, confusion, anger, or concentration is a dynamic process that ebbs and flows in intensity over time. A moment of shock may quickly come and go, a flow state might rise gradually and then vanish, or a spark of joy may rise to a crescendo before fading away.
landrybulls.bsky.social
Excited to share the preprint for my 1st 1st-author manuscript! @markthornton.bsky.social and I show that people hold robust, structured beliefs about how individual mental states unfold in intensity over time. We find that these beliefs are reflected in other domains of mental state understanding.
Reposted by Landry Bulls
markthornton.bsky.social
Today, SCRAP Lab returned (right) to the Path of Life Garden in Windsor, VT - the site of our first in-person get-together as a lab 5 years ago (left) - to welcome our newest member, graduate student @gabefajardo.bsky.social!
[Photos: original members of SCRAP Lab (left); current members of SCRAP Lab (right)]
Reposted by Landry Bulls
markthornton.bsky.social
After 5 years, I finally carved out time to turn this blog post on FDR (markallenthornton.com/blog/fdr-pro...) into a manuscript. The preprint features a much broader range of simulations showing how FDR promotes confounds, and how this effect compounds with publication bias: osf.io/preprints/ps...
[Figure captions]
Effect of confound mass on true positive rates under FDR correction. Confound mass represents how large a confound is, as the product of its voxel extent and effect size. Results are shown at differing combinations of true effect size, true effect voxel extent, and sample size.
Inflated surface maps of meta-analytic z-statistics from Neurosynth for low-level confounds (top) and high-level cognitive tasks (bottom). Red reflects positive activations, blue reflects negative (de)activations, and darker colors indicate larger z-statistics. Maps are thresholded at |z| = 1 for visualization purposes.
Effect of confound effect size on true positive rates for task effects under FDR correction. Colors indicate sample sizes (N = 25 blue, N = 50 green, N = 100 orange); shading indicates effect size (light d = .2, medium d = .5, dark d = .8). The task and confound brain maps referenced in each panel are shown in Figure 3.
Effect of FDR-based publication bias on observed confound effect sizes. Simulated meta-analytic confound effect sizes are shown as violin plots for each combination of task effect and confound effect examined in the neural data simulations. Meta-analyses with publication bias (orange) substantially inflate these effect size estimates in all cases, relative to meta-analyses without publication bias (blue).
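A toy simulation in the spirit of the preprint's argument: a spatially large, strong confound contributes many small p-values, which relaxes the Benjamini-Hochberg threshold and changes what survives FDR correction. All parameters below are illustrative, not the manuscript's simulation settings.

```python
# Toy FDR/confound simulation (illustrative only): compare the true
# positive rate for task voxels with and without a large confound present.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(4)
n_sub = 25

def voxel_pvals(n_voxels, d):
    """One-sample t-test p-values for n_voxels with true effect size d."""
    data = rng.normal(loc=d, size=(n_sub, n_voxels))
    return stats.ttest_1samp(data, 0.0).pvalue

p_task = voxel_pvals(200, 0.5)    # small-extent, moderate true task effect
p_null = voxel_pvals(5000, 0.0)   # null voxels
p_conf = voxel_pvals(3000, 0.8)   # large-extent, strong confound ("confound mass")

for label, pvals in [("without confound", np.concatenate([p_task, p_null])),
                     ("with confound", np.concatenate([p_task, p_null, p_conf]))]:
    rejected = multipletests(pvals, alpha=0.05, method="fdr_bh")[0]
    print(f"{label}: task TPR = {rejected[:200].mean():.2f}")  # task voxels come first
```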
Reposted by Landry Bulls
markthornton.bsky.social
New paper from me at Cognition and Emotion! "Deep neural network models of emotion understanding" I discuss how deep nets can be used as cognitive models of emotion perception, prediction, and regulation: doi.org/10.1080/0269...

(h/t @ltjaql.bsky.social for the illustrations!)
Reposted by Landry Bulls
graceqmiao.bsky.social
Excited to share the DIMS Dashboard—a tool for displaying multimodal, extracted time series alongside the original video source! It’s designed to support and inspire a richer qualitative–quantitative research cycle.

Huge thanks to my amazing collaborators and mentors who made this possible! 🙌
wimpouw.bsky.social
Postprint: osf.io/987fm_v1. To appear in Proceedings of Cog Sci 2025.

DIMS Dashboard for Exploring Dynamic Interactions and Multimodal Signals.

The interdisciplinary @graceqmiao.bsky.social in the lead here! Developing a dynamic dashboard for a quali-quanti social neuroscience research cycle!
Reposted by Landry Bulls
tommybotch.bsky.social
New preprint! Thrilled to share my latest work with @esfinn.bsky.social -- "Sensory context as a universal principle of language in humans and LLMs"

osf.io/preprints/ps...
Reposted by Landry Bulls
esfinn.bsky.social
Alternative title: "LLMs don't have ears (or eyes)"

What do humans and machines miss out on when processing language as purely written text, without all the embodied audiovisual richness that scaffolds language in daily human contexts?

Very proud of this elegant work from @tommybotch.bsky.social
Reposted by Landry Bulls
markthornton.bsky.social
SCRAP Lab had a great time at #SANS2025! Can't wait till next year!
landrybulls.bsky.social
Welcome Gabe!!
gabefajardo.bsky.social
I'm THRILLED to announce that this fall, I will be joining the Psychological and Brain Sciences department at Dartmouth as a PhD student!!! I'll be working with the amazing @markthornton.bsky.social and the SCRAP Lab! 🌲🧠
Reposted by Landry Bulls
dartmouthpbs.bsky.social
For the Dartmouth PBS graduate student visiting day this year, we introduced a new format based on the "Hot Ones" talk show: faculty and current graduate students ate nuggets with a series of increasingly spicy hot sauces as they answered questions from prospective grad students! 🌶️🔥🥵
[Photos: the panel; trying some nuggets; uh-oh; feeling the burn]
Reposted by Landry Bulls
dklement.bsky.social
Transcribing multiple speakers with OpenAI’s Whisper? No problem.

Check out our recent work at BUT Speech@FIT in collaboration with CLSP JHU. It is fully open-sourced. Do not forget to try out our demo: pccnect.fit.vutbr.cz/gradio-demo

Read more in this thread 👇

[1/14]
[Figure: scheme of the DiCoW target-speaker ASR pipeline]
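DiCoW itself conditions Whisper on diarization output; the sketch below is NOT that pipeline but a common baseline it improves on: run an off-the-shelf diarizer, then transcribe each speaker turn separately with openai-whisper. The model names, file name, and sample rate are assumptions about a typical setup.

```python
# Baseline multi-speaker transcription sketch (NOT the DiCoW pipeline):
# diarize with pyannote.audio, then transcribe each speaker turn with Whisper.
import whisper                       # pip install openai-whisper
from pyannote.audio import Pipeline  # pip install pyannote.audio

SAMPLE_RATE = 16_000  # whisper.load_audio resamples everything to 16 kHz

asr = whisper.load_model("small")
# The pyannote model is gated; may require a Hugging Face auth token.
diarizer = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")

audio = whisper.load_audio("meeting.wav")   # hypothetical input file
diarization = diarizer("meeting.wav")

# Transcribe each diarized speaker turn separately.
for turn, _, speaker in diarization.itertracks(yield_label=True):
    start, end = int(turn.start * SAMPLE_RATE), int(turn.end * SAMPLE_RATE)
    result = asr.transcribe(audio[start:end], fp16=False)
    print(f"[{turn.start:7.2f}-{turn.end:7.2f}] {speaker}: {result['text'].strip()}")
```

This per-turn approach loses cross-turn context and handles overlapping speech poorly, which is part of what motivates diarization-conditioned models like the one described in the thread.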