Yichen Yuan
@yichen-yuan.bsky.social
PhD candidate at Utrecht University • Interested in multisensory perception, working memory & attention
Reposted by Yichen Yuan
And now without Bluesky making the background black...
August 24, 2025 at 8:57 PM
Huge thanks to Surya and Nathan for their help😊

Open access preprint: osf.io/preprints/so...
All materials and data: osf.io/54vms/

Feel free to reach out if you have any questions! 9/9
December 5, 2024 at 12:03 PM
(2) Observers can flexibly prioritize one sense over the other in anticipation of modality-specific interference, using only the most informative sensory modality to guide behavior while nearly ignoring the others (even when they convey substantial information). 8/9
December 5, 2024 at 12:03 PM
Across these four experiments, we concluded that (1) observers use both hearing and vision when localizing static objects, but rely on unisensory input alone when localizing moving objects and predicting motion under occlusion. 7/9
December 5, 2024 at 12:03 PM
In Exp. 3, the target did not move but instead briefly appeared as a static stimulus at the exact same endpoints as in Exp. 1 and 2. Here, a substantial multisensory benefit was found when participants localized static audiovisual targets, showing near-optimal (MLE) integration. 6/9
December 5, 2024 at 12:03 PM
In Exp. 2, there was no occluder, so participants simply had to report where the moving target disappeared from the screen. Here, although localization estimates were in line with MLE predictions, no multisensory precision benefit was found either. 5/9
December 5, 2024 at 12:03 PM
In these two experiments, we showed that participants do not seem to benefit from audiovisual information when tracking occluded objects, but instead flexibly prioritize one sense (vision in Exp. 1A, audition in Exp. 1B) over the other in anticipation of modality-specific interference. 4/9
December 5, 2024 at 12:03 PM
We asked whether observers optimally weigh the auditory and visual components of audiovisual stimuli. We therefore compared the observed data to the predictions of a maximum likelihood estimation (MLE) model, which weights the unisensory inputs according to their uncertainty (variance). 3/9
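For context, this is the textbook form of the MLE rule (standard notation, not a quote from the paper): the bimodal estimate is a reliability-weighted average of the unisensory estimates,

\hat{S}_{AV} = w_A \hat{S}_A + w_V \hat{S}_V, \qquad w_A = \frac{\sigma_V^2}{\sigma_A^2 + \sigma_V^2}, \quad w_V = \frac{\sigma_A^2}{\sigma_A^2 + \sigma_V^2},

with predicted bimodal variance

\sigma_{AV}^2 = \frac{\sigma_A^2 \sigma_V^2}{\sigma_A^2 + \sigma_V^2} \le \min(\sigma_A^2, \sigma_V^2).

The second line is the testable precision benefit: integrated estimates should be at least as precise as the better unisensory estimate.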
December 5, 2024 at 12:03 PM
In Exp. 1A, moving targets (auditory, visual, or audiovisual) were occluded by an audiovisual occluder, and their final locations had to be inferred from target speed and occlusion duration. Exp. 1B was identical to Exp. 1A except that a visual-only occluder was used. 2/9
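In other words, occluded trials call for linear extrapolation. In rough notation (ours, not necessarily the paper's): given the point of disappearance x_occ, target speed v, and occlusion duration t, the location to report is approximately

\hat{x}_{\text{final}} = x_{\text{occ}} + v \cdot t.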
December 5, 2024 at 12:03 PM