Journal of Cognitive Neuroscience (JoCN)
@jocn.bsky.social
1.4K followers 25 following 67 posts
Peer-reviewed journal published by MIT Press. JoCN publishes papers that bridge the gap between descriptions of information processing and specifications of brain activity: neuropsychology, experimental psychology, neurology, computational modeling, AI ..
Reposted by Journal of Cognitive Neuroscience (JoCN)
Task-dependent Modulation Masking of 4 Hz Envelope Following Responses
Abstract The perception and recognition of natural sounds, like speech, rely on the processing of slow amplitude modulations. Perception can be hindered by interfering modulations at similar rates, a phenomenon known as modulation masking. Cortical envelope following responses (EFRs) are highly sensitive to these slow modulations, but it is unclear how modulation masking impacts these cortical envelope responses. To dissociate stimulus-driven and attention-driven effects, we recorded EEG responses to 4 Hz modulated noise in a two-way factorial design, varying the level of modulation masking and intermodal attention. Auditory stimuli contained one of three random masking bands in the stimulus envelope, at various proximities in modulation frequency to the 4 Hz target, or an unmasked reference condition. During EEG recordings, the same stimuli were presented while participants performed either an auditory or a visual change detection task. Attention to the auditory modality resulted in a general enhancement of sustained EFRs to the 4 Hz target. In the visual task condition only, EFR 4 Hz power systematically decreased with increasing modulation masking, consistent with psychophysical masking patterns. However, during the auditory task, the 4 Hz EFRs were unaffected by masking and remained strong even at the highest degrees of masking. Rather than indicating a general bottom-up modulation-selective process, these results indicate that the masking of cortical envelope responses interacts with attention. We propose that auditory attention allows robust tracking of masked envelopes, possibly through a form of glimpsing of the target, whereas envelope responses to task-irrelevant auditory stimuli reflect stimulus salience. © 2025 Massachusetts Institute of Technology.
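The 4 Hz EFR power measure described above amounts to a frequency-domain analysis of epoch-averaged EEG. The following is a minimal sketch on synthetic data; the sampling rate, epoch count, noise level, and 7 Hz control frequency are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def efr_power(epochs, fs, target_hz=4.0):
    """Power at the target modulation frequency, taken from the FFT of the
    across-epoch average (phase-locked activity survives averaging)."""
    evoked = epochs.mean(axis=0)                  # average over epochs
    spectrum = np.fft.rfft(evoked) / evoked.size  # normalized spectrum
    freqs = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - target_hz))
    return np.abs(spectrum[bin_idx]) ** 2

# Simulate 60 one-second epochs: a phase-locked 4 Hz response buried in noise.
rng = np.random.default_rng(0)
fs, n_epochs, n_samples = 256, 60, 256
t = np.arange(n_samples) / fs
epochs = np.sin(2 * np.pi * 4 * t) + rng.normal(0, 2.0, (n_epochs, n_samples))

# Power at the 4 Hz target should clearly exceed power at a control frequency.
print(efr_power(epochs, fs, 4.0) > efr_power(epochs, fs, 7.0))  # True
```

Averaging before the FFT is what makes this an evoked (phase-locked) measure: the non-phase-locked noise cancels across epochs while the 4 Hz response survives.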
Enhanced Delta Band Neural Tracking of Degraded Fundamental Frequency Speech in Noisy Environments
Abstract Pitch variation of the fundamental frequency (F0) is critical to speech understanding, especially in noisy environments. Degrading the F0 contour reduces behaviorally measured speech intelligibility, posing greater challenges for tonal languages like Mandarin Chinese where the F0 pattern determines semantic meaning. However, neural tracking of Mandarin speech with degraded F0 information in noisy environments remains unclear. This study investigated neural envelope tracking of continuous Mandarin speech with three F0-flattening levels (original, flat-tone, and flat-all) under various signal-to-noise ratios (0, −9, and −12 dB). F0 contours were flattened at the word level for flat-tone and at the sentence level for flat-all Mandarin speech. Electroencephalography responses were indexed by the temporal response function in the delta (<4 Hz) and theta (4–8 Hz) frequency bands. Results show that delta-band envelope tracking is modulated by the degree of F0 flattening in a nonmonotonic manner. Notably, flat-tone Mandarin speech elicited the strongest envelope tracking compared with both original and flat-all speech, despite reduced F0 information. In contrast, the theta band, which primarily encodes speech signal-to-noise level, was not affected by F0 changes. In addition, listeners with better pitch-related music skills exhibited more efficient neural envelope speech tracking, despite being musically naive. These findings indicate that neural envelope tracking in the delta (but not theta) band is highly specific to F0 pitch variation and highlight the role of intrinsic musical skills for speech-in-noise benefits. © 2025 Massachusetts Institute of Technology.
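The temporal response function (TRF) used above to index envelope tracking is, at its core, a regularized regression from lagged copies of the speech envelope onto the EEG. Here is a minimal sketch on simulated data, assuming a simple ridge solution; the lag range, regularization strength, and the band-specific filtering of a real pipeline are illustrative assumptions and are not the study's settings.

```python
import numpy as np

def fit_trf(stimulus, response, n_lags, ridge=1.0):
    """Ridge-regression TRF: weights w minimizing
    ||X w - response||^2 + ridge * ||w||^2, where column `lag` of X holds
    the stimulus delayed by `lag` samples."""
    n = stimulus.size
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[: n - lag]
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ response)

# Simulate: the "EEG" is the stimulus envelope convolved with a known kernel.
rng = np.random.default_rng(1)
envelope = rng.normal(size=2000)
true_kernel = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
eeg = np.convolve(envelope, true_kernel)[:2000] + rng.normal(0, 0.1, 2000)

trf = fit_trf(envelope, eeg, n_lags=5)
print(np.argmax(trf))  # the recovered TRF peaks at the kernel's peak lag: 2
```

Because the simulated response is exactly a lagged linear mixture of the envelope, the fitted weights recover the generating kernel; real EEG adds nonlinearity and far lower signal-to-noise, which is why cross-validated ridge parameters are used in practice.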
Bilingualism Is Associated with Significant Structural and Connectivity Alterations in the Thalamus in Adulthood
Abstract Language is a sophisticated cognitive skill that relies on the coordinated activity of the cerebral cortex. Acquiring a second language creates intricate modifications in brain connectivity. Although numerous studies have evaluated the impact of second language acquisition on brain networks in adulthood, the results regarding the ultimate form of adaptive plasticity remain inconsistent within the adult population. Furthermore, due to the assumption that subcortical regions are not significantly involved in language-related tasks, the thalamus has rarely been analyzed in relation to other language-relevant cortical regions. Given these limitations, we aimed to evaluate the functional connectivity and volume modifications of thalamic subfields using magnetic resonance imaging (MRI) modalities following the acquisition of a second language. Structural MRI and fMRI data from 51 participants were collected from the OpenNeuro database. The participants were divided into three groups: monolingual (ML), early bilingual (EB), and late bilingual (LB). The EB group consisted of individuals proficient in both English and Spanish, with exposure to these languages before the age of 10 years. The LB group consisted of individuals proficient in both English and Spanish, but with exposure to these languages after the age of 14 years. The ML group included participants proficient only in English. Our results revealed that the ML group exhibited increased functional connectivity in all thalamic subfields (anterior, intralaminar-medial, lateral, ventral, and pulvinar) compared with the EB and LB groups. In addition, a significantly decreased volume of the left suprageniculate nucleus was found in the bilingual groups compared with the ML group.
This study provides evidence suggesting that acquiring a second language may be protective against dementia, owing to its high plasticity potential, which acts synergistically with cognitive functions to slow the degenerative process. © 2025 Massachusetts Institute of Technology.
Temporal Unfolding of Spelling-to-Sound Mappings in Visual (Pseudo)word Recognition
Abstract Behavioral research has shown that inconsistency in spelling-to-sound mappings slows visual word recognition and word naming. However, the time course of this effect remains underexplored. To address this, we asked skilled adult readers to perform a 1-back repetition detection task that did not explicitly involve phonological coding, in which we manipulated lexicality (high-frequency words vs. pseudowords) and sublexical spelling-to-sound consistency (treated both as a dichotomous factor, consistent vs. inconsistent, and as a continuous dimension), while recording their brain electrical activity. The ERP results showed that the adult brain distinguishes between real and nonexistent words within 119–172 msec after stimulus onset (early N170), likely reflecting initial, rapid access to a primitive visuo-orthographic representation. The consistency of spelling-to-sound mappings exerted an effect shortly after the lexicality effect (172–270 msec; late N170), which percolated to the 353- to 475-msec range but only for real words. This suggests that, in expert readers, orthographic and phonological codes become available automatically and nearly simultaneously within the first 200 msec of the recognition process. We conclude that the early coupling of orthographic and phonological information plays a core role in visual word recognition by mature readers. Our findings support "quasiparallel" processing rather than strict cognitive seriality in early visual word recognition. © 2025 Massachusetts Institute of Technology.
Distinguishing Neural Correlates of Prediction Errors on Perceptual Content and Detection of Content
Abstract Accounting for why discrimination between different perceptual contents is not always accompanied by conscious detection of that content remains a challenge for predictive processing theories of perception. Here, we test a hypothesis that detection is supported by a distinct inference within generative models of perceptual content. We develop a novel visual perception paradigm that probes such inferences by manipulating both expectations about stimulus content (stimulus identity) and detection of content (stimulus presence). In line with model simulations, we show that both content and detection expectations influence RTs on a categorization task. By combining a no-report version of our task with functional neuroimaging, we reveal that violations of expectations (prediction errors [PEs]) about perceptual content and detection are supported by visual cortex and pFC in qualitatively different ways: Within visual cortex, activity patterns diverge only on trials with a content PE, but within these trials, further divergence is seen for detection PEs. In contrast, within pFC, activity patterns diverge only on trials with a detection PE, but within these trials, further divergence is seen for content PEs. These results suggest rich encoding of both content and detection PEs and highlight a distributed neural basis for inference on content and detection of content in the human brain. © 2024 Massachusetts Institute of Technology.
Functional Brain Networks Underlying Autobiographical Event Simulation: An Update
Abstract fMRI studies typically explore changes in the BOLD signal underlying discrete cognitive processes that occur over milliseconds to a few seconds. However, autobiographical cognition is a protracted process and requires fMRI tasks with longer trials to capture the temporal dynamics of the underlying brain networks. In the current study, we provided an updated analysis of the fMRI data obtained from a published autobiographical event simulation study, with a slow event-related design (34-sec trials), that involved participants recalling past, imagining past, and imagining future autobiographical events, as well as completing a semantic association control task. Our updated analysis using Constrained Principal Component Analysis for fMRI retrieved two networks reported in the original study: (1) the Default Mode Network, which activated during the autobiographical event simulation conditions but deactivated during the control condition, and (2) the Multiple Demand Network, which activated early in all conditions during the construction of the required representations (i.e., autobiographical events or semantic associates). Two novel networks also emerged: (1) the Response Network, which activated during the scale-rating phase, and (2) the Maintaining Internal Attention Network, which, while active in all conditions during the elaboration of details associated with the simulated events, was more strongly engaged during the imagination and semantic association control conditions. Our findings suggest that the Default Mode Network does not support autobiographical simulation alone, but it co-activates with the Multiple Demand Network and Maintaining Internal Attention Network, with the timing of activations depending on evolving task demands during the simulation process.
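Constrained Principal Component Analysis, named above, combines regression and PCA: the data matrix is first projected onto the task design, and a singular value decomposition is then applied to the predicted portion, so components are constrained to design-related variance. The following is a toy sketch of that two-step idea; the matrix sizes, the two-component cut, and the random "design" are illustrative assumptions, not the study's analysis.

```python
import numpy as np

def cpca(Z, G, n_components=2):
    """Z: time x voxels data matrix; G: time x predictors design matrix.
    Projects Z onto the column space of G, then applies SVD to the
    predicted portion so components capture design-related variance only."""
    beta, *_ = np.linalg.lstsq(G, Z, rcond=None)  # least-squares fit
    GC = G @ beta                                 # design-predicted part of Z
    U, s, Vt = np.linalg.svd(GC, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]  # component time courses
    loadings = Vt[:n_components]                     # component voxel maps
    return scores, loadings

# Toy data: 120 time points, 500 "voxels", a 4-column design matrix.
rng = np.random.default_rng(2)
G = rng.normal(size=(120, 4))
Z = G @ rng.normal(size=(4, 500)) + rng.normal(0.0, 0.5, (120, 500))
scores, loadings = cpca(Z, G)
print(scores.shape, loadings.shape)  # (120, 2) (2, 500)
```

The key property is that noise uncorrelated with the design is removed by the regression step before the SVD, so the retrieved components (here, time courses and voxel loadings) reflect task-locked structure.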
Neural Correlates of the Musicianship Advantage to the Cocktail Party Effect
Abstract Prior research has indicated that musicians show an auditory processing advantage in phonemic processing of language. The aim of the current study was to elucidate when in the auditory cortical processing stream this advantage emerges in a cocktail-party-like environment. Participants (n = 34) were aged 18–35 years and deemed to be either a musician (10+ years of experience) or nonmusician (no formal training). EEG data were collected while participants were engaged in a phoneme discrimination task. During the task, participants were asked to discern auditory "ba" and "pa" phonemes in two conditions: one with competing speech (target with distractor [TD]) and one without competing speech (target only). Behavioral results showed that musicians discriminated phonemes better under the TD condition than nonmusicians, whereas no performance differences were observed during the target only condition. Analysis of the EEG ERP showed musicianship-based differences at both early (N1) and late (P3) processing stages during the TD condition. Specifically, musicians exhibited decreased neural activity during the N1 and increased neural activity during the P3. Source localization of the P3 showed that musicians had increased activity in the right superior/middle temporal gyrus. Results from this study indicate that musicians have a phonemic processing advantage specifically when presented in the context of distraction, which arises from a shift in neural activity from early (N1) to late (P3) stages of cortical phonemic processing.
Saccades and Blinks Index Cognitive Demand during Auditory Noncanonical Sentence Comprehension
Abstract Noncanonical sentence structures pose comprehension challenges because they impose increased cognitive demand. Prosody may partially alleviate this cognitive load. These findings largely stem from behavioral studies, yet physiological measures may reveal additional insights into how cognition is deployed to parse sentences. Pupillometry has been at the forefront of investigations into physiological measures of cognitive demand during auditory sentence comprehension. This study offers an alternative approach by examining whether eye-tracking measures, including blinks and saccades, index cognitive demand during auditory noncanonical sentence comprehension and whether these metrics are sensitive to reductions in cognitive load associated with typical prosodic cues. We further investigated how eye-tracking patterns differ across correct and incorrect responses, as a function of time, and how each relates to behavioral measures of cognition. Canonical and noncanonical sentence comprehension was measured in 30 younger adults using an auditory sentence–picture matching task. We also assessed participants' attention and working memory. Blinking and saccades both differentiate noncanonical sentences from canonical sentences. Saccades further distinguish noncanonical structures from each other. Participants made more saccades on incorrect than correct trials. The number of saccades also related to working memory, regardless of syntax. However, neither eye-tracking metric was sensitive to the changes in cognitive demand that were behaviorally observed in response to typical prosodic cues. Overall, these findings suggest that eye-tracking indices, particularly saccades, reflect cognitive demand during auditory noncanonical sentence comprehension when visual information is present, offering greater insights into the strategies and neural resources participants use to parse auditory sentences.
Neural Evidence for Feature-based Distractor Inhibition
Abstract Interference from a salient distractor is typically reduced when the appearance of the distractor follows either spatial or feature-based regularities. Although there is a growing body of literature on distractor location learning, the understanding of distractor feature learning remains limited. In the current study, we investigated distractor feature learning by using EEG measures. We assumed that learning benefits distractor handling, and we investigated the role of intertrial priming in distractor feature learning. Furthermore, we examined whether distractor feature learning influences later visual working memory (VWM) performance. Participants performed an adapted variant of the additional singleton task with a distractor that appeared more often in a specific color. The behavioral results provided additional evidence that observers can use distractor feature regularities to reduce distractor interference. At the neural level, we found a reduced PD with high-probability compared with low-probability distractors, suggesting that less suppression is required when the distractor appears in the more likely color. This reduced need for suppression was partly driven by intertrial priming. The PD elicited by repeated high-probability trials decreased over time, indicating that experience with the distractor reduced the need for suppression. In addition, the results showed that distractor feature learning did not affect VWM performance. Overall, our findings demonstrate that distractor feature learning decreases the interference of a salient distractor while also benefiting from intertrial priming processes, thereby improving attentional selection. In addition, it seems that learned distractor feature inhibition is not maintained in VWM when the task context is changed.
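The PD component discussed above is conventionally computed as a contralateral-minus-ipsilateral difference wave at posterior electrodes, time-locked to displays containing a lateral distractor. Below is a minimal sketch on synthetic waveforms; the latency, amplitude, sampling rate, and electrode pairing are illustrative assumptions, not the study's montage or analysis windows.

```python
import numpy as np

def pd_difference(contra, ipsi):
    """Contra-minus-ipsi difference wave; a positive posterior deflection
    in the PD time window is taken to index distractor suppression."""
    return contra - ipsi

fs = 500
t = np.arange(0, 0.4, 1 / fs)           # 0-400 msec epoch
ipsi = 0.2 * np.sin(2 * np.pi * 5 * t)  # toy ipsilateral ERP
# Add a PD-like positivity around 250 msec to the contralateral channel.
contra = ipsi + 1.5 * np.exp(-((t - 0.25) ** 2) / (2 * 0.02**2))
diff = pd_difference(contra, ipsi)
print(round(float(t[np.argmax(diff)]), 3))  # peak latency of the PD, in sec
```

Subtracting the ipsilateral waveform cancels activity common to both hemispheres, isolating lateralized processing of the distractor; a smaller PD for high-probability distractor colors is what the abstract interprets as a reduced need for suppression.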
Neural Associations between Inhibitory Control and Counterintuitive Reasoning in Science and Maths in Primary School Children
Abstract Emerging evidence suggests that inhibitory control (IC) plays a pivotal role in science and maths counterintuitive reasoning by suppressing incorrect intuitive concepts, allowing correct counterintuitive concepts to come to mind. Neuroimaging studies have shown greater activation in the ventrolateral and dorsolateral pFCs when adults and adolescents reason about counterintuitive concepts, which has been interpreted as reflecting IC recruitment. However, the extent to which neural systems underlying IC support science and maths reasoning remains unexplored in children. This developmental stage is of particular importance, as many crucial counterintuitive concepts are learned in formal education in middle childhood. To address this gap, fMRI data were collected while fifty-six 7- to 10-year-olds completed counterintuitive science and maths problems, plus IC tasks of interference control (Animal Size Stroop) and response inhibition (go/no-go). Univariate analysis showed large regional overlap in activation between counterintuitive reasoning and interference control, with more limited activation observed in the response inhibition task. Multivariate similarity analysis, which explores fine-scale patterns of activation across voxels, revealed neural activation similarities between (i) science and maths counterintuitive reasoning and interference control tasks in frontal, parietal, and temporal regions, and (ii) maths reasoning and response inhibition tasks in the precuneus/superior parietal lobule. Extending previous research in adults and adolescents, this evidence is consistent with the proposal that IC, specifically interference control, supports children's science and maths counterintuitive reasoning, although further research will be needed to demonstrate that the similarities observed do not reflect more general multidemand processes.
Debunking the Myth of Excitatory and Inhibitory Repetitive Transcranial Magnetic Stimulation in Cognitive Neuroscience Research
Abstract Repetitive TMS (rTMS) is a powerful neuroscientific tool with the potential to noninvasively identify brain–behavior relationships in humans. Early work suggested that certain rTMS protocols (e.g., continuous theta-burst stimulation, intermittent theta-burst stimulation, high-frequency rTMS, low-frequency rTMS) predictably alter the probability that cortical neurons will fire action potentials (i.e., change cortical excitability). However, despite significant methodological, conceptual, and technical advances in rTMS research over the past few decades, overgeneralization of early rTMS findings has led to a stubbornly persistent assumption that rTMS protocols by their nature induce behavioral and/or physiological inhibition or facilitation, even when they are applied to nonmotor cortical sites or under untested circumstances. In this Perspectives article, we offer a "public service announcement" that summarizes the origins of this problematic assumption, highlighting limitations of the seminal studies that inspired it and results of contemporary studies that violate it. Next, we discuss problems associated with holding this assumption, including making brain–behavior inferences without confirming the locality and directionality of neurophysiological changes. Finally, we provide recommendations for researchers to eliminate this misguided assumption when designing and interpreting their own work, emphasizing results of recent studies showing that the effects of rTMS on neurophysiological metrics and their associated behaviors can be caused by mechanisms other than binary changes in excitability of the stimulated brain region or network. Collectively, we contend that no rTMS protocol is by its nature either excitatory or inhibitory, and that researchers must use caution with these terms when forming experimental hypotheses and testing brain–behavior relationships.
How Linguistic and Nonlinguistic Vocalizations Shape the Perception of Emotional Faces—An Electroencephalography Study
Abstract Vocal emotions are crucial in guiding visual attention toward emotionally significant environmental events, such as recognizing emotional faces. This study employed continuous EEG recordings to examine the impact of linguistic and nonlinguistic vocalizations on facial emotion processing. Participants completed a facial emotion discrimination task while viewing fearful, happy, and neutral faces. The behavioral and ERP results indicated that fearful nonlinguistic vocalizations accelerated the recognition of fearful faces and elicited a larger P1 amplitude, whereas happy linguistic vocalizations accelerated the recognition of happy faces and similarly induced a greater P1 amplitude. During the recognition of fearful faces, a greater N170 component was observed in the right hemisphere when the emotional category of the priming vocalization was consistent with the face stimulus. In contrast, this effect occurred in the left hemisphere during the recognition of happy faces. Representational similarity analysis revealed that the temporoparietal regions automatically differentiate between linguistic and nonlinguistic vocalizations early in face processing. In conclusion, these findings enhance our understanding of the interplay between vocalization types and facial emotion recognition, highlighting the importance of cross-modal processing in emotional perception.
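Representational similarity analysis (RSA), used in the study above, compares the pairwise dissimilarity structure of neural activity patterns across conditions with a model's dissimilarity structure. The following is a minimal sketch on synthetic patterns, assuming correlation-distance RDMs; the four-condition layout and feature count are illustrative assumptions only.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between all
    pairs of condition patterns (rows)."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(neural_rdm, model_rdm):
    """Correlate the upper triangles of two RDMs (second-order similarity)."""
    iu = np.triu_indices_from(neural_rdm, k=1)
    return np.corrcoef(neural_rdm[iu], model_rdm[iu])[0, 1]

rng = np.random.default_rng(3)
# Four conditions (e.g., vocalization type x emotion), 100 features each.
model_patterns = rng.normal(size=(4, 100))
neural_patterns = model_patterns + rng.normal(0, 0.05, (4, 100))  # noisy copy

score = rsa_score(rdm(neural_patterns), rdm(model_patterns))
print(score > 0.8)  # True: the noisy patterns share the model's geometry
```

Because only the pattern of pairwise dissimilarities is compared, RSA abstracts away from which voxels or sensors carry the signal, which is what lets it test whether a region distinguishes stimulus categories regardless of response scale.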