Psychology
Superior temporal sulcus
Temporal cortex
Phonology
Neuroscience
Functional magnetic resonance imaging
Linguistics
Philosophy
Authors
Alice Van Audenhaege, Stefania Mattioni, Filippo Cerpelloni, Rémi Gau, Arnaud Szmalec, Olivier Collignon
Identifier
DOI: 10.1101/2024.07.25.605084
Abstract
Speech is a multisensory signal that can be extracted from the voice and the lips. Previous studies suggested that occipital and temporal regions encode both auditory and visual speech features, but their precise location and nature remain unclear. We characterized brain activity using fMRI (in male and female participants) to functionally and individually define the bilateral Fusiform Face Areas (FFA), the left Visual Word Form Area (VWFA), an audio-visual speech region in the left Superior Temporal Sulcus (lSTS), and control regions in the bilateral Parahippocampal Place Areas (PPA). In these regions, we performed multivariate pattern classification of corresponding phonemes (speech sounds) and visemes (lip movements). We observed that the VWFA and lSTS represent phonological information from both vision and sound. The multisensory nature of phonological representations appeared selective to the anterior portion of the VWFA, as we found viseme but not phoneme representations in the adjacent FFA or even the posterior VWFA, while the PPA did not encode phonology in any modality. Interestingly, cross-modal decoding revealed aligned phonological representations across the senses in the lSTS, but not in the VWFA. A whole-brain cross-modal searchlight analysis additionally revealed aligned audio-visual phonological representations in the bilateral pSTS and in left somato-motor cortex overlapping with the oro-facial articulators. Altogether, our results demonstrate that auditory and visual phonology are represented in the anterior VWFA, extending its functional coding beyond orthography. The geometries of auditory and visual representations do not align in the VWFA as they do in the STS and left somato-motor cortex, suggesting distinct multisensory representations across a distributed phonological network.
Significance statement
Speech is a multisensory signal that can be extracted from the voice and the lips. Which brain regions encode both visual and auditory speech representations?
We show that the Visual Word Form Area (VWFA) and the left Superior Temporal Sulcus (lSTS) both process phonological information from speech sounds and lip movements. However, while the lSTS aligns these representations across the senses, the VWFA does not, indicating different encoding mechanisms. These findings extend the functional role of the VWFA beyond reading. An additional whole-brain approach reveals shared representations in bilateral superior temporal cortex and left somato-motor cortex, indicating a distributed network for multisensory phonology.
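The cross-modal decoding logic described above (train a classifier on activity patterns from one modality, test it on the other; above-chance transfer implies the two representations are aligned) can be sketched in a few lines. This is a minimal illustration with simulated data, not the authors' pipeline: the voxel patterns, noise level, and shared per-phoneme "signatures" are all hypothetical assumptions standing in for real fMRI trial patterns.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical data: trial-by-voxel activity patterns for 3 phoneme classes,
# measured in two modalities (auditory speech sounds, visual lip movements).
n_trials, n_voxels, n_classes = 60, 50, 3
labels = np.repeat(np.arange(n_classes), n_trials // n_classes)

# Simulate an "aligned" region (like lSTS): both modalities share the same
# per-class pattern signatures, plus independent measurement noise.
signatures = rng.normal(size=(n_classes, n_voxels))
auditory = signatures[labels] + 0.5 * rng.normal(size=(n_trials, n_voxels))
visual = signatures[labels] + 0.5 * rng.normal(size=(n_trials, n_voxels))

# Cross-modal decoding: fit on auditory patterns, evaluate on visual ones.
clf = LinearSVC().fit(auditory, labels)
acc = clf.score(visual, labels)
print(f"cross-modal decoding accuracy: {acc:.2f}")  # chance level is 1/3
```

In a region with multisensory but *unaligned* representations (as the abstract reports for the VWFA), within-modality decoding would succeed for both phonemes and visemes while this cross-modal transfer would stay near chance.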