Pattern
Sentence
Deep linguistic processing
Psychology
Sentence processing
Comprehension
Cognitive psychology
Modality (human-computer interaction)
Electroencephalography
Computer science
Linguistics
Natural language processing
Artificial intelligence
Neuroscience
Sociology
Programming language
Philosophy
Social science
Authors
Changfu Pei, Yuan Qiu, Fali Li, Xunan Huang, Yajing Si, Yuqin Li, Xiabing Zhang, Chunli Chen, Qiang Liu, Zehong Cao, Nai Ding, Shan Gao, Kimmo Alho, Dezhong Yao, Peng Xu
Source
Journal: Cerebral Cortex [Oxford University Press]
Date: 2022-09-30
Volume/Issue/Pages: 33 (8): 4740-4751
Citations: 2
Identifier
DOI:10.1093/cercor/bhac376
Abstract
Human language units are hierarchical, and reading acquisition involves integrating multisensory information (typically from auditory and visual modalities) to access meaning. However, it is unclear how the brain processes and integrates language information at different linguistic units (words, phrases, and sentences) provided simultaneously in auditory and visual modalities. To address this issue, we presented participants with sequences of short Chinese sentences through auditory, visual, or combined audio-visual modalities while electroencephalographic responses were recorded. With a frequency-tagging approach, we analyzed the neural representations of basic linguistic units (i.e., characters/monosyllabic words) and higher-level linguistic structures (i.e., phrases and sentences) across the 3 modalities separately. We found that audio-visual integration occurs in all linguistic units, and the brain areas involved in the integration varied across different linguistic levels. In particular, the integration of sentences activated the local left prefrontal area. Therefore, we used continuous theta-burst stimulation to verify that the left prefrontal cortex plays a vital role in the audio-visual integration of sentence information. Our findings suggest the advantage of bimodal language comprehension at hierarchical stages in language-related information processing and provide evidence for the causal role of the left prefrontal regions in processing information of audio-visual sentences.
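The frequency-tagging analysis named in the abstract works by presenting linguistic units at fixed rates so that each level of structure leaves a spectral peak in the EEG, which is then read out as power (or a signal-to-noise ratio) at the tagged frequency. The Python sketch below is a minimal illustration under assumed parameters: 4 Hz word rate, 2 Hz phrase rate, 1 Hz sentence rate, a 500 Hz sampling rate, and synthetic single-channel data. These rates and the neighbor-bin SNR metric are common in hierarchical tagging studies but are not taken from this paper, whose actual stimulus rates, preprocessing, and statistics are not reproduced here.

import numpy as np

FS = 500          # EEG sampling rate in Hz (assumed, not from the paper)
EPOCH_SEC = 10    # duration of one stimulus epoch in seconds (assumed)

def tagged_power(epochs: np.ndarray, fs: float, freqs: list[float]) -> dict[float, float]:
    """Return the SNR at each tagged frequency, averaged across epochs.

    epochs: array of shape (n_epochs, n_samples), a single EEG channel.
    """
    n = epochs.shape[1]
    spectrum = np.abs(np.fft.rfft(epochs, axis=1)) ** 2   # per-epoch power spectra
    spectrum = spectrum.mean(axis=0)                      # average over epochs
    fft_freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = {}
    for f in freqs:
        idx = int(np.argmin(np.abs(fft_freqs - f)))       # nearest FFT bin
        # Normalize the peak against nearby bins (skipping immediate
        # neighbors) so a value well above 1 reflects a response locked
        # to the stimulus rate rather than broadband power. Assumes the
        # tagged bin is far enough from 0 Hz for the slices to be valid.
        neighbors = np.r_[spectrum[idx - 3:idx - 1], spectrum[idx + 2:idx + 4]]
        out[f] = spectrum[idx] / neighbors.mean()
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(EPOCH_SEC * FS) / FS
    # Synthetic "EEG": noise plus weak responses at the three tagged rates.
    epochs = rng.standard_normal((20, t.size))
    for f, amp in [(4.0, 0.3), (2.0, 0.2), (1.0, 0.15)]:
        epochs += amp * np.sin(2 * np.pi * f * t)
    print(tagged_power(epochs, FS, [1.0, 2.0, 4.0]))  # sentence, phrase, word rates

In this toy example, SNR values well above 1 at 1, 2, and 4 Hz indicate sentence-, phrase-, and word-rate tracking, respectively; on real data the same readout would be computed per channel and compared across the auditory, visual, and audio-visual conditions.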