Tinnitus
Neuroimaging
Computer science
Pattern
Artificial intelligence
Voxel
Deep learning
Leverage (statistics)
Machine learning
Psychology
Medicine
Audiology
Neuroscience
Social science
Sociology
Authors
Chieh-Te Lin,Sanjay Ghosh,Leighton B. Hinkley,Corby L. Dale,Ana Cláudia Silva de Souza,Jennifer Henderson Sabes,Christopher P. Hess,Meredith E. Adams,Steven W. Cheung,Srikantan S. Nagarajan
Identifier
DOI:10.1088/1741-2552/acab33
Abstract
Objective: Subjective tinnitus is an auditory phantom perceptual disorder without an objective biomarker. Fast and efficient diagnostic tools will advance clinical practice by detecting or confirming the condition, tracking changes in severity, and monitoring treatment response. Motivated by evidence of subtle anatomical, morphological, or functional information in magnetic resonance images of the brain, we examine data-driven machine learning methods for joint tinnitus classification (tinnitus or no tinnitus) and tinnitus severity prediction. Approach: We propose a deep multi-task multimodal framework for tinnitus classification and severity prediction using structural MRI (sMRI) data. To leverage complementary information across multimodal neuroimaging data, we integrate two modalities of three-dimensional sMRI: T1-weighted (T1w) and T2-weighted (T2w) images. To explore which components of the MR images drive task performance, we segment both T1w and T2w images into three tissue classes (cerebrospinal fluid, grey matter, and white matter) and evaluate the performance of each segmented image. Main results: Results demonstrate that our multimodal framework capitalizes on the information across both modalities (T1w and T2w) for the joint task of tinnitus classification and severity prediction. Significance: Our model outperforms existing learning-based and conventional methods in terms of accuracy, sensitivity, specificity, and negative predictive value.
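The abstract describes a multi-task architecture: two MRI modalities fused into a shared representation, from which one head classifies tinnitus and another predicts severity. The paper's actual model is a 3D convolutional network; the following is only a minimal NumPy sketch of the shared-encoder/two-head pattern, with toy linear layers and invented dimensions standing in for the real architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: flattened per-subject feature vectors for each modality.
# (Assumption: the real framework feeds 3D T1w/T2w volumes through conv encoders.)
n_subjects, d_t1, d_t2, d_shared = 8, 32, 32, 16

x_t1 = rng.normal(size=(n_subjects, d_t1))  # stand-in for T1w features
x_t2 = rng.normal(size=(n_subjects, d_t2))  # stand-in for T2w features

# Multimodal fusion: concatenate modalities, then a shared layer with ReLU.
w_shared = rng.normal(size=(d_t1 + d_t2, d_shared))
h = np.maximum(np.concatenate([x_t1, x_t2], axis=1) @ w_shared, 0.0)

# Task-specific heads sharing the representation h:
w_cls = rng.normal(size=(d_shared, 1))  # binary tinnitus classification
w_reg = rng.normal(size=(d_shared, 1))  # continuous severity regression
p_tinnitus = 1.0 / (1.0 + np.exp(-(h @ w_cls)))  # sigmoid probabilities
severity = h @ w_reg                              # unbounded severity scores

print(p_tinnitus.shape, severity.shape)
```

In a trained version of this pattern, both heads would be optimized jointly (e.g. a weighted sum of cross-entropy and regression losses), which is what makes the setup "multi-task": the shared encoder must learn features useful for both outputs.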