Keywords
Anxiety
Modality (human–computer interaction)
Mental health
Facial expression
Major depressive disorder
Psychology
Artificial intelligence
Computer science
Medicine
Audiology
Clinical psychology
Psychiatry
Cognition
Authors
Zifan Jiang, Salman Seyedi, Emily Griner, Ahmed Abbasi, Ali Bahrami Rad, Hyeokhyen Kwon, Robert O. Cotes, Gari D. Clifford
Identifier
DOI: 10.1109/jbhi.2024.3352075
Abstract
Objective: Psychiatric evaluation suffers from subjectivity and bias, and is hard to scale due to intensive professional training requirements. In this work, we investigated whether behavioral and physiological signals, extracted from tele-video interviews, differ in individuals with psychiatric disorders. Methods: Temporal variations in facial expression, vocal expression, linguistic expression, and cardiovascular modulation were extracted from simultaneously recorded audio and video of remote interviews. Averages, standard deviations, and Markovian process-derived statistics of these features were computed from 73 subjects. Four binary classification tasks were defined: detecting 1) any clinically diagnosed psychiatric disorder, 2) major depressive disorder, 3) self-rated depression, and 4) self-rated anxiety. Each modality was evaluated individually and in combination. Results: Statistically significant feature differences were found between psychiatric and control subjects. Correlations were found between features and self-rated depression and anxiety scores. Heart rate dynamics provided the best unimodal performance, with areas under the receiver operating characteristic curve (AUROCs) of 0.68–0.75, depending on the classification task. Combining multiple modalities yielded AUROCs of 0.72–0.82. Conclusion: Multimodal features extracted from remote interviews revealed informative characteristics of clinically diagnosed and self-rated mental health status. Significance: The proposed multimodal approach has the potential to facilitate scalable, remote, and low-cost assessment for low-burden automated mental health services.
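The abstract's feature summarization (averages, standard deviations, and Markovian process-derived statistics of each behavioral or physiological time series) can be illustrated with a short sketch. The code below is not the authors' implementation: the four-state quantile discretization and the choice of entropy rate as the Markov-derived statistic are illustrative assumptions.

```python
# A minimal sketch, assuming a 1-D feature time series (e.g., heart rate).
# n_states=4 and entropy rate as the Markov-derived statistic are
# illustrative choices, not taken from the paper.
import numpy as np

def markov_stats(series, n_states=4):
    """Summarize a feature series: mean, SD, and a Markov-chain statistic."""
    # Quantile-based discretization into n_states states.
    edges = np.quantile(series, np.linspace(0, 1, n_states + 1)[1:-1])
    states = np.digitize(series, edges)

    # First-order transition matrix estimated from state-to-state counts.
    counts = np.zeros((n_states, n_states))
    for s, t in zip(states[:-1], states[1:]):
        counts[s, t] += 1
    trans = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)

    # Stationary distribution from the leading left eigenvector.
    vals, vecs = np.linalg.eig(trans.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = np.abs(pi) / np.abs(pi).sum()

    # Entropy rate: average unpredictability of the next state.
    with np.errstate(divide="ignore", invalid="ignore"):
        row_h = -np.nansum(np.where(trans > 0, trans * np.log2(trans), 0), axis=1)
    return {"mean": float(series.mean()),
            "std": float(series.std()),
            "entropy_rate": float(pi @ row_h)}

# Example: summarize a simulated heart-rate trace (toy data, 1 Hz).
rng = np.random.default_rng(0)
hr = 70 + np.cumsum(rng.normal(0, 0.5, 600))
print(markov_stats(hr))
```

The per-modality evaluation and multimodal combination the abstract reports can likewise be sketched. Logistic regression, 5-fold cross-validation, and probability averaging are placeholder choices (the abstract does not specify the classifier or fusion rule), and the feature matrices below are random stand-ins, not real data.

```python
# A hedged sketch of per-modality AUROC evaluation plus a simple
# late-fusion baseline; all feature matrices and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 73  # subject count from the abstract
y = rng.integers(0, 2, n)  # placeholder labels (e.g., disorder vs. control)
modalities = {  # placeholder feature matrices per modality
    "facial": rng.normal(size=(n, 10)),
    "vocal": rng.normal(size=(n, 8)),
    "linguistic": rng.normal(size=(n, 12)),
    "cardiovascular": rng.normal(size=(n, 6)),
}

probs = {}
for name, X in modalities.items():
    clf = LogisticRegression(max_iter=1000)
    # Cross-validated probabilities keep train and test subjects disjoint.
    p = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    probs[name] = p
    print(f"{name:14s} AUROC = {roc_auc_score(y, p):.2f}")

fused = np.mean(list(probs.values()), axis=0)  # late fusion by averaging
print(f"{'fused':14s} AUROC = {roc_auc_score(y, fused):.2f}")
```

Averaging predicted probabilities is the simplest late-fusion rule; with real features, the per-modality and fused AUROCs would play the role of the 0.68–0.75 and 0.72–0.82 figures quoted above.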