Computer science
Transfer learning
Task (project management)
Heart sounds
Recall
Wearable computer
Speech recognition
Artificial intelligence
Machine learning
Wearable technology
Audio signal processing
Domain (mathematical analysis)
Audio signal
Speech coding
Engineering
Medicine
Internal medicine
Mathematical analysis
Philosophy
Embedded systems
Linguistics
Systems engineering
Mathematics
Authors
Tomoya Koike,Kun Qian,Qiuqiang Kong,Mark D. Plumbley,Björn W. Schuller,Yoshiharu Yamamoto
Identifier
DOI:10.1109/embc44109.2020.9175450
Abstract
Cardiovascular disease is one of the leading causes of death worldwide. Over the past decade, heart sound classification has been increasingly studied for its feasibility as a non-invasive approach to monitoring a subject's health status. In particular, relevant studies have benefited from the rapid development of wearable devices and machine learning techniques. Nevertheless, finding and designing efficient acoustic features from heart sounds is an expensive and time-consuming task. Transfer learning methods can automatically extract higher-level representations from heart sounds without any human domain knowledge. However, most existing studies are based on models pre-trained on images, which may not fully capture the characteristics of audio. To this end, we propose a novel transfer learning model pre-trained on large-scale audio data for a heart sound classification task. In this study, the PhysioNet CinC Challenge Dataset is used for evaluation. Experimental results demonstrate that our proposed pre-trained audio models outperform other popular models pre-trained on images, achieving the highest unweighted average recall at 89.7%.
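The 89.7% figure above is an unweighted average recall (UAR), i.e. the mean of per-class recalls with every class weighted equally, which is the usual choice for imbalanced data such as normal vs. abnormal heart sounds. A minimal sketch of the metric (the example labels below are illustrative, not from the paper's dataset):

```python
from collections import defaultdict

def unweighted_average_recall(y_true, y_pred):
    """Mean of per-class recalls, weighting every class equally
    regardless of how many samples it contains."""
    correct = defaultdict(int)  # per-class count of correct predictions
    total = defaultdict(int)    # per-class count of true samples
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)

# Imbalanced toy example: 8 "normal" vs 2 "abnormal" recordings.
y_true = ["normal"] * 8 + ["abnormal"] * 2
y_pred = ["normal"] * 8 + ["abnormal", "normal"]
# recall(normal) = 8/8 = 1.0, recall(abnormal) = 1/2 = 0.5
print(unweighted_average_recall(y_true, y_pred))  # → 0.75
```

Note that plain accuracy on the same toy example would be 0.9, masking the miss on the minority class; UAR exposes it, which is why it is reported here instead.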