Computer science
Artificial intelligence
Speech recognition
Fetal movement
Timestamp
Feature (linguistics)
Support vector machine
Feature extraction
Mel cepstrum
Heartbeat
Naive Bayes classifier
Pattern recognition (psychology)
Fetal head
Trunk
Random forest
Computer vision
Binary classification
Ultrasound
Classifier (UML)
Receiver operating characteristic
Audio signal
Perception
Latency (audio)
Autoregressive model
Gestational age
Artificial neural network
Authors
Kenneth Moise,Kelly Gaither,Anna Madden-Rusnak,Kathy Lowry,Emily Hutson,Danielle Bruns,Reinaldo Valero
Identifier
DOI: 10.1097/aog.0000000000006228
Abstract
OBJECTIVE: To evaluate whether machine learning could be used with audio recordings from a smartphone to detect fetal movements that create disruptions of the amniotic fluid environment.

METHODS: We conducted a prospective study to simultaneously record fetal movements seen on ultrasound and audio recordings using a smartphone placed on the maternal abdomen, and to compare these with maternal perception of fetal movements. Smartphone audio segments were preprocessed to reduce noise, window the signal, and divide it into tagged 1-second audio snippets. These were subsequently converted into visual representations of their acoustic features known as Mel-frequency cepstral coefficients (MFCCs). Selected MFCCs were examined to evaluate how feature characteristics vary with gestational age and body mass index (BMI). Fetal movement detected on ultrasonography was considered the gold standard to estimate the accuracy of model prediction applied to tagged audio segments and of maternal perception of movement. The area under the receiver operating characteristic curve (AUROC) was used to evaluate the accuracy of the binary classifier in detecting the presence or absence of any fetal movement. Macro F1 scores were used to evaluate the accuracy for more refined movements (gross movement, breathing, and hiccups). Isolated trunk and limb movements were marked with a single timestamp, and continuous or repetitive gross fetal movements were annotated with a continuous timestamp spanning the duration of the activity.

RESULTS: Overall, 136 participants were included; 30 patients were followed longitudinally, and 106 received only one study visit. Generalized additive models were applied to selected MFCCs and analyzed separately for cohort recordings and fetal movement types. Results revealed nonlinear associations with gestational age (adjusted P < .001) and maternal BMI (adjusted P < .001), informing algorithm refinement.
In our final model, adjusting for gestational age and maternal BMI, detection of fetal movement with smartphone audio recordings was highly accurate. Binary detection of the presence or absence of any fetal movement achieved an AUROC of 0.886 (95% CI 0.883–0.888), compared with a maternal perception detection rate of 3.0%. Gross fetal movement was detected with an accuracy of 64.0% (95% CI 63.1–66.7%), whereas maternal perception of fetal movements yielded an accuracy of 18.0%. Similarly, the accuracy of audio recordings against ultrasound-detected fetal breathing movements was 93.0% (95% CI 92.0–94.2%), compared with 3.0% for maternal perception. Finally, the accuracy of audio recordings for fetal hiccups was 73.0% (95% CI 68.2–76.2%), compared with 32.0% for maternal perception.

CONCLUSION: Audio-based assessment of fetal movement using a smartphone can reliably detect gross fetal movements, as well as fetal breathing and hiccups observed on ultrasonography, and proved superior to maternal perception of movements.
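The feature-extraction step described in the methods (windowed 1-second audio snippets converted to Mel-frequency cepstral coefficients) can be sketched in plain NumPy/SciPy. This is a minimal illustrative sketch, not the study's implementation: the sample rate, frame length, hop size, and filter counts below are assumed defaults, not parameters reported by the authors.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters evenly spaced on the mel scale from 0 Hz to Nyquist.
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

def mfcc(signal, sr=4000, frame_len=400, hop=160,
         n_fft=512, n_filters=26, n_coeffs=13):
    """MFCCs for one audio snippet; all parameter values are assumptions."""
    # Frame the snippet and apply a Hamming window to each frame.
    frames = np.array([signal[s:s + frame_len] * np.hamming(frame_len)
                       for s in range(0, len(signal) - frame_len + 1, hop)])
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Mel filterbank energies, log-compressed, then DCT -> cepstral coefficients.
    fb = mel_filterbank(n_filters, n_fft, sr)
    log_energies = np.log(power @ fb.T + 1e-10)
    return dct(log_energies, type=2, axis=1, norm='ortho')[:, :n_coeffs]
```

The resulting coefficient matrix (frames × coefficients) is the kind of 2-D acoustic representation the abstract refers to as a "visual representation" of the snippet, suitable as classifier input.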
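The two accuracy measures used above (AUROC for the binary any-movement classifier, macro F1 for the refined movement types) can be computed directly, without an ML library. A minimal sketch; the label encodings and toy values in the usage note are illustrative, not study data.

```python
import numpy as np

def auroc(y_true, scores):
    # Rank-statistic AUROC: the probability that a random positive example
    # receives a higher score than a random negative one.
    # (Ties are not rank-averaged; adequate for continuous scores.)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def macro_f1(y_true, y_pred, classes):
    # Unweighted mean of per-class F1, so rarer movement types
    # (e.g. hiccups) count as much as common ones.
    f1s = []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))
```

For example, with toy ultrasound-derived labels `[0, 0, 1, 1]` and classifier scores `[0.1, 0.4, 0.35, 0.8]`, `auroc` returns 0.75; the same functions applied per snippet against the ultrasound gold standard would yield figures analogous to those reported in the results.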