Modal verbs
Computer science
Ankle
Feature (linguistics)
Artificial intelligence
Electromyography
Set (abstract data type)
Pattern recognition (psychology)
Computer vision
Physical medicine and rehabilitation
Medicine
Linguistics
Philosophy
Pathology
Chemistry
Polymer chemistry
Programming language
Authors
Qiang Zhang, Ashwin Iyer, Ziyue Sun, Kang Kim, Nitin Sharma
Identifier
DOI:10.1109/tnsre.2021.3106900
Abstract
For decades, surface electromyography (sEMG) has been a popular non-invasive bio-sensing technology for predicting human joint motion. However, cross-talk, interference from adjacent muscles, and its inability to measure deeply located muscles limit its performance in predicting joint motion. Recently, ultrasound (US) imaging has been proposed as an alternative non-invasive technology for predicting joint movement due to its high signal-to-noise ratio, direct visualization of targeted tissue, and ability to access deep-seated muscles. This paper proposes a dual-modal approach that combines US imaging and sEMG for predicting volitional dynamic ankle dorsiflexion movement. Three feature sets were used, together with measured ankle dorsiflexion angles, to train multiple machine learning regression models: 1) a uni-modal set with four sEMG features, 2) a uni-modal set with four US imaging features, and 3) a dual-modal set with four dominant sEMG and US imaging features. The experimental results from a seated posture and five walking trials at different speeds, ranging from 0.50 m/s to 1.50 m/s, showed that the dual-modal set significantly reduced the prediction root mean square errors (RMSEs). Compared to the uni-modal sEMG feature set, the dual-modal set reduced RMSEs by up to 47.84% for the seated posture and up to 77.72% for the walking trials. Similarly, compared to the US imaging feature set, the dual-modal set reduced RMSEs by up to 53.95% for the seated posture and up to 58.39% for the walking trials. The findings show that the dual-modal sensing approach can potentially serve as a superior sensing modality for predicting human intent of a continuous motion, and could be implemented for volitional control of clinical rehabilitative and assistive devices.
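The evaluation pipeline the abstract describes, training a regression model on each feature set and comparing prediction RMSEs, can be sketched as follows. This is a minimal illustration only: the synthetic data, the linear least-squares regressor, and the feature dimensions (four sEMG and four US features, as stated in the abstract) stand in for the paper's actual measurements and models, which are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic stand-ins for the paper's measurements:
# 4 sEMG features, 4 US imaging features, and the ankle dorsiflexion angle.
n = 500
semg = rng.normal(size=(n, 4))
us = rng.normal(size=(n, 4))
angle = (semg @ np.array([0.5, -0.2, 0.1, 0.3])
         + us @ np.array([0.4, 0.2, -0.3, 0.1])
         + rng.normal(scale=0.1, size=n))

def rmse_of_fit(X, y):
    """Fit a least-squares linear regression and return in-sample RMSE."""
    X1 = np.column_stack([X, np.ones(len(X))])  # append intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return float(np.sqrt(np.mean(resid ** 2)))

rmse_semg = rmse_of_fit(semg, angle)                    # uni-modal sEMG set
rmse_us = rmse_of_fit(us, angle)                        # uni-modal US set
rmse_dual = rmse_of_fit(np.hstack([semg, us]), angle)   # dual-modal set

# Percentage RMSE reduction of the dual-modal set vs. a uni-modal baseline,
# the metric reported in the abstract (e.g. "reduced RMSEs by up to 47.84%").
reduction_vs_semg = 100 * (rmse_semg - rmse_dual) / rmse_semg
reduction_vs_us = 100 * (rmse_us - rmse_dual) / rmse_us
```

Because the simulated angle depends on both modalities, the dual-modal fit attains a lower RMSE than either uni-modal fit, mirroring the qualitative result reported in the paper.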