Keywords
Interpretability, Computer science, Discriminative model, Artificial intelligence, Perspective, Machine learning, Feature, Domain, Wearable computer, Representation, Class, Encoder, Feature learning, Visualization, Process (computing), Embedding, Embedded system
Authors
Chenlong Gao, Teng Zhang, Xinlong Jiang, Wuliang Huang, Yiqiang Chen, Jie Li
Identifier
DOI: 10.1109/smartworld-uic-atc-scalcom-digitaltwin-pricomp-metaverse56740.2022.00091
Abstract
Existing models achieve satisfactory results in detecting whether or not a user has fallen. However, because real-world falls are unpredictable and varied, binary fall detection models cannot distinguish different fall patterns and therefore cannot trigger pattern-specific protective measures. Recognizing these finer-grained fall types is thus an urgent challenge. Moreover, the complexity of fall detection models makes their decision processes difficult to explain. In this paper, we propose a new interpretable fine-grained fall detection network, called ProtoPLSTM. The model consists of three main modules: a CNN-LSTM encoder backbone, a contextual enhancement module, and a ProtoPNet-based network. The first module learns a multi-sensor feature representation through a carefully designed embedding network. The contextual enhancement module expands the receptive field to capture more discriminative features from contextual information, and then mines the inter-class differences and intra-class associations among different kinds of falls. The last module globally explains why a fall is detected as a particular kind of fall from the prototype's perspective. Our experiments on SisFall, a publicly available dataset with the largest number of fall classes, demonstrate that the proposed method outperforms state-of-the-art methods while remaining interpretable. Visualization and qualitative analysis explain, from a prototypical perspective, the important factors and decision processes behind different kinds of falls.
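To make the pipeline described in the abstract concrete, here is a minimal PyTorch sketch of a ProtoPNet-style prototype classifier on top of a CNN-LSTM encoder. This is an illustration of the general technique only, not the authors' implementation: the layer sizes, prototype count, negative-squared-distance similarity, and the channel/class counts (assuming SisFall's nine sensor channels and fifteen fall types) are assumptions, and the contextual enhancement module is omitted.

```python
# Hypothetical sketch of a ProtoPLSTM-like model: CNN-LSTM encoder over
# multi-sensor windows, followed by a ProtoPNet-style prototype layer.
# All sizes and the similarity function are assumptions for illustration.
import torch
import torch.nn as nn

class ProtoPLSTMSketch(nn.Module):
    def __init__(self, n_channels=9, n_classes=15, hidden=64,
                 prototypes_per_class=5):
        super().__init__()
        # CNN front end: local temporal features from raw sensor channels.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # LSTM: longer-range temporal context over the CNN features.
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        n_protos = n_classes * prototypes_per_class
        # Learnable prototypes living in the encoder's latent space.
        self.prototypes = nn.Parameter(torch.randn(n_protos, hidden))
        # Linear layer maps prototype similarities to class logits; in
        # ProtoPNet each prototype is associated with one class.
        self.last = nn.Linear(n_protos, n_classes, bias=False)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.cnn(x).transpose(1, 2)    # (batch, time', hidden)
        _, (hn, _) = self.lstm(h)
        z = hn[-1]                         # (batch, hidden) window embedding
        # Similarity to each prototype: negative squared L2 distance,
        # so nearer prototypes contribute larger logits.
        d = torch.cdist(z.unsqueeze(0), self.prototypes.unsqueeze(0))[0]
        return self.last(-d ** 2)          # (batch, n_classes) logits
```

At inference time, the per-prototype distances themselves carry the explanation: the prototypes nearest to a window's embedding indicate which learned fall patterns drove the prediction, which is the sense in which such a model explains "why a fall is detected as a particular kind of fall."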