Computer science
Convolutional neural network
Artificial intelligence
Convolutional code
Pattern recognition (psychology)
Speech recognition
Decoding methods
Natural language processing
Algorithm
Authors
Tan-Hsu Tan,Yang-Lang Chang,Jun-Rong Wu,Yung-Fu Chen,Mohammad Alkhaleefah
Identifiers
DOI:10.1109/jiot.2023.3294421
Abstract
Convolutional neural networks (CNNs) have shown great promise in human activity recognition, but long-term dependencies in time-series data can be difficult to capture using standard CNNs. This study introduces a new CNN architecture that incorporates a multi-head attention mechanism (CNN-MHA) to address this challenge. The mechanism is composed of several attention heads, each independently calculating attention weights for distinct segments of the input. The attention head outputs are then concatenated and processed through a fully connected layer to produce the final attention representation. The multi-head attention mechanism allows the network to focus on relevant features and maintain long-term dependencies in the input data. The proposed model is evaluated on the physical activity monitoring for aging people dataset (PAMAP2) from the UCI machine learning repository, which is preprocessed by cleaning, normalization, segmentation, and reshaping before being split into training, validation, and testing sets. The experimental results demonstrate that the CNN-MHA model outperforms existing models, achieving an F1-score of 95.7%. In particular, the multi-head attention mechanism significantly improves the model's ability to recognize complex activity patterns. Furthermore, the model attained an average inference latency of 0.304 seconds, which can be crucial in real-time applications. The findings demonstrate the substantial promise of the proposed CNN-MHA architecture for human activity recognition tasks, offering a powerful tool for advancing the state of the art in this domain.
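The abstract's description of the attention module — per-head attention weights computed independently, head outputs concatenated, then mixed by a fully connected layer — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the dimensions, random toy weights, and scaled dot-product formulation are assumptions chosen only to make the data flow concrete.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads, rng):
    """Toy multi-head self-attention over a sensor window.

    x: (seq_len, d_model) array, e.g. CNN feature maps over time.
    Weights are random placeholders, not trained parameters.
    """
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads
    head_outputs = []
    for _ in range(num_heads):
        # Each head independently projects the input and computes
        # its own attention weights (scaled dot-product, assumed form)
        Wq = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
        Wk = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
        Wv = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        weights = softmax(q @ k.T / np.sqrt(d_head))  # (seq_len, seq_len)
        head_outputs.append(weights @ v)              # (seq_len, d_head)
    # Concatenate the heads, then mix with a fully connected layer
    concat = np.concatenate(head_outputs, axis=-1)    # (seq_len, d_model)
    Wo = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    return concat @ Wo

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 64))  # e.g. a 128-timestep sensor window
out = multi_head_attention(x, num_heads=4, rng=rng)
print(out.shape)
```

Because each head attends over the full sequence, distant timesteps can influence one another directly, which is how this design relaxes the limited receptive field of plain convolutions.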