Topics
Computer science, Footprint, Artificial intelligence, Memory footprint, Inference, Edge device, Architecture, Domain (mathematics), Deep learning, Encoder, Drone, Action recognition, Action (physics), Enhanced Data Rates for GSM Evolution (EDGE), Machine learning, Resource (disambiguation), Computer network, Cloud computing, Art, Paleontology, Genetics, Physics, Mathematics, Quantum mechanics, Pure mathematics, Visual arts, Biology, Class (philosophy), Operating system
Authors
Mohammed El Amine Mokhtari, Elias Ennadifi, Matei Mancaş, Bernard Gosselin
Identifier
DOI: 10.1145/3611659.3617205
Abstract
We present ActioNet, a groundbreaking lightweight neural network architecture optimized for action recognition tasks, particularly in resource-constrained environments such as drones and edge devices. Utilizing a strategically modified 3D U-Net encoder followed by fully connected layers for fine-grained classification, ActioNet achieves a validation accuracy of 72%. This is accomplished with a notably compact model size of just 46 MB, making it well suited to devices with limited computational capabilities. Although ActioNet may not surpass state-of-the-art models in terms of sheer accuracy, it distinguishes itself through its fast inference times and small footprint. These attributes make real-time action recognition not only feasible but also efficient in constrained operational settings. We argue that ActioNet serves as a meaningful contribution to the emerging field of efficient deep learning and provides a solid foundation for future advancements in lightweight action recognition models.
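For a concrete picture of the kind of model the abstract describes, the sketch below is a minimal, hypothetical PyTorch implementation of a small 3D convolutional encoder (U-Net-encoder style, downsampling path only) followed by fully connected classification layers. The class name TinyActionNet, the four-stage depth, the layer widths, the dropout rate, and the input clip shape are illustrative assumptions chosen for this sketch; they are not the configuration of the authors' ActioNet.

```python
# Minimal sketch (assumptions, not the published ActioNet): a 3D-CNN encoder that
# progressively downsamples a video clip, followed by a fully connected head.
import torch
import torch.nn as nn

class TinyActionNet(nn.Module):  # hypothetical name for illustration only
    def __init__(self, num_classes: int = 101, in_channels: int = 3):
        super().__init__()

        def stage(c_in: int, c_out: int) -> nn.Sequential:
            # One encoder stage: Conv3d -> BatchNorm -> ReLU -> 2x downsampling
            return nn.Sequential(
                nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm3d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(kernel_size=2, stride=2),
            )

        # Encoder: halve temporal/spatial resolution at each stage, widen channels
        self.encoder = nn.Sequential(
            stage(in_channels, 16),
            stage(16, 32),
            stage(32, 64),
            stage(64, 128),
        )
        self.pool = nn.AdaptiveAvgPool3d(1)  # collapse remaining T x H x W to 1x1x1
        self.classifier = nn.Sequential(     # fully connected classification head
            nn.Flatten(),
            nn.Linear(128, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width), e.g. (2, 3, 16, 112, 112)
        return self.classifier(self.pool(self.encoder(x)))

if __name__ == "__main__":
    model = TinyActionNet(num_classes=101)
    clip = torch.randn(2, 3, 16, 112, 112)          # two dummy 16-frame RGB clips
    print(model(clip).shape)                         # torch.Size([2, 101])
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{n_params / 1e6:.2f}M parameters")       # rough size check
```

Running the script prints the logits shape and a rough parameter count. The global average pool before the fully connected layers is what keeps the head small in this sketch, which reflects the general footprint-versus-accuracy trade-off the abstract emphasizes for edge deployment.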