Computer science
Sign language
Wearable computer
Motion (physics)
Sign (mathematics)
Cloud computing
Artificial intelligence
Speech recognition
Enhanced Data Rates for GSM Evolution (EDGE)
Deep learning
Machine learning
Embedded system
Mathematical analysis
Philosophy
Linguistics
Mathematics
Operating system
Authors
Shiv Kumar Sharma,Rinki Gupta,Alok Kumar
Identifier
DOI:10.1016/j.eswa.2024.123147
Abstract
Research on automatic translation of sign language to verbal languages has been progressively explored in recent years to assist speech- and hearing-impaired people in communicating with non-signers. In this paper, a tiny machine learning (TinyML) solution is proposed for sign language recognition using a low-cost, wearable, internet-of-things (IoT) device. A lightweight deep neural network is deployed on the edge device to interpret isolated signs from the Indian sign language using the time-series data collected from the motion sensors of the device. The scarcity of labeled training data is addressed by employing a deep transfer learning approach: the knowledge gained from data collected with the motion sensors of a different device is used to initialize the model parameters. The performance of the model is assessed in terms of classification accuracy and prediction time for different sampling rates and transfer schemes. The model achieves an average accuracy of 87.18% when all the parameters are retrained with just 4 observations of each sign recorded from the motion sensors of the proposed IoT device. The recognized sign is transmitted to a cloud platform in real time. A mobile application, SignTalk, is also developed, which wirelessly receives the predicted signs from the cloud and displays them as text. Text-to-speech conversion is also provided on SignTalk to vocalize the predicted sign for better communication.
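The transfer scheme summarized above (initialize the target-device model with parameters learned from another device's sensors, then retrain all parameters on as few as 4 observations per sign) can be sketched with a toy numpy example. The two-layer network, the 6-dimensional synthetic "sensor features," and all names here are illustrative assumptions, not the paper's actual architecture or data:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class TinyMLP:
    """Toy two-layer classifier standing in for the paper's lightweight DNN."""
    def __init__(self, d_in, d_hid, n_classes):
        self.W1 = rng.normal(0.0, 0.1, (d_in, d_hid)); self.b1 = np.zeros(d_hid)
        self.W2 = rng.normal(0.0, 0.1, (d_hid, n_classes)); self.b2 = np.zeros(n_classes)

    def forward(self, X):
        self.h = np.maximum(0.0, X @ self.W1 + self.b1)  # ReLU hidden layer
        return softmax(self.h @ self.W2 + self.b2)

    def fit(self, X, y, epochs=400, lr=0.3):
        Y = np.eye(self.W2.shape[1])[y]
        for _ in range(epochs):
            P = self.forward(X)
            dZ2 = (P - Y) / len(X)                       # softmax cross-entropy gradient
            dH = (dZ2 @ self.W2.T) * (self.h > 0)
            self.W2 -= lr * (self.h.T @ dZ2); self.b2 -= lr * dZ2.sum(0)
            self.W1 -= lr * (X.T @ dH);       self.b1 -= lr * dH.sum(0)

    def predict(self, X):
        return self.forward(X).argmax(axis=1)

def transfer_init(source, target):
    """Copy source-device weights into the target model before fine-tuning."""
    target.W1, target.b1 = source.W1.copy(), source.b1.copy()
    target.W2, target.b2 = source.W2.copy(), source.b2.copy()

def make_data(means, n_per_class, noise=0.3):
    X = np.vstack([m + noise * rng.normal(size=(n_per_class, len(m))) for m in means])
    y = np.repeat(np.arange(len(means)), n_per_class)
    return X, y

# Synthetic stand-in for motion-sensor features: 3 signs, 6 features,
# with a small domain shift between the source and target devices.
src_means = rng.normal(size=(3, 6))
tgt_means = src_means + 0.2 * rng.normal(size=(3, 6))

Xs, ys = make_data(src_means, 100)     # ample source-device data
Xt, yt = make_data(tgt_means, 4)       # only 4 observations per sign
Xtest, ytest = make_data(tgt_means, 50)

source = TinyMLP(6, 16, 3)
source.fit(Xs, ys)

target = TinyMLP(6, 16, 3)
transfer_init(source, target)          # the "retrain all parameters" scheme:
target.fit(Xt, yt, epochs=100, lr=0.1) # every weight stays trainable

acc = (target.predict(Xtest) == ytest).mean()
print(f"target-device accuracy after fine-tuning: {acc:.2f}")
```

With well-separated synthetic classes, the transferred model recovers high accuracy from just 4 target observations per class; the paper's reported 87.18% comes from this same full-retraining scheme applied to real sensor data.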