Inference
Computer science
Artificial intelligence
Latency (audio)
Task (project management)
Deep learning
Convolutional neural network
Machine learning
Deep neural network
Reliability (semiconductor)
Artificial neural network
Process (computing)
Telecommunications
Power (physics)
Physics
Management
Quantum mechanics
Economics
Operating system
Authors
Mingyue Zhao, Xing Zhang, Zezhao Meng, Xiangwang Hou
Identifier
DOI:10.1109/iwcmc55113.2022.9824945
Abstract
Recently, deep neural networks (DNNs) have been widely used in various fields. These intelligent applications, such as target recognition, are often computation-intensive and latency-sensitive. Since a single UAV's computing resources are limited, it is difficult for one UAV to complete a DNN inference task independently. Partitioning the deep neural network into multiple subtasks and distributing them to multiple UAVs for collaborative computing is a better way to finish the task. However, UAVs usually work in harsh environments, such as battlefields and disaster areas, and link interruptions or node failures during inference caused by uncertain factors may lead to failure of the inference task. Hence, the reliability of DNN inference is of high importance. In this paper, we propose a deep Q-learning-based DNN partitioning strategy that minimizes the energy consumption of collaborative DNN inference among multiple UAVs while meeting latency and reliability requirements. To validate the effectiveness of the proposed strategy, a series of experiments are conducted on four typical DNNs (i.e., AlexNet, VGG19, GoogLeNet, and ResNet). The simulation results show that the proposed strategy can effectively reduce the DNN inference cost under the given constraints.
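The abstract describes the approach only at a high level. The following is a minimal, hypothetical sketch of how a Q-learning agent could choose a partition point that splits a DNN's layers across two UAVs while penalizing latency and reliability violations. All layer profiles, energy/latency coefficients, and constraint thresholds are illustrative assumptions, not values from the paper, and the paper's actual method uses deep Q-learning over multiple UAVs rather than this simplified tabular, two-UAV setup.

```python
# Hypothetical sketch: Q-learning over DNN partition points for two UAVs.
# Every numeric value below (layer profiles, energy/latency coefficients,
# reliability figures, penalties) is an assumption for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Assumed per-layer profile of a small CNN: (compute cost, output size in MB).
layers = [(2.0, 1.5), (4.0, 1.0), (6.0, 0.8), (3.0, 0.4), (1.0, 0.1)]
N = len(layers)                 # partition point k: layers [0, k) on UAV-1, [k, N) on UAV-2

LATENCY_BUDGET = 8.0            # assumed end-to-end deadline (arbitrary units)
RELIABILITY_MIN = 0.90          # assumed minimum success probability
LINK_RELIABILITY = 0.97         # assumed probability the inter-UAV transfer succeeds

def evaluate(k):
    """Return (energy, latency, reliability) for partition point k under the assumed model."""
    c1 = sum(c for c, _ in layers[:k])            # compute handled by UAV-1
    c2 = sum(c for c, _ in layers[k:])            # compute handled by UAV-2
    tx = layers[k - 1][1] if 0 < k < N else 0.0   # intermediate data shipped between UAVs
    energy = 0.5 * c1 + 0.7 * c2 + 2.0 * tx       # assumed energy coefficients
    latency = c1 / 2.0 + tx / 1.5 + c2 / 3.0      # assumed compute speeds and link rate
    reliability = LINK_RELIABILITY if 0 < k < N else 1.0
    return energy, latency, reliability

def reward(k):
    """Negative energy, with soft penalties when latency or reliability constraints break."""
    energy, latency, reliability = evaluate(k)
    penalty = 0.0
    if latency > LATENCY_BUDGET:
        penalty += 10.0
    if reliability < RELIABILITY_MIN:
        penalty += 10.0
    return -(energy + penalty)

# Tabular Q-learning over the single decision "choose partition point k".
q = np.zeros(N + 1)
epsilon, alpha = 0.2, 0.1
for _ in range(2000):
    k = int(rng.integers(0, N + 1)) if rng.random() < epsilon else int(np.argmax(q))
    q[k] += alpha * (reward(k) - q[k])            # one-step (bandit-style) Q update

best = int(np.argmax(q))
print("chosen partition point:", best, "-> (energy, latency, reliability):", evaluate(best))
```

A full treatment would replace the table `q` with a neural Q-network whose state encodes the DNN architecture and the UAVs' channel and resource conditions, which is what "deep Q-learning" refers to in the abstract.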