Computer science
Edge device
Server
Edge computing
Partition (number theory)
Key (lock)
Artificial neural network
Distributed computing
Artificial intelligence
Enhanced Data Rates for GSM Evolution (EDGE)
Machine learning
Computer network
Cloud computing
Operating system
Mathematics
Combinatorics
Identifier
DOI: 10.1109/JIOT.2021.3127715
Abstract
With the growth of intelligent Internet of Things (IoT) applications and services, deep neural network (DNN) has become the core method to power and enable increased functionality in many smart IoT devices. However, DNN training is difficult to carry out on end devices because it requires a great deal of computational power. The conventional approach to DNN training is generally implemented on a powerful computation server; nevertheless, this approach violates privacy because it exposes the training data to curious service providers. In this article, we consider a collaborative DNN training system between a resource-constrained end device and a powerful edge server, aiming at partitioning a DNN into a front-end part running on the end device and a back-end part running on the edge server to accelerate the training process while preserving the privacy of the training data. With the key challenge being how to locate the optimal partition point to minimize the end-to-end training delay, we propose an online learning module, called learn-to-split (L2S), to adaptively learn the optimal partition point on the fly. This approach is unlike existing efforts on DNN partitioning, which rely heavily on a dedicated offline profiling stage. In particular, we design a new contextual bandit learning algorithm called LinUCB-E as the basis of L2S, which has provable theoretical learning performance and is ultralightweight for easy real-world implementation. We implement a prototype system consisting of an end device and an edge server, and experimental results demonstrate that L2S can significantly outperform state-of-the-art benchmarks in terms of reducing the end-to-end training delay and preserving privacy.
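The abstract does not spell out LinUCB-E, but the underlying idea of choosing a partition point with a contextual bandit can be illustrated with a standard LinUCB sketch. In the sketch below, the arms (candidate split layers), the context features (bandwidth and load indicators), the reward (negative measured per-iteration training delay), and the exploration weight are all illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative LinUCB-style contextual bandit for picking a DNN partition point
# online. LinUCB-E's exact design is not given in the abstract; the context
# features, reward (negative per-iteration delay), and ALPHA are assumptions.
import numpy as np


class LinUCBPartitioner:
    def __init__(self, num_partition_points: int, context_dim: int, alpha: float = 1.0):
        self.alpha = alpha                    # exploration weight (assumed value)
        self.num_arms = num_partition_points  # each arm = one candidate split layer
        # Per-arm ridge-regression statistics: A_a = I + sum(x x^T), b_a = sum(r x).
        self.A = [np.eye(context_dim) for _ in range(self.num_arms)]
        self.b = [np.zeros(context_dim) for _ in range(self.num_arms)]

    def select(self, context: np.ndarray) -> int:
        """Pick the partition point with the highest upper confidence bound."""
        scores = []
        for a in range(self.num_arms):
            A_inv = np.linalg.inv(self.A[a])
            theta = A_inv @ self.b[a]         # estimated reward model for arm a
            bonus = self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(theta @ context + bonus)
        return int(np.argmax(scores))

    def update(self, arm: int, context: np.ndarray, reward: float) -> None:
        """Update the chosen arm with the observed reward (e.g., -delay)."""
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical setup: 8 candidate split layers, 3-dim context
    # (normalized uplink bandwidth, device utilization, server utilization).
    bandit = LinUCBPartitioner(num_partition_points=8, context_dim=3)
    for step in range(200):
        ctx = rng.random(3)
        arm = bandit.select(ctx)
        # Stand-in for the measured per-iteration training delay of this split.
        delay = 1.0 + 0.1 * arm + 0.5 * ctx[0] * (7 - arm) + 0.05 * rng.standard_normal()
        bandit.update(arm, ctx, reward=-delay)
```

In the setting the abstract describes, the reward signal would simply be the delay measured for each training iteration under the chosen split, so the learner can run alongside training without any dedicated offline profiling stage.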