An adaptive DNN inference acceleration framework with end–edge–cloud collaborative computing

Computer Science, Inference, Latency, Cloud Computing, Server, Edge Computing, Computation, Edge Device, Computation Offloading, Distributed Computing, Enhanced Data Rates for GSM Evolution (EDGE), Mobile Edge Computing, Computer Network, Artificial Intelligence, Algorithm, Operating System, Telecommunications
Authors
Guozhi Liu, Fei Dai, Xiaolong Xu, Xiaodong Fu, Wanchun Dou, Neeraj Kumar, Muhammad Bilal
Source
Journal: Future Generation Computer Systems [Elsevier BV]
Volume: 140, Pages: 422-435; Citations: 63
Identifier
DOI: 10.1016/j.future.2022.10.033
Abstract

Intelligent applications based on Deep Neural Networks (DNNs) have been intensively deployed on mobile devices. Unfortunately, resource-constrained mobile devices cannot meet the stringent latency requirements of these applications because of the large amount of computation they demand. Existing cloud-assisted and edge-assisted DNN inference approaches can reduce end-to-end inference latency by offloading DNN computations to the cloud server or to edge servers, but they suffer either from unpredictable communication latency caused by long wide-area transmission of massive data or from performance degradation caused by limited computation resources. In this paper, we propose an adaptive DNN inference acceleration framework that accelerates DNN inference by fully exploiting end–edge–cloud collaborative computing. First, a latency prediction model is built to estimate the layer-wise execution latency of a DNN on different heterogeneous computing platforms; it uses neural networks to learn non-linear features related to inference latency. Second, a computation partitioning algorithm is designed to identify two optimal partitioning points, which adaptively divide DNN computations among end devices, edge servers, and the cloud server so as to minimize DNN inference latency. Finally, we conduct extensive experiments on three widely adopted DNNs. The results show that our latency prediction models improve prediction accuracy by about 72.31% on average compared with four baseline approaches, and our computation partitioning approach reduces end-to-end latency by about 20.81% on average against six baseline approaches under three wireless networks.
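
The two-point partitioning step described above can be illustrated with a short sketch. The following is a minimal, hypothetical example rather than the paper's actual algorithm: it assumes a chain-structured DNN, takes per-layer latency estimates for the device, the edge server, and the cloud (in the framework these would come from the learned latency prediction models), and exhaustively searches the two cut points that minimize end-to-end latency, including activation-transfer time over the device-edge and edge-cloud links. All function and parameter names are assumptions introduced for illustration.

```python
from typing import List, Tuple

def partition_dnn(
    dev_lat: List[float],     # predicted per-layer latency on the end device (ms)
    edge_lat: List[float],    # predicted per-layer latency on the edge server (ms)
    cloud_lat: List[float],   # predicted per-layer latency on the cloud server (ms)
    cut_size: List[float],    # cut_size[p]: data (MB) crossing a cut placed before layer p;
                              # cut_size[0] is the raw input, len(cut_size) == n + 1, cut_size[n] == 0.0
    bw_dev_edge: float,       # device-to-edge bandwidth (MB/s)
    bw_edge_cloud: float,     # edge-to-cloud bandwidth (MB/s)
) -> Tuple[int, int, float]:
    """Exhaustively search the two cut points (p1 <= p2) that minimize
    end-to-end latency: layers [0, p1) run on the device, [p1, p2) on the
    edge server, and [p2, n) on the cloud server."""
    n = len(dev_lat)
    best_p1, best_p2, best_lat = 0, 0, float("inf")
    for p1 in range(n + 1):
        for p2 in range(p1, n + 1):
            compute = (sum(dev_lat[:p1])
                       + sum(edge_lat[p1:p2])
                       + sum(cloud_lat[p2:]))
            # Data flows device -> edge -> cloud along the layer chain; a hop
            # only costs time if some computation happens beyond that cut.
            transfer = 0.0
            if p1 < n:  # some layers run off-device
                transfer += cut_size[p1] / bw_dev_edge * 1000.0
            if p2 < n:  # some layers run on the cloud
                transfer += cut_size[p2] / bw_edge_cloud * 1000.0
            total = compute + transfer
            if total < best_lat:
                best_p1, best_p2, best_lat = p1, p2, total
    return best_p1, best_p2, best_lat
```

For a DNN with n layers the search examines O(n^2) candidate pairs, which is cheap compared with inference itself; the paper's own partitioning algorithm may use a more refined formulation, so this sketch only conveys the overall idea of choosing two cut points from predicted layer-wise latencies.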