Computer Science
Deep Learning
Inference
Pipeline (software)
Artificial Intelligence
Heuristic
Computer Architecture
Computer Engineering
Parallel Computing
Embedded Systems
Operating Systems
Authors
Eunjin Jeong, Jangryul Kim, Soonhoi Ha
Source
Journal: ACM Transactions on Embedded Computing Systems
[Association for Computing Machinery]
Date: 2022-01-26
Volume/Issue: 21 (5): 1-26
Citations: 53
Abstract
As deep learning inference applications proliferate on embedded devices, such devices are increasingly equipped with neural processing units (NPUs) in addition to a multi-core CPU and a GPU; NVIDIA Jetson AGX Xavier is one example. For fast and efficient development of deep learning applications, TensorRT is provided as the SDK for high-performance inference, including an optimizer and a runtime that deliver low latency and high throughput. Like most deep learning frameworks, TensorRT assumes that inference is executed on a single processing element, GPU or NPU, but not both. In this article, we present a TensorRT-based framework that supports various optimization parameters, including multi-threading, pipelining, buffer assignment, and network duplication, to accelerate a deep learning application on an NVIDIA Jetson embedded platform with heterogeneous processors. Since the design space of allocating layers to the diverse processing elements and optimizing the other parameters is huge, we devise a parameter optimization methodology that consists of a heuristic for balancing pipeline stages among the heterogeneous processors and a fine-tuning process for the remaining parameters. On nine real-life benchmarks, we achieve 101% to 680% performance improvement and up to 55% energy reduction over the baseline inference that uses the GPU only.
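The mechanism underlying such heterogeneous mapping is TensorRT's per-layer device assignment, which lets a network be partitioned between the GPU and the DLA (the NPU on Jetson AGX Xavier). The sketch below is a minimal illustration of that public API, not the paper's framework: it assumes TensorRT 8.x Python bindings on a Jetson board, and the file name `model.onnx` and the naive half-and-half layer split are placeholders standing in for one candidate allocation in the design space that the paper's heuristic explores.

```python
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)

def build_split_engine(onnx_path: str, engine_path: str) -> None:
    """Build an engine whose first half of layers targets the DLA and the
    rest the GPU -- one candidate allocation, not an optimized one."""
    builder = trt.Builder(LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)          # DLA runs FP16 or INT8 only
    config.set_flag(trt.BuilderFlag.GPU_FALLBACK)  # DLA-incapable layers fall back
    config.DLA_core = 0                            # Xavier has two DLA cores: 0 and 1

    # Naive allocation: first half of the layers to the DLA, rest to the GPU.
    split = network.num_layers // 2
    for i in range(network.num_layers):
        layer = network.get_layer(i)
        if i < split and config.can_run_on_DLA(layer):
            config.set_device_type(layer, trt.DeviceType.DLA)
        else:
            config.set_device_type(layer, trt.DeviceType.GPU)

    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)

if __name__ == "__main__":
    # "model.onnx" is a placeholder for whichever benchmark network is used.
    build_split_engine("model.onnx", "model_split.engine")
```

Pipelining, the other axis of the paper's parameter space, would then amount to running one execution context per partition on its own thread and CUDA stream, with the inter-stage buffers sized and assigned as additional tunable parameters.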