Accelerating Sampling and Aggregation Operations in GNN Frameworks with GPU Initiated Direct Storage Accesses

Keywords: Computer science · Parallel computing · Sampling (signal processing) · Computer data storage · Databases · Computer hardware · Filter (signal processing) · Computer vision
Authors
Jeongmin Park, Vikram Sharma Mailthody, Zaid Qureshi, Wen-mei Hwu
Source
Journal: Proceedings of the VLDB Endowment [Association for Computing Machinery]
Volume/Issue: 17 (6): 1227-1240 · Cited by: 1
Identifier
DOI: 10.14778/3648160.3648166
Abstract

Graph Neural Networks (GNNs) are emerging as a powerful tool for learning from graph-structured data and performing sophisticated inference tasks in various application domains. Although GNNs have been shown to be effective on modest-sized graphs, training them on large-scale graphs remains a significant challenge due to the lack of efficient storage access and caching methods for graph data. Existing frameworks for training GNNs use CPUs for graph sampling and feature aggregation, while the training and updating of model weights are executed on GPUs. However, our in-depth profiling shows that CPUs cannot achieve the graph sampling and feature aggregation throughput required to keep up with GPUs. Furthermore, when the graph and its embeddings do not fit in CPU memory, the overhead introduced by the operating system, e.g., for handling page faults, causes gross under-utilization of hardware and prolonged end-to-end execution time. To address these issues, we propose the GPU Initiated Direct Storage Access (GIDS) dataloader to enable GPU-oriented GNN training for large-scale graphs while efficiently utilizing all hardware resources, such as CPU memory, storage, and GPU memory. The GIDS dataloader first addresses memory capacity constraints by enabling GPU threads to directly fetch feature vectors from storage. Then, we introduce a set of innovative solutions, including the dynamic storage access accumulator, constant CPU buffer, and GPU software cache with window buffering, to balance resource utilization across the entire system for improved end-to-end training throughput. Our evaluation using a single GPU on terabyte-scale GNN datasets shows that the GIDS dataloader accelerates the overall DGL GNN training pipeline by up to 582× compared to the current, state-of-the-art DGL dataloader.
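To give a feel for the "GPU software cache with window buffering" idea mentioned in the abstract, here is a toy, CPU-only Python sketch. It is not the GIDS implementation: the class name `WindowBufferedCache`, the pin-the-lookahead-window policy, and the LRU eviction details are all illustrative assumptions. The core idea it demonstrates is that feature vectors needed by the next few mini-batches are pinned in the cache so that eviction (which would otherwise trigger a slow storage read) cannot remove them.

```python
from collections import OrderedDict

class WindowBufferedCache:
    """Toy feature cache: entries needed by the next few mini-batches
    (the "window") are pinned so LRU eviction cannot remove them.
    Hypothetical sketch -- not the actual GIDS data structure."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # node id -> feature vector, LRU order
        self.pinned = set()
        self.hits = 0
        self.misses = 0

    def set_window(self, upcoming_batches):
        # Pin every node id that appears in the lookahead window.
        self.pinned = {nid for batch in upcoming_batches for nid in batch}

    def fetch(self, nid, load_from_storage):
        if nid in self.cache:
            self.hits += 1
            self.cache.move_to_end(nid)      # refresh LRU position
            return self.cache[nid]
        self.misses += 1
        feat = load_from_storage(nid)        # stands in for a direct SSD read
        self._insert(nid, feat)
        return feat

    def _insert(self, nid, feat):
        while len(self.cache) >= self.capacity:
            # Evict the least-recently-used *unpinned* entry.
            victim = next((k for k in self.cache if k not in self.pinned), None)
            if victim is None:               # everything pinned: skip caching
                return
            del self.cache[victim]
        self.cache[nid] = feat

# Demo with fake storage and three overlapping mini-batches.
cache = WindowBufferedCache(capacity=4)
storage = lambda nid: [float(nid)] * 8       # fake 8-dim feature vector
batches = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
cache.set_window(batches[:2])                # pin ids for the next two batches
for batch in batches:
    for nid in batch:
        cache.fetch(nid, storage)
print(cache.hits, cache.misses)              # overlapping ids hit the cache
```

In the real system the cache would live in GPU memory and misses would be served by GPU-initiated NVMe reads; the sketch only captures the pinning policy that keeps soon-to-be-reused features resident.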
