Acceleration
Computer science
Pruning
Graph
Architecture
Algorithm
Artificial neural network
Parallel computing
Theoretical computer science
Artificial intelligence
Art
Agronomy
Visual arts
Biology
Authors
Cen Chen, Kenli Li, Xiaofeng Zou, Yangfan Li
Identifier
DOI:10.1109/dac18074.2021.9586298
Abstract
Recently, graph neural networks (GNNs) have achieved great success in graph representation learning tasks. Motivated by the observation that GNN message passing contains substantial redundancy, we propose DyGNN, which speeds up GNNs by eliminating these redundancies through an algorithm-architecture co-design. The proposed algorithm dynamically prunes vertices and edges during execution without accuracy loss, and the accompanying architecture is designed to support this dynamic pruning and translate it into performance gains. DyGNN opens a new direction for accelerating GNNs via vertex and edge pruning, achieving an average $2\times$ speedup with a 4% accuracy improvement over state-of-the-art GNN accelerators.
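The abstract describes pruning vertices and edges on the fly while message passing executes. A minimal sketch of that general idea is below; it is not the paper's actual algorithm, and all names, the mean-aggregation update, and the convergence threshold `eps` are illustrative assumptions: vertices whose features stop changing are frozen and skipped in later propagation rounds.

```python
# Hypothetical illustration of redundancy-aware message passing:
# vertices whose features barely change between iterations are
# "pruned" (frozen) and excluded from further propagation.
# Not the DyGNN algorithm itself; names and eps are assumptions.

def propagate(adj, feats, num_iters=3, eps=1e-3):
    """adj: dict vertex -> list of neighbor vertices;
    feats: dict vertex -> scalar feature."""
    active = set(adj)                      # vertices still participating
    for _ in range(num_iters):
        new_feats = dict(feats)            # synchronous update
        for v in list(active):
            nbrs = adj[v]
            if not nbrs:
                active.discard(v)          # isolated vertex: nothing to do
                continue
            # mean-aggregate neighbor features (stand-in for a GNN layer)
            agg = sum(feats[u] for u in nbrs) / len(nbrs)
            new_feats[v] = 0.5 * feats[v] + 0.5 * agg
            # dynamic pruning: freeze vertices that have converged
            if abs(new_feats[v] - feats[v]) < eps:
                active.discard(v)
        feats = new_feats
        if not active:                     # everything pruned: stop early
            break
    return feats
```

Pruned vertices keep their last feature value and remain readable by their neighbors, so the computation they would have repeated is skipped rather than lost, which is the source of the speedup the abstract alludes to.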