
Neural Network Compression Based on Tensor Ring Decomposition

Keywords: Computational complexity theory; Rank (graph theory); Compression (physics); Factorization; Tensor (intrinsic definition); Artificial neural network; Matrix decomposition; Algorithm; Prime number (order theory); Theoretical computer science; Mathematics; Computer science; Artificial intelligence; Pure mathematics; Materials science; Composite material; Eigenvector; Physics; Combinatorics; Quantum mechanics
Authors
Kun Xie, Can Liu, Xin Wang, Xiaocan Li, Gaogang Xie, Jigang Wen, Kenli Li
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems (Institute of Electrical and Electronics Engineers)
Volume/Issue: 36 (3): 5388-5402
Identifier
DOI: 10.1109/TNNLS.2024.3383392
Abstract

Deep neural networks (DNNs) have made great breakthroughs and seen applications in many domains. However, the accuracy of DNNs comes at the cost of considerable memory consumption and high computational complexity, which restricts their deployment on conventional desktops and portable devices. To address this issue, low-rank factorization, which decomposes the neural network parameters into smaller matrices or tensors, has emerged as a promising technique for network compression. In this article, we propose leveraging the emerging tensor ring (TR) factorization to compress the neural network. We investigate the impact of both parameter tensor reshaping and TR decomposition (TRD) on the total number of compressed parameters. To achieve the maximal parameter compression, we propose an algorithm based on prime factorization that simultaneously identifies the optimal tensor reshaping and TRD. In addition, we discover that different execution orders of the core tensors result in varying computational complexities. To identify the optimal execution order, we construct a novel tree structure. Based on this structure, we propose a top-to-bottom splitting algorithm to schedule the execution of core tensors, thereby minimizing computational complexity. We have performed extensive experiments using three kinds of neural networks on three different datasets. The experimental results demonstrate that, compared with three state-of-the-art algorithms for low-rank factorization, our algorithm achieves better performance with much lower memory consumption and lower computational complexity.
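The abstract's central accounting can be sketched without the paper's actual algorithms: a TR decomposition stores an order-d tensor of shape (n_1, ..., n_d) as d core tensors of shape (r, n_k, r), so the parameter count depends on how the weight matrix is reshaped and which rank is chosen. The Python sketch below uses an illustrative 1024x1024 layer, an assumed reshaping 2^20 = 16^5, and an assumed uniform rank r = 8; none of these values come from the paper, whose prime-factorization algorithm searches over reshapings and ranks rather than fixing them.

```python
from math import prod

def tr_param_count(dims, rank):
    """Parameters needed to store TR cores G_k of shape (rank, n_k, rank)."""
    return sum(rank * n * rank for n in dims)

# Hypothetical 1024 x 1024 fully connected layer: ~1.05M parameters.
full = 1024 * 1024

# Reshape the flat weight matrix into an order-5 tensor. The factorization
# 2^20 = 16^5 is one choice among many; the paper's prime-factorization
# algorithm searches for the reshaping that maximizes compression.
dims = [16] * 5
assert prod(dims) == full

r = 8  # uniform TR rank, chosen here purely for illustration
compressed = tr_param_count(dims, r)
print(f"{full} -> {compressed} params ({full / compressed:.0f}x smaller)")
# 1048576 -> 5120 params (205x smaller)
```

The same cores can be contracted back into the full tensor in many orders, and each order has a different floating-point cost, which is the scheduling problem the paper's tree-based splitting algorithm addresses.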