Linear Recursive Feature Machines provably recover low-rank matrices

Authors
Adityanarayanan Radhakrishnan, Mikhail Belkin, Dmitriy Drusvyatskiy
Source
Journal: Proceedings of the National Academy of Sciences of the United States of America [National Academy of Sciences]
Volume/Issue: 122 (13)
Identifier
DOI: 10.1073/pnas.2411325122
Abstract

A fundamental problem in machine learning is to understand how neural networks make accurate predictions while seemingly bypassing the curse of dimensionality. A possible explanation is that common training algorithms for neural networks implicitly perform dimensionality reduction—a process called feature learning. Recent work [A. Radhakrishnan, D. Beaglehole, P. Pandit, M. Belkin, Science 383, 1461–1467 (2024)] posited that the effects of feature learning can be elicited from a classical statistical estimator called the average gradient outer product (AGOP). The authors proposed Recursive Feature Machines (RFMs) as an algorithm that explicitly performs feature learning by alternating between 1) reweighting the feature vectors by the AGOP and 2) learning the prediction function in the transformed space. In this work, we develop theoretical guarantees for how RFM performs dimensionality reduction by focusing on the class of overparameterized problems arising in sparse linear regression and low-rank matrix recovery. Specifically, we show that RFM restricted to linear models (lin-RFM) reduces to a variant of the well-studied Iteratively Reweighted Least Squares (IRLS) algorithm. Furthermore, our results connect feature learning in neural networks and classical sparse recovery algorithms and shed light on how neural networks recover low-rank structure from data. In addition, we provide an implementation of lin-RFM that scales to matrices with millions of missing entries. Our implementation is faster than standard IRLS algorithms since it avoids forming singular value decompositions. It also outperforms deep linear networks for sparse linear regression and low-rank matrix completion.
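To make the alternation described in the abstract concrete, below is a minimal NumPy sketch of the diagonal (sparse-linear-regression) instantiation of lin-RFM, in the noiseless interpolation regime. It follows the two steps the abstract names: fit the minimum-norm linear interpolant under the current reweighting, then reweight by the AGOP of the fitted model, which for a linear predictor f(x) = wᵀx is the rank-one matrix wwᵀ (diagonal: w²). The power `alpha`, the smoothing constant `eps`, the iteration count, and all function names are illustrative assumptions, not the paper's exact specification; in particular, the paper's scalable implementation avoids SVD-type factorizations, which this naive sketch does not attempt.

```python
import numpy as np

def lin_rfm(X, y, alpha=0.5, n_iters=50, eps=1e-8):
    """Illustrative sketch of lin-RFM (diagonal variant) for sparse regression.

    Alternates between
      1) the minimum-norm interpolant of X w = y in the metric induced by
         the diagonal reweighting matrix M = diag(m), and
      2) updating m from the diagonal of the AGOP of the fitted model:
         for f(x) = w^T x the AGOP is w w^T, whose diagonal is w ** 2.
    With alpha = 1/2 the reweighting m_i ~ |w_i| matches the classic IRLS
    scheme for l1-style sparse recovery, consistent with the abstract's
    statement that lin-RFM reduces to an IRLS variant.
    """
    n, d = X.shape
    m = np.ones(d)  # initial reweighting: plain minimum-norm least squares
    w = np.zeros(d)
    for _ in range(n_iters):
        # Step 1: w = M X^T (X M X^T)^+ y, minimizing ||M^{-1/2} w|| s.t. Xw = y.
        XM = X * m  # equals X @ diag(m)
        w = m * (X.T @ (np.linalg.pinv(XM @ X.T) @ y))
        # Step 2: AGOP reweighting; eps keeps small coordinates from freezing.
        m = (w**2 + eps) ** alpha
    return w

if __name__ == "__main__":
    # Hypothetical noiseless sparse-recovery instance: n << d, k-sparse target.
    rng = np.random.default_rng(0)
    n, d, k = 60, 200, 5
    w_star = np.zeros(d)
    w_star[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)
    X = rng.standard_normal((n, d))
    y = X @ w_star
    w_hat = lin_rfm(X, y)
    print("relative error:", np.linalg.norm(w_hat - w_star) / np.linalg.norm(w_star))
```

The eps-smoothed update is the usual device for stabilizing IRLS-type iterations near zero coordinates; the full-matrix (non-diagonal) version of the reweighting is what the low-rank matrix recovery results concern, and is not reproduced in this sketch.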