Neuromorphic engineering
Materials science
Photonics
Acceleration
Computer architecture
Nanotechnology
Computer science
Artificial neural network
Artificial intelligence
Optoelectronics
Physics
Classical mechanics
Authors
Gaofei Wang, Junyan Che, Chen Gao, Han Zhou, Jiabin Shen, Zengguang Cheng, Peng Zhou
Identifier
DOI: 10.1002/adma.202508029
Abstract
Deep learning stands as a cornerstone of modern artificial intelligence (AI), revolutionizing fields from computer vision to large language models (LLMs). However, as electronic hardware approaches fundamental physical limits, constrained by transistor scaling challenges, the von Neumann architecture, and thermal dissipation, critical bottlenecks emerge in computational density and energy efficiency. To bridge the gap between algorithmic ambition and hardware limitations, photonic neuromorphic computing emerges as a transformative candidate, exploiting light's inherent parallelism, sub-nanosecond latency, and near-zero thermal losses to natively execute matrix operations, the computational backbone of neural networks. Photonic neural networks (PNNs) have achieved influential milestones in AI acceleration, demonstrating single-chip integration of both inference and in situ training, a leap forward with profound implications for next-generation computing. This review synthesizes a decade of progress in PNN core components, critically analyzing advances in linear synaptic devices, nonlinear neuron devices, and network architectures, and summarizing their respective strengths and persistent challenges. Furthermore, application-specific requirements are systematically analyzed for PNN deployment across computational regimes: cloud-scale and edge/client-side AI. Finally, actionable pathways are outlined for overcoming material- and system-level barriers, emphasizing topology-optimized active/passive devices and advanced packaging strategies. These multidisciplinary advances position PNNs as a paradigm-shifting platform for post-Moore AI hardware.
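For orientation, the sketch below (not from the review; layer sizes, names, and the ReLU choice are illustrative assumptions) shows a single neural-network layer, y = f(Wx + b). The matrix-vector product Wx is the linear operation that photonic hardware, e.g., linear synaptic devices, would carry out in the optical domain, while the activation f corresponds to the role of nonlinear neuron devices.

```python
import numpy as np

# Hypothetical layer sizes, chosen only for illustration.
n_in, n_out = 8, 4
rng = np.random.default_rng(0)

W = rng.standard_normal((n_out, n_in))  # linear weights: the role of linear synaptic devices
b = rng.standard_normal(n_out)          # bias term
x = rng.standard_normal(n_in)           # input activations (signal amplitudes in a PNN)

# The matrix-vector product is the "computational backbone" the abstract refers to;
# a photonic mesh would evaluate W @ x optically rather than digitally.
z = W @ x + b

# Nonlinear activation: the role of nonlinear neuron devices (ReLU is just a familiar stand-in).
y = np.maximum(z, 0.0)

print(y)
```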