Recurrent neural network
Computer science
Stacking
Sequence
Artificial intelligence
Layer
Computation
STAR
Gating
Magnitude
Deep learning
Algorithm
Pattern recognition
Artificial neural network
Authors
Mehmet Özgür Türkoglu, Stefano D’Aronco, Jan Dirk Wegner, Konrad Schindler
Identifier
DOI:10.1109/tpami.2021.3064878
Abstract
We propose a new STAckable Recurrent cell (STAR) for recurrent neural networks (RNNs), which has fewer parameters than the widely used LSTM and GRU cells while being more robust against vanishing or exploding gradients. Stacking recurrent units into deep architectures suffers from two major limitations: (i) many recurrent cells (e.g., LSTMs) are costly in terms of parameters and computational resources; and (ii) deep RNNs are prone to vanishing or exploding gradients during training. We investigate the training of multi-layer RNNs and examine the magnitude of the gradients as they propagate through the network in the "vertical" direction. We show that, depending on the structure of the basic recurrent unit, the gradients are systematically attenuated or amplified. Based on our analysis, we design a new type of gated cell that better preserves gradient magnitude. We validate our design on a large number of sequence-modelling tasks and demonstrate that the proposed STAR cell allows one to build and train deeper recurrent architectures, ultimately leading to improved performance while being computationally more efficient.
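To make the idea concrete, below is a minimal sketch of a single-gate recurrent cell in the spirit the abstract describes: one candidate transform and one gate (hence fewer parameters than an LSTM's four or a GRU's three gate blocks), with the new state formed as a gated, bounded blend of the previous state and the candidate. The class name `StarLikeCell`, the choice of conditioning the candidate on the input only, and all hyperparameters are illustrative assumptions, not the verbatim STAR equations from the paper.

```python
import torch
import torch.nn as nn


class StarLikeCell(nn.Module):
    """Illustrative single-gate recurrent cell (assumption, not the exact STAR formulation)."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # Candidate state computed from the input only (assumption).
        self.candidate = nn.Linear(input_size, hidden_size)
        # A single gate conditioned on the input and the previous hidden state.
        self.gate = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x: torch.Tensor, h_prev: torch.Tensor) -> torch.Tensor:
        z = torch.tanh(self.candidate(x))                          # candidate state
        k = torch.sigmoid(self.gate(torch.cat([x, h_prev], -1)))   # gate in (0, 1)
        # Convex combination of old state and candidate, squashed by tanh.
        # The gating keeps the state update close to an identity map, which is
        # the kind of behaviour the abstract argues preserves gradient magnitude
        # when many such layers are stacked.
        return torch.tanh((1.0 - k) * h_prev + k * z)


if __name__ == "__main__":
    # Usage sketch: stack cells "vertically" and unroll over time.
    batch, steps, d_in, d_hid, depth = 8, 20, 32, 64, 4
    x = torch.randn(batch, steps, d_in)
    cells = nn.ModuleList(
        [StarLikeCell(d_in if layer == 0 else d_hid, d_hid) for layer in range(depth)]
    )
    h = [torch.zeros(batch, d_hid) for _ in range(depth)]
    for t in range(steps):
        inp = x[:, t]
        for layer, cell in enumerate(cells):
            h[layer] = cell(inp, h[layer])
            inp = h[layer]  # output of layer l feeds layer l+1
    print(h[-1].shape)  # torch.Size([8, 64])
```

The gradients flowing "vertically" through such a stack pass through the gated blend rather than through several multiplicative gates per layer, which is the property the paper analyses; the exact update rule and its gradient analysis are given in the publication itself.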