Residual Neural Network
Computer Science
Deep Learning
Residual
Pooling
Network Architecture
Abstraction
Artificial Neural Network
Artificial Intelligence
Convolution (Computer Science)
Gradient Descent
Algorithm
Computer Network
Epistemology
Philosophy
Authors
Piyush Nagpal, Shivani Atul Bhinge, Ajitkumar Shitole
Identifier
DOI: 10.1109/smartgencon56628.2022.10083966
Abstract
Neural networks today are becoming increasingly deep, growing from a few layers to more than 100. The principal advantage of a very deep neural network is that it can represent highly complex functions, learning features at different levels of abstraction, from low-level edges and boundaries to high-level composite structures. However, very deep networks are not always easy to train. A major barrier is the vanishing-gradient problem: in very deep networks the gradient signal often shrinks toward zero, making gradient descent prohibitively slow. Deep residual networks closely resemble plain networks built by stacking convolution, pooling, activation, and fully connected layers; the only addition that turns such a plain network into a residual network is the identity (skip) connection between layers. Depending on the depth of the network, different ResNet variants can be built, such as ResNet-50 or ResNet-152, where the number at the end indicates the number of layers, i.e., the depth of the network. ResNets of any depth can be assembled from the same basic building blocks. In this article, we demonstrate residual networks with depths between 34 and 152 layers and compare the architectures by training them on the same dataset.
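To make the identity (skip) connection described in the abstract concrete, the following is a minimal PyTorch sketch of a basic residual block; the class name, channel sizes, and layer arrangement are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    # Illustrative residual block: two 3x3 convolutions plus an identity
    # (skip) connection. Names and sizes are assumptions, not the paper's code.

    def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

        # If the block changes the spatial size or channel count, project the
        # input with a 1x1 convolution so it can be added to the block output;
        # otherwise pass it through unchanged (the identity link).
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )
        else:
            self.shortcut = nn.Identity()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Add the (possibly projected) input back in; this skip path is what
        # keeps the gradient signal from vanishing in very deep stacks.
        out = out + self.shortcut(x)
        return self.relu(out)

# Example: a block that keeps the spatial size and channel count unchanged.
block = BasicResidualBlock(64, 64)
y = block(torch.randn(1, 64, 32, 32))  # output shape: (1, 64, 32, 32)

Stacking such blocks, with occasional striding and channel widening, is how deeper variants such as ResNet-34, ResNet-50, or ResNet-152 are assembled from the same building block.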