Keywords
Residual neural network, Deep learning, Machine learning, Pattern recognition, Semantic segmentation, Computer science, Artificial intelligence, Algorithm
Authors
Zifeng Wu,Chunhua Shen,Anton van den Hengel
Identifier
DOI:10.1016/j.patcog.2019.01.006
Abstract
The trend towards increasingly deep neural networks has been driven by a general observation that increasing depth increases the performance of a network. Recently, however, evidence has been amassing that simply increasing depth may not be the best way to increase performance, particularly given other limitations. Investigations into deep residual networks have also suggested that they may not in fact be operating as a single deep network, but rather as an ensemble of many relatively shallow networks. We examine these issues, and in doing so arrive at a new interpretation of the unravelled view of deep residual networks which explains some of the behaviours that have been observed experimentally. As a result, we are able to derive a new, shallower, architecture of residual networks which significantly outperforms much deeper models such as ResNet-200 on the ImageNet classification dataset. We also show that this performance is transferable to other problem domains by developing a semantic segmentation approach which outperforms the state-of-the-art by a remarkable margin on datasets including PASCAL VOC, PASCAL Context, and Cityscapes. The architecture that we propose thus outperforms its comparators, including very deep ResNets, and yet is more efficient in memory use and sometimes also in training time. The code and models are available at https://github.com/itijyou/ademxapp
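The abstract's "unravelled view" treats a stack of residual blocks, each computing y = x + F(x), as an implicit ensemble of many short paths rather than one deep computation. A minimal numpy sketch of this idea (not the paper's actual architecture; the block shape, weights, and ReLU transform here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, w):
    # Identity shortcut plus a simple learned transform F(x) = ReLU(W x).
    # (Illustrative F; the paper's blocks use convolutions.)
    return x + np.maximum(w @ x, 0.0)

def forward(x, weights):
    # Stacking n blocks "unravels" into 2^n paths through the network,
    # since each block either applies F or passes x through the shortcut.
    for w in weights:
        x = residual_block(x, w)
    return x

d = 4
weights = [rng.normal(scale=0.1, size=(d, d)) for _ in range(3)]
x = rng.normal(size=d)
y = forward(x, weights)

# Dropping one block removes only the paths routed through it; the
# remaining paths are untouched, which is the ensemble-of-shallow-networks
# reading the abstract refers to.
y_dropped = forward(x, weights[:1] + weights[2:])
print("output:", y)
print("change after dropping one block:", np.linalg.norm(y - y_dropped))
```

This sketch only illustrates why removing single blocks perturbs a residual network gracefully; the paper's contribution is a reinterpretation of this view that motivates shallower, wider architectures.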