Expectation-Maximization Attention Networks for Semantic Segmentation
Authors
Xia Li, Zhisheng Zhong, Jianlong Wu, Yibo Yang, Zhouchen Lin, Hong Liu
Source
Journal: Cornell University - arXiv
Date: 2019-01-01
Citations: 28
Identifier
DOI: 10.48550/arxiv.1907.13426
Abstract
Self-attention mechanisms have been widely used for various tasks. They are designed to compute the representation of each position as a weighted sum of the features at all positions, and can therefore capture long-range relations for computer vision tasks. However, self-attention is computationally expensive, since the attention map at each position is computed with respect to all other positions. In this paper, we formulate the attention mechanism in an expectation-maximization manner and iteratively estimate a much more compact set of bases upon which the attention maps are computed. Through a weighted summation over these bases, the resulting representation is low-rank and discards noisy information from the input. The proposed Expectation-Maximization Attention (EMA) module is robust to the variance of the input and is also friendly in memory and computation. Moreover, we introduce bases maintenance and normalization methods to stabilize its training procedure. We conduct extensive experiments on popular semantic segmentation benchmarks, including PASCAL VOC, PASCAL Context, and COCO Stuff, on which we set new records.
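The abstract compresses the mechanism into one E-step/M-step loop: attention maps (responsibilities) are computed against K learned bases rather than all N positions, the bases are re-estimated as responsibility-weighted means of the features, and the output is the low-rank reconstruction from those bases. Below is a minimal PyTorch-style sketch of that loop; the function name em_attention, the tensor shapes, and the three-iteration default are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def em_attention(x, mu, iters=3):
    """Sketch of Expectation-Maximization Attention (EMA).

    x:  (B, N, C) flattened feature map, N = H*W positions, C channels.
    mu: (B, K, C) compact set of K bases, with K << N.
    Returns the low-rank reconstruction of x and the updated bases.
    """
    z = None
    for _ in range(iters):
        # E-step: attention maps (responsibilities) are computed against
        # the K bases only, not against all N positions.
        z = F.softmax(x @ mu.transpose(1, 2), dim=2)     # (B, N, K)
        # M-step: re-estimate each basis as the responsibility-weighted
        # mean of the input features.
        z_hat = z / (z.sum(dim=1, keepdim=True) + 1e-6)  # normalize over positions
        mu = z_hat.transpose(1, 2) @ x                   # (B, K, C)
        mu = F.normalize(mu, dim=2)                      # L2-normalize to stabilize the bases
    # Weighted summation over the bases yields a low-rank representation
    # that discards noisy components of the input.
    x_hat = z @ mu                                       # (B, N, C)
    return x_hat, mu

# Illustrative shapes: 2 images, a 64x64 feature map, 512 channels, 64 bases.
x = torch.randn(2, 64 * 64, 512)
mu = F.normalize(torch.randn(2, 64, 512), dim=2)
out, bases = em_attention(x, mu)
print(out.shape)  # torch.Size([2, 4096, 512])
```

Because K << N, each iteration costs O(NKC) rather than the O(N^2 C) of full self-attention, which is the memory and computation saving the abstract claims.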