Anomaly detection
Computer science
Artificial intelligence
Outlier
Segmentation
Regularization
Machine learning
Feature
Deep learning
Anomaly
Early stopping
Pattern recognition
Transfer learning
Feature engineering
Feature extraction
Artificial neural network
Physics
Philosophy
Linguistics
Condensed matter physics
Authors
Tal Reiss, Niv Cohen, Liron Bergman, Yedid Hoshen
Identifier
DOI:10.1109/cvpr46437.2021.00283
Abstract
Anomaly detection methods require high-quality features. In recent years, the anomaly detection community has attempted to obtain better features using advances in deep self-supervised feature learning. Surprisingly, a very promising direction, using pre-trained deep features, has been mostly overlooked. In this paper, we first empirically establish the perhaps expected, but unreported, result that combining pre-trained features with simple anomaly detection and segmentation methods convincingly outperforms much more complex state-of-the-art methods. In order to obtain further performance gains in anomaly detection, we adapt pre-trained features to the target distribution. Although transfer learning methods are well established in multi-class classification problems, the one-class classification (OCC) setting is not as well explored. It turns out that naive adaptation methods, which typically work well in supervised learning, often result in catastrophic collapse (feature deterioration) and reduce performance in OCC settings. A popular OCC method, DeepSVDD, advocates using specialized architectures, but this limits the adaptation performance gain. We propose two methods for combating collapse: i) a variant of early stopping that dynamically learns the stopping iteration, and ii) elastic regularization inspired by continual learning. Our method, PANDA, outperforms the state-of-the-art in the OCC, outlier exposure, and anomaly segmentation settings by large margins.
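The "simple anomaly detection method on pre-trained features" that the abstract says already beats more complex approaches can be illustrated with a short sketch: extract features from a frozen ImageNet-pretrained network and score each test image by its kNN distance to the normal training features. This is a minimal sketch, not the authors' released code; the ResNet-152 backbone, the torchvision weights API, and k=2 are illustrative assumptions.

```python
# Minimal sketch: kNN anomaly scoring on frozen pretrained features.
# Backbone choice, preprocessing, and k are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone with the classification head removed.
backbone = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(paths):
    """Extract L2-normalized pretrained features for a list of image paths."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    feats = backbone(batch)
    return torch.nn.functional.normalize(feats, dim=1)

def knn_scores(train_feats, test_feats, k=2):
    """Anomaly score = mean distance to the k nearest normal training features."""
    dists = torch.cdist(test_feats, train_feats)        # (n_test, n_train)
    nearest = dists.topk(k, dim=1, largest=False).values  # k smallest distances
    return nearest.mean(dim=1)                          # higher = more anomalous
```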
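The "elastic regularization inspired by continual learning" can likewise be sketched in the spirit of Elastic Weight Consolidation (EWC): penalize drift of the fine-tuned weights away from their pretrained values, weighted by an estimate of Fisher information, so adaptation cannot collapse the features. The function names, the compactness_loss placeholder, and the lam hyperparameter below are hypothetical, not the paper's exact formulation.

```python
# Sketch of an EWC-style elastic penalty against catastrophic collapse.
import torch

def fisher_diagonal(model, loss_fn, data_loader):
    """Estimate the diagonal Fisher information of each parameter."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for batch in data_loader:
        model.zero_grad()
        loss_fn(model, batch).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(data_loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, anchor_params, fisher, lam=100.0):
    """Quadratic penalty keeping weights close to their pretrained values."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - anchor_params[n]) ** 2).sum()
    return lam * penalty

# Usage sketch (compactness_loss is a hypothetical adaptation objective):
# anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
# fisher = fisher_diagonal(model, compactness_loss, train_loader)
# loss = compactness_loss(model, batch) + ewc_penalty(model, anchor, fisher)
```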