Computer science
Anomaly detection
Transformer
Pattern recognition (psychology)
Artificial intelligence
Engineering
Voltage
Electrical engineering
Authors
Ziyi Chen, Chenyao Bai, Yunlong Zhu, Xiwen Lu
Identifier
DOI:10.1109/lsp.2024.3372783
Abstract
In anomaly detection, acquiring a sufficient number and diverse range of anomaly samples is challenging due to their scarcity and unpredictability. To address this issue, this paper explores an unsupervised method that fits the distribution of positive samples during training. Previous methods adopt a reconstruction paradigm and detect anomaly positions from significant reconstruction discrepancies, since rebuilding normal inputs typically incurs only minor errors. However, because reconstruction is an unsupervised process, most vanilla generative models can also reconstruct anomaly samples with minor reconstruction errors, which hurts anomaly detection. To address this problem, we propose a new U-net pipeline that integrates a template-augmented Transformer to enlarge the reconstruction error for anomaly samples while keeping the error for normal samples small. Additionally, we employ dense contrastive learning in the pre-training phase to help the encoder extract separable representations of normal and anomaly samples. Extensive experiments on four benchmark datasets demonstrate our model's robustness and effectiveness.
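To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch of (1) scoring anomalies by per-pixel reconstruction error from an encoder-decoder and (2) a dense, per-location contrastive loss for pre-training the encoder. The network, the class and function names (TinyAutoencoder, dense_contrastive_loss, anomaly_score_map), and the noise "augmentation" are illustrative assumptions for this sketch, not the authors' template-augmented Transformer U-net.

```python
# Hypothetical sketch: reconstruction-based anomaly scoring plus a dense
# contrastive pre-training loss. A plain convolutional autoencoder stands in
# for the paper's template-augmented Transformer U-net.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyAutoencoder(nn.Module):
    """Stand-in encoder-decoder trained only on normal samples."""

    def __init__(self, channels: int = 3, width: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(width, channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        feats = self.encoder(x)      # dense feature map, B x 2*width x H/4 x W/4
        recon = self.decoder(feats)  # reconstruction, same size as the input
        return feats, recon


def anomaly_score_map(model, x):
    """Per-pixel squared reconstruction error; large values flag anomalies."""
    with torch.no_grad():
        _, recon = model(x)
    return ((x - recon) ** 2).mean(dim=1)  # B x H x W


def dense_contrastive_loss(feats_a, feats_b, temperature: float = 0.1):
    """InfoNCE over spatial locations of two augmented views.

    Each location in view A is pulled toward the same location in view B
    (positive) and pushed away from every other location (negatives).
    """
    b, c, h, w = feats_a.shape
    za = F.normalize(feats_a.flatten(2).transpose(1, 2), dim=-1)  # B x HW x C
    zb = F.normalize(feats_b.flatten(2).transpose(1, 2), dim=-1)  # B x HW x C
    logits = torch.bmm(za, zb.transpose(1, 2)) / temperature      # B x HW x HW
    targets = torch.arange(h * w, device=feats_a.device).repeat(b)
    return F.cross_entropy(logits.reshape(b * h * w, h * w), targets)


if __name__ == "__main__":
    model = TinyAutoencoder()
    x = torch.rand(2, 3, 64, 64)        # two "normal" training images
    feats, recon = model(x)
    recon_loss = F.mse_loss(recon, x)   # fit the distribution of normal samples

    # Dense contrastive pre-training on two noisy views (placeholder augmentation).
    view_a = x + 0.05 * torch.randn_like(x)
    view_b = x + 0.05 * torch.randn_like(x)
    fa, _ = model(view_a)
    fb, _ = model(view_b)
    pretrain_loss = dense_contrastive_loss(fa, fb)

    print(recon_loss.item(), pretrain_loss.item())
    print(anomaly_score_map(model, x).shape)  # torch.Size([2, 64, 64])
```

Note that a vanilla autoencoder like this stand-in is precisely the failure case the abstract describes: it may reconstruct anomalous regions almost as well as normal ones, which is what the template-augmented Transformer and the contrastive pre-training are meant to counteract.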