Denoising
Diffusion
Computer science
Distribution (mathematics)
Artificial intelligence
Pattern recognition (psychology)
Mathematics
Physics
Mathematical analysis
Thermodynamics
Authors
Mark S. Graham,Walter Hugo Lopez Pinaya,Petru-Daniel Tudosiu,Parashkev Nachev,Sébastien Ourselin,M. Jorge Cardoso
Identifier
DOI:10.1109/cvprw59228.2023.00296
Abstract
Out-of-distribution detection is crucial to the safe deployment of machine learning systems. Currently, unsupervised out-of-distribution detection is dominated by generative-based approaches that make use of estimates of the likelihood or other measurements from a generative model. Reconstruction-based methods offer an alternative approach, in which a measure of reconstruction error is used to determine if a sample is out-of-distribution. However, reconstruction-based approaches are less favoured, as they require careful tuning of the model's information bottleneck, such as the size of the latent dimension, to produce good results. In this work, we exploit the view of denoising diffusion probabilistic models (DDPM) as denoising autoencoders where the bottleneck is controlled externally, by means of the amount of noise applied. We propose to use DDPMs to reconstruct an input that has been noised to a range of noise levels, and use the resulting multi-dimensional reconstruction error to classify out-of-distribution inputs. We validate our approach both on standard computer-vision datasets and on higher-dimensional medical datasets. Our approach outperforms not only reconstruction-based methods, but also state-of-the-art generative-based approaches. Code is available at https://github.com/marksgraham/ddpm-ood.
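The recipe described in the abstract (noise an input to several levels with the forward process, reconstruct it with a trained DDPM, and collect the per-level reconstruction errors) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation from the linked repository: `denoise_fn`, `alphas_cumprod`, `timesteps`, and `reconstruction_errors` are illustrative names, and only the standard DDPM forward-noising relation x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε is taken as given.

```python
# Sketch: multi-noise-level DDPM reconstruction errors for OOD scoring.
# `denoise_fn` is a hypothetical stand-in for a trained DDPM's reverse process
# started from timestep t; it is not an API from the ddpm-ood repository.
import torch

def reconstruction_errors(x, denoise_fn, alphas_cumprod, timesteps):
    """Noise `x` to each timestep in `timesteps`, reconstruct with the DDPM,
    and return per-sample, per-level mean-squared reconstruction errors.

    x:              (B, C, H, W) tensor of inputs in the model's expected range
    denoise_fn:     callable (x_t, t) -> x0_hat, running the reverse process from t
    alphas_cumprod: (T,) tensor, cumulative product of the noise schedule's alphas
    timesteps:      iterable of ints, the noise levels to probe
    """
    errors = []
    for t in timesteps:
        a_bar = alphas_cumprod[t]
        noise = torch.randn_like(x)
        # Forward (noising) process: x_t = sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * eps
        x_t = a_bar.sqrt() * x + (1.0 - a_bar).sqrt() * noise
        x0_hat = denoise_fn(x_t, t)                       # reconstruction from level t
        err = ((x0_hat - x) ** 2).flatten(1).mean(dim=1)  # per-sample MSE
        errors.append(err)
    # (B, len(timesteps)) matrix: in-distribution samples should reconstruct well
    # across noise levels, while OOD samples should show large errors.
    return torch.stack(errors, dim=1)
```

The abstract leaves the final scoring rule open; one plausible use of the resulting error matrix is to aggregate it (or feed it to a simple classifier) to produce a single out-of-distribution score per sample.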