Explainability
Anomaly detection
Artificial intelligence
Computer science
Deep learning
Machine learning
Process (computing)
Ground truth
Pattern recognition (psychology)
Convolutional neural network
Data mining
Semiconductor device fabrication
Engineering
Operating system
Electrical engineering
Wafer
Authors
Mark Gorman, Xuemei Ding, Liam Maguire, Damien Coyle
Source
Journal: IEEE Transactions on Semiconductor Manufacturing
[Institute of Electrical and Electronics Engineers]
Date: 2023-02-01
Volume/Issue: 36 (1): 147-150
Identifiers
DOI: 10.1109/tsm.2022.3216032
Abstract
Multivariate batch time-series datasets in semiconductor manufacturing processes present a difficult environment for effective Anomaly Detection (AD). The challenge is amplified by the limited availability of ground-truth labelled data, and even where AD is possible, black-box modelling approaches constrain model interpretability. These challenges obstruct the widespread adoption of deep learning solutions. The objective of this study is to demonstrate an AD approach that employs 1-Dimensional Convolutional AutoEncoders (1d-CAE) and Localised Reconstruction Error (LRE) to improve both AD performance and interpretability. By using LRE to identify the sensors and data that give rise to an anomaly, the explainability of the deep learning solution is enhanced. The Tennessee Eastman Process (TEP) and LAM 9600 Metal Etcher datasets were used to validate the proposed framework. The results show that the proposed LRE approach outperforms global reconstruction error for similar model architectures, achieving an AUC of 1.00. The proposed unsupervised learning approach with AE and LRE improves model explainability, which is expected to benefit deployment in semiconductor manufacturing, where interpretable and trustworthy results are critical for process engineering teams.
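The core interpretability idea in the abstract can be illustrated without the full model: instead of collapsing an autoencoder's reconstruction residual into one global score, keep a separate error per sensor so the source of the anomaly is visible. The sketch below is a minimal, hedged illustration of that localised-versus-global scoring step; the toy data, the injected fault, and the stand-in "reconstruction" (a constant nominal baseline rather than a trained 1d-CAE) are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Toy multivariate batch: T time steps from S sensors.
rng = np.random.default_rng(0)
T, S = 200, 5
x = rng.normal(0.0, 1.0, (T, S))   # nominal behaviour: zero-mean noise
x[120:140, 3] += 4.0               # inject a fault on sensor 3 (illustrative)

# Stand-in for a trained 1d-CAE: it reconstructs only the nominal
# behaviour (here, the zero baseline), so faults remain in the residual.
x_hat = np.zeros_like(x)

residual = (x - x_hat) ** 2
global_re = residual.mean()        # single global score: hides which sensor failed
lre = residual.mean(axis=0)        # Localised Reconstruction Error: one score per sensor

anomalous_sensor = int(np.argmax(lre))
print(f"global RE       = {global_re:.2f}")
print(f"per-sensor LRE  = {np.round(lre, 2)}")
print(f"flagged sensor  = {anomalous_sensor}")
```

The global score only says *that* the batch is anomalous; the per-sensor LRE additionally says *where*, which is the property the abstract argues matters for process engineering teams.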