
National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Author: 徐銘澤
Author (English): Hsu, Ming-Tse
Title: 基於持續學習孿生神經網路之跨元件瑕疵偵測
Title (English): Siamese Neural Network-based Cross-Component Continual Learning Defect Detection
Advisor: 帥宏翰
Advisor (English): Shuai, Hong-Han
Committee: 帥宏翰、林家瑜、王蒞君、張益銘
Committee (English): Shuai, Hong-Han; Lin, Chia-Yu; Wang, Li-Chun; Chang, Yi-Ming
Oral defense date: 2021-11-22
Degree: Master
Institution: National Yang Ming Chiao Tung University (國立陽明交通大學)
Department: Department of Electrical Engineering (電機工程學系)
Discipline: Engineering (工程學門)
Field: Electrical and Computer Engineering (電資工程學類)
Document type: Academic thesis
Year of publication: 2021
Graduation academic year: 110
Language: English
Pages: 41
Keywords (Chinese): 瑕疵偵測、持續學習、孿生神經網路、圖像檢索、深度學習
Keywords (English): Defect detection; Continual learning; Siamese neural network; Image retrieval; Deep learning
Statistics:
  • Cited: 0
  • Views: 370
  • Rating:
  • Downloads: 62
  • Bookmarked: 0
Abstract (Chinese, translated): With the development of deep learning, a recent line of research has aimed to strengthen the ability of AOI to detect defects, with great success. However, as production lines and components multiply, models with different architectures typically need to be retrained, which can create serious maintenance and management problems for engineers. To address this, we propose a continual learning defect detection model based on a Siamese neural network. The model perceives image details and compares differences between images to identify defective ones, and knowledge learned in a previous task can be transferred to a new defect detection task. In addition, we propose a method for generating image pairs for the Siamese neural network, image retrieval pairing, which speeds up training and achieves better results. The proposed model performs well in the continual learning scenario and, under the same training method, outperforms other defect detection models.
Abstract (English): With the advance of deep learning, a recent line of research aims to strengthen the ability of AOI to detect defects and has achieved great success. However, as the number of products and defect types increases, different models usually need to be re-trained, which can create a severe maintenance and management burden for engineers. To solve this problem, we propose a continual learning defect detection model based on the Siamese neural network. Our model perceives image details and compares the differences between images to identify defective ones. Furthermore, the knowledge learned in a previous task can be transferred to a new defect detection task. In addition, we propose a method for generating image pairs for Siamese neural networks, image retrieval pairing, which speeds up the training process and reaches better results. The proposed model achieves strong performance in the continual learning setting and, under the same training scenario, outperforms other classification models.
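The abstract's core mechanism is a Siamese comparison: a query image and a defect-free template pass through the same feature extractor, and a large feature distance flags a defect; image retrieval pairing selects which template each query is compared against. The following is a minimal pure-Python sketch of that idea only, assuming a toy fixed-weight linear "extractor" in place of the thesis's CNN backbone; the function names (`embed`, `retrieval_pair`, `is_defective`), the weights, and the threshold are hypothetical illustrations, not the authors' implementation.

```python
import math

def embed(image, weights):
    # Toy shared feature extractor: each output feature is a weighted sum of
    # pixel values. In the thesis this role is played by a CNN backbone; the
    # key Siamese property is only that BOTH images use the same weights.
    return [sum(w * p for w, p in zip(row, image)) for row in weights]

def distance(a, b):
    # Euclidean distance between two embeddings.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieval_pair(query, templates, weights):
    # Retrieval-style pairing (a loose stand-in for IRP): pair the query with
    # the template that is nearest to it in the shared feature space.
    q = embed(query, weights)
    return min(templates, key=lambda t: distance(q, embed(t, weights)))

def is_defective(query, template, weights, threshold):
    # Siamese decision: embed both images with the same extractor and flag a
    # defect when their feature distance exceeds the threshold.
    d = distance(embed(query, weights), embed(template, weights))
    return d > threshold

# Illustrative 4-pixel "images": a clean template and a query missing pixels.
W = [[1, 1, 0, 0], [0, 0, 1, 1]]
template = [1.0, 1.0, 1.0, 1.0]
print(is_defective([0.0, 0.0, 1.0, 1.0], template, W, 0.5))  # large distance
print(is_defective([1.0, 1.0, 1.0, 1.0], template, W, 0.5))  # zero distance
```

The sketch shows why weight sharing matters: because both branches use the same `weights`, any distance between embeddings reflects a difference between the images themselves, which is what lets the same comparator generalize across components.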
Abstract (Chinese)
Abstract (English)
Acknowledgements
Contents
List of Figures
List of Tables
1 Introduction
2 Related Work
2.1 Siamese Neural Network
2.2 Continual Learning
2.2.1 Regularization
2.2.2 Replay
2.2.3 Parameter Isolation
2.2.4 Expansion
3 Method
3.1 Overview
3.2 Siamese Neural Network Structure
3.3 Continual Learning
3.3.1 Expansion for Convolutional Layer
3.3.2 Expansion for Fully Connected Layer
3.3.3 Training Settings for Continual Learning
3.4 Image Retrieval Pairing
3.4.1 Pairing Method
3.4.2 Template Selection
3.4.3 Method for Imbalanced Dataset
3.5 Model Summary
4 Evaluation
4.1 Datasets
4.1.1 Electronic Component Defect Dataset
4.1.2 MVTec AD Dataset
4.2 Experiment Design
4.3 Experimental Results
4.3.1 ECD Dataset
4.3.2 MVTec AD Dataset
4.3.3 Multitask Training
4.3.4 Continual Learning Verification
4.4 Ablation Studies
4.4.1 Pairing Method Comparison
4.4.2 Backbone Comparison
4.4.3 Training Sequence
4.4.4 Ablation Study of IRP
5 Discussion
5.1 Implementation Details
5.2 Initialization for Feature Extractor
5.3 Observations
5.3.1 The Impact of IRP at the Beginning of Training
5.3.2 Knowledge Transfer of Continual Learning
5.3.3 Defective Feature Perception
6 Conclusion
References