臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record
Author: 陳怡蓁
Author (English): CHEN, YI-ZHEN
Title: 以圖像修復為基礎之自監督瑕疵檢測模型應用於印刷電路板
Title (English): Inpainting-Based Anomaly Detection via Self-Supervised Learning for Printed Circuit Boards
Advisor: 陳彥安
Advisor (English): CHEN, YAN-ANN
Committee members: 林家瑜, 簡廷因
Committee members (English): LIN, CHIA-YU; CHIEN, TING-YING
Oral defense date: 2023-09-01
Degree: Master's
Institution: 元智大學 (Yuan Ze University)
Department: Department of Computer Science and Engineering (資訊工程學系)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2023
Academic year of graduation: 112
Language: English
Number of pages: 35
Keywords (Chinese): 圖像修復, 自監督式學習
Keywords (English): Image Inpainting, Self-Supervised Learning
Statistics:
  • Cited by: 0
  • Views: 65
  • Downloads: 0
  • Bookmarked: 0
With the continual development and popularization of electronic products, the printed circuit board has become an indispensable component in the manufacturing of many electronic products, and defects in printed circuit boards directly affect the quality of the finished product. Traditional inspection methods rely mainly on manual labor, detecting printed circuit board defects by visual inspection. This increases labor costs, yields relatively low inspection efficiency, and suffers from inconsistent inspection standards. Under these circumstances, applying deep learning to printed circuit board defect detection has become indispensable.

Some current methods employ classification models to detect and distinguish defect data from normal data. These methods extract defect surface features from defect images and then learn to classify them, which requires a large amount of labeled data to train the classification model. However, printed circuit board data also suffers from scarce labeled data and diverse defect types, both of which challenge the model's ability to detect defects.

This study proposes "Inpainting-Based Anomaly Detection via Self-Supervised Learning for Printed Circuit Boards," choosing an unsupervised image inpainting model to perform defect detection. The inpainting model serves as the architecture for repairing defect images: it restores a defect image to a normal image, and defect detection is achieved through the difference between the original defect image and its repaired counterpart. To strengthen the model's ability to extract image features, we incorporate self-supervised learning, enabling the model to distinguish acceptable (pass) data from unacceptable defect data; at the same time, improving the model's inpainting ability refines the details of the repaired images and further boosts the performance of the defect detection model. With the proposed method, defects in printed circuit boards can be detected more effectively, thereby improving product quality and production efficiency.

With the continuous development and popularization of electronic products, printed circuit boards have become an indispensable component in the manufacturing of many electronic products, so their quality directly affects the overall quality of electronic product manufacturing. Traditional inspection methods rely heavily on manual visual inspection of printed circuit board defects. This approach not only entails high labor costs but also has relatively low inspection efficiency, and manual inspection is prone to inconsistent standards. Given these issues, applying deep learning to defect detection in printed circuit boards has become essential.

Currently, some methods utilize classification models to distinguish defect data from normal data. These methods extract surface features from defect images for learning-based classification, which requires a large amount of labeled data to train the classification model. Nevertheless, printed circuit board data often suffers from scarce labeled data and diverse defects, which can hinder the model's ability to detect defects effectively.

We propose "Inpainting-Based Anomaly Detection via Self-Supervised Learning for Printed Circuit Boards," opting for an unsupervised image inpainting model for defect detection. The inpainting model serves as the architecture for repairing defect images, transforming them into normal images; defects are detected by assessing the differences between the original defect images and their repaired counterparts. To enhance the model's feature extraction capabilities, we incorporate self-supervised learning, enabling the model to distinguish acceptable (pass) data from unacceptable defect data. Additionally, by bolstering the model's inpainting capability to recover finer details, we further improve the performance of our defect detection system. With the proposed approach, defects in printed circuit boards can be detected more effectively, ultimately improving product manufacturing quality and production efficiency.
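The detection principle described above — inpaint the image and score defects by the residual between the input and its reconstruction — can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: `inpaint_fn` is a hypothetical stand-in for a trained inpainting model, and the patch-wise masking loop, patch size, and mean-fill stub are all assumptions made for the sketch.

```python
import numpy as np

def anomaly_map(image, inpaint_fn, patch=8):
    """Return a per-pixel anomaly score for a grayscale image.

    Each patch is masked out in turn, the inpainting model fills it
    back in from the surrounding context, and the absolute residual
    between the original and the reconstruction is the score: normal
    regions reconstruct well (low residual), defects do not.
    """
    h, w = image.shape
    recon = np.zeros_like(image, dtype=float)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            masked = image.astype(float)
            masked[y:y + patch, x:x + patch] = 0.0   # hide one patch
            filled = inpaint_fn(masked)              # model restores it
            recon[y:y + patch, x:x + patch] = filled[y:y + patch, x:x + patch]
    return np.abs(image.astype(float) - recon)       # per-pixel residual

def mean_inpaint(masked):
    """Toy stand-in for a trained model: fill the masked (zeroed)
    pixels with the mean of the remaining visible pixels."""
    out = masked.copy()
    out[out == 0.0] = masked[masked > 0].mean()
    return out
```

On a uniform board image with a single bright defect pixel, the residual peaks at the defect, and thresholding the map yields a pass/defect decision; in the thesis's setting the stub would be replaced by the trained inpainting network.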
中文摘要
Abstract
Acknowledgements
Contents
List of Tables
List of Figures
1 Introduction
1.1 Motivation
1.2 Challenges of PCB Data
1.3 Problem Description
1.4 Goal
2 Related Works
2.1 Anomaly Detection
2.1.1 Generative Adversarial Network-based Anomaly Detection
2.1.2 Image Inpainting-based Anomaly Detection
2.2 Image Feature Semantics
3 Inpainting-Based Anomaly Detection System
3.1 Image Inpainting Model
3.2 SSL Pre-training Stage
3.3 Anomaly Detection
4 Experiment
4.1 Data Description
4.2 Experiments
4.3 Discussion
5 Conclusions
References
Electronic full text (publicly available online from 2028-12-07)