National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: ZHUANG, XIANG-SHENG (莊翔盛)
Title: A Research on Generative Adversarial Networks and Convolutional Neural Networks in Imbalanced Data Sets - Taking Two-Dimensional Codes on IC Surface as an Example
Title (Chinese): 以生成對抗網路與卷積神經網路於資料不平衡資料集之研究-以IC表面二維條碼檢測為例
Advisor: HOU, TUNG-HSU (侯東旭)
Committee members: CHEN, YI-CHUNG; LIU, SHU-CHU
Oral defense date: 2020-07-10
Degree: Master's
Institution: National Yunlin University of Science and Technology
Department: Industrial Engineering and Management
Discipline: Engineering
Academic field: Industrial Engineering
Thesis type: Academic thesis
Publication year: 2020
Graduation academic year: 108 (2019-2020)
Language: Chinese
Pages: 80
Keywords (Chinese): 二維條碼; 卷積神經網路; 密集卷積神經網路; 生成對抗網路
Keywords (English): two-dimensional code; convolutional neural network; DenseNet; generative adversarial network
Record statistics: cited by: 0; views: 79; downloads: 0; bookmarks: 0
Information such as the manufacturer, product name, lot number, manufacturing date, and a two-dimensional bar code is printed on the package surface of an IC chip. Automated optical inspection (AOI) can detect most surface-printing flaws, but defects in the two-dimensional bar code are difficult for traditional AOI to identify. This study therefore uses a convolutional neural network (CNN) to inspect the two-dimensional bar codes on IC chips: by learning features from a large number of images, the network can classify a printed code as normal or defective. In practice, however, the normal and defective samples that can be collected are usually imbalanced, and class imbalance tends to degrade model performance. This study therefore uses a generative adversarial network (GAN) to generate additional defect images before building the CNN defect-classification model.

The data set consists of DataMatrix images printed on products and captured by an IC manufacturer during inspection: 1,446 normal images and 468 defective images. The GAN used in this study is a deep convolutional generative adversarial network (DCGAN); the CNN is a densely connected convolutional network (DenseNet). Three questions are examined: whether the balance of the CNN training data affects model performance; whether the GAN can generate images that share the characteristics of the real defect images; and how adding large numbers of generated images affects the resulting CNN model. Two experiments are conducted. Experiment 1 compares a model trained on the original imbalanced data set (1,446 normal and 468 defective images) with one trained on an under-sampled balanced set (468 normal and 468 defective images). Experiment 2 uses DCGAN-generated defect images to enlarge the under-sampled balanced set by 30%, 50%, and 60% of its total size, and compares model performance across these settings.

In Experiment 1, the misclassification rate was 11.99% on the imbalanced data set and 1.52% on the balanced data set. In Experiment 2, the misclassification rate was 0.984% with a 30% increase, 7.14% with a 50% increase, and 10.33% with a 60% increase.

Abstract (Chinese)
Abstract (English)
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Research Background and Motivation
1.2 Research Objectives
1.3 Research Procedure
Chapter 2 Literature Review
2.1 Bar Codes
2.2 Artificial Neural Networks
2.2.1 Basic Architecture of Artificial Neural Networks
2.2.2 Back-Propagation Neural Networks
2.3 Convolutional Neural Networks
2.3.1 Convolutional Layers
2.3.2 Pooling Layers
2.3.3 Fully Connected Layers
2.3.4 Common Activation Functions
2.3.5 Optimization Methods
2.3.6 Pre-training for Transfer Learning
2.3.7 Batch Normalization
2.3.8 Overfitting and Regularization
2.3.9 Global Average Pooling
2.4 Convolutional Neural Network Architectures
2.4.1 Residual Learning
2.4.2 Deep Residual Network Blocks
2.4.4 Densely Connected Convolutional Networks (DenseNet)
2.4.5 DenseNet Models
2.5 Generative Adversarial Networks
2.5.1 The Generator
2.5.1.1 Autoencoders
2.5.2 The Discriminator
2.6 Generative Adversarial Network Architectures
2.6.1 Deep Convolutional Generative Adversarial Networks (DCGAN)
Chapter 3 Research Method
3.1 Experimental Combinations
3.2 Experimental Equipment
3.3 Source of the Data Matrix Images
3.4 Experimental Procedure
3.4.1 Image Preprocessing
3.4.2 Image Generation with DCGAN
3.4.3 Splitting the Data Sets
3.4.4 CNN Training
3.4.5 Model Performance Evaluation
3.5 GAN Architecture
3.6 CNN Architecture
Chapter 4 Experimental Results
4.1 Results of Experiment 1
4.1.1 Descriptive Statistics for Experiment 1
4.1.2 Misclassified Image Statistics for Experiment 1
4.1.3 Comparison of Error Rates in Experiment 1
4.2 Results of Experiment 2
4.2.1 Descriptive Statistics for Experiment 2
4.2.2 Misclassified Image Statistics for Experiment 2
4.2.3 Comparison of Error Rates in Experiment 2
4.3 Comparison of Experiments 1 and 2
Chapter 5 Conclusions and Suggestions
5.1 Conclusions
5.2 Suggestions
References
Appendices
Appendix 1 Confusion Matrices for Experiment 1
Appendix 2 Confusion Matrices for Experiment 2



Electronic full text (available online from 2022-08-23)