
臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: 王富慶
Author (English): WANG, FU-CHING
Title: 應用ResNet於輪胎氣泡缺陷檢測
Title (English): Tire Bubble Defects Detection Using ResNet
Advisor: 張傳育
Advisor (English): CHANG, CHUAN-YU
Committee Members: 葉家宏、柯建全、黃登淵、胡武誌
Committee Members (English): YEH, CHIA-HUNG; KO, CHIEN-CHUAN; HUANG, DURN-YUAN; HU, WU-CHIH
Date of Oral Defense: 2019-07-29
Degree: Master's
Institution: National Yunlin University of Science and Technology (國立雲林科技大學)
Department: Department of Computer Science and Information Engineering (資訊工程系)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2019
Graduation Academic Year: 107
Language: Chinese
Number of Pages: 57
Keywords (Chinese): 輪胎氣泡缺陷檢測、影像辨識、數位剪像術、殘差網路
Keywords (English): Tire Bubble Defects Detection; Image Recognition; Digital Shearography; Residual Network
Usage statistics:
  • Cited by: 1
  • Views: 256
  • Downloads: 7
  • Bookmarked in reading lists: 1
Abstract: Digital shearography is now widely used for inspection in many fields because it can reveal defects that cannot be observed with the naked eye, and tire bubble defects are one such application. Tire manufacturers obtain tire images through digital shearography, and on-site personnel then judge whether bubble defects are present. This judgment not only relies on the personnel's experience and observation, but may also be inconsistent because different inspectors apply different standards. This thesis proposes a bubble-defect detection method based on a residual network (ResNet). In the training phase, each tire image is divided into several blocks, and data augmentation is used to increase the number of training samples fed into the network model. In the test phase, the tire image is first pre-processed to screen for regions suspected of containing bubble defects; these suspicious regions are then input into the network model for bubble-defect classification, and the output is divided into two classes: images containing bubble defects and defect-free images. In the experiments, the bubble-defect detection rate is about 95%, and the classification accuracy on defect-free images is about 85%. This detection method helps tire manufacturers move toward automated inspection and reduce labor costs.
Abstract (English): Digital shearography is used to detect tire bubble defects that are unobservable by the naked eye. The tire manufacturer obtains tire images through digital shearography, and on-site operators then judge whether a bubble defect is present. This determination depends not only on the operators' experience and observation, but also varies from person to person because there is no uniform judgment standard. This thesis proposes a residual-network-based method to detect bubble defects. In the training phase, each tire image is divided into several blocks, and data augmentation is applied to increase the number of training samples before they are fed into the network. In the test phase, the tire image is first pre-processed to select regions suspected of containing bubble defects, and these suspicious regions are then input into the network model for bubble-defect classification. The final output falls into two categories: bubble-defect images and non-defect images. In the experimental results, the bubble-defect detection rate is about 95%, and the classification accuracy on non-defect images is about 85%. This method can help tire manufacturers move toward automated inspection and reduce labor costs.
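To make the pipeline described in the abstract concrete, the following is a minimal sketch of the two-class block classifier in PyTorch/torchvision. It is a sketch under stated assumptions, not the thesis's implementation: the ResNet-18 backbone depth, the specific augmentations, and the helper names build_model and classify_regions are illustrative assumptions, and the candidate regions are assumed to come from the FFT / Hough-transform / adaptive-threshold pre-processing covered in Chapter 2.

import torch
import torch.nn as nn
from torchvision import models, transforms

# Training-time augmentation: flips and small rotations enlarge the set of
# tire-image blocks, as the abstract describes for the training phase
# (the exact transforms are assumptions).
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
])

def build_model():
    # ResNet backbone (ResNet-18 assumed here) with the final fully connected
    # layer replaced by a 2-way head: class 0 = non-defect, class 1 = bubble defect.
    model = models.resnet18(weights=None)  # torchvision >= 0.13 API
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model

@torch.no_grad()
def classify_regions(model, regions):
    # regions: a batch of candidate blocks (N, 3, H, W) selected by the
    # pre-processing stage; returns 1 where a bubble defect is predicted.
    model.eval()
    logits = model(regions)
    return logits.argmax(dim=1)

In use, each shearography image would be cut into blocks, the suspicious blocks stacked into a batch, and classify_regions applied so that only blocks predicted as bubble defects are flagged for review.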
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
Table of Contents iv
List of Tables vi
List of Figures vii
Chapter 1 Introduction 1
1.1 Research Motivation and Objectives 1
1.2 Literature Review 2
1.2.1 Digital Shearography 2
1.2.2 Tire Bubble Defect Detection 6
1.3 Research Method 6
1.4 Thesis Outline 6
Chapter 2 Related Theory 7
2.1 Digital Shearography 7
2.2 Fast Fourier Transform (FFT) 8
2.3 Hough Transform (HT) 11
2.4 Adaptive Thresholding 14
2.5 Artificial Neural Networks 16
2.5.1 Multilayer Perceptron (MLP) 17
2.5.2 Back-Propagation Neural Network (BPN) 18
2.6 Convolutional Neural Networks (CNN) 19
2.7 Residual Network (ResNet) 22
Chapter 3 Research Method 25
3.1 System Architecture 25
3.2 Training Samples and Data Augmentation 26
3.3 Training Phase 28
3.3.1 Network Architecture 28
3.4 Testing Phase 30
Chapter 4 Experimental Results and Discussion 32
4.1 Image Data and Experimental Equipment 32
4.2 Performance Evaluation 34
4.3 Comparison of Block Sizes 34
4.4 Comparison of Network Architectures 35
4.5 Comparison of Methods 38
4.6 Experimental Results 39
4.7 Discussion of False Positives 42
4.8 Discussion of Missed Detections 43
Chapter 5 Conclusion 44
References 45