Author: Jhuo-Ya Huang (黃琢雅)
Title: Real-Time Concrete Damage Detection Based on Deep Learning Technique (基於深度學習架構之混凝土表面損壞實時辨識系統)
Advisor: Hsuan-Teh Hu (胡宣德)
Degree: Master's
Institution: National Cheng Kung University
Department: Department of Civil Engineering
Discipline: Engineering
Field of study: Civil Engineering
Document type: Academic thesis
Year of publication: 2020
Graduation academic year: 108 (2019–2020)
Language: Chinese
Number of pages: 106
Keywords (Chinese): 深度學習; YOLOv3; 混凝土損壞檢測; 影像辨識
Keywords (English): Deep Learning; YOLOv3; Concrete Damage Detection; Object Detection
Abstract (translated from Chinese):
Surface damage that accumulates over years of service on concrete structures can be identified through visual inspection, but such inspection depends on the experience of trained personnel and can be hazardous. Deep learning has developed rapidly in recent years and is now widely applied across many fields, so deep learning image-recognition models can be used to assist inspection. The goal of this study is therefore to build a deep learning model that identifies surface cracks and exposed rebar on reinforced concrete in real time.
The study consists of two main parts. The first builds a crack image-classification model, a six-layer neural network whose accuracy reaches 99.1%. The second builds a concrete damage object-detection model using a YOLOv3 network implemented with the Keras library. Samples collected under different conditions were used and damage classes were added incrementally, covering cracks, exposed rebar, and crack branches. The models were evaluated by mAP, training loss, and detection performance on video, and the best-performing model was then used for real-time detection tests.
Four object-detection models were trained: a general crack model, a bridge crack model, a two-class damage model, and a three-class damage model. The general crack model was trained under three conditions, each with 3,510 samples containing equal numbers of positive and negative samples, substituting different types of photographs for the same proportion of negative samples. The variant that substituted building photographs for negative samples achieved the highest crack AP, 83.61%, but performed poorly on video, so this approach was not pursued. The bridge crack model added bridge-inspection photographs on top of the general crack model; its highest crack AP was 69.38%, and its video performance exceeded that of the general crack model, so this approach was extended with an exposed-rebar class. The two-class damage model reached the highest mAP over all classes, 80.57%, and also performed best on video, so it was adopted for real-time detection testing. The three-class damage model added a crack-branch class to examine how that crack taxonomy affects crack-detection performance. After quantitative analysis and video comparison of the models above, the two-class damage model was chosen for the real-time detection system test.
Real-time testing was conducted at bridges around the Yanshui River in Annan District, Tainan City. The tests streamed video from a mobile phone to a computer, and separate videos were also recorded for offline detection so that real-time and video-based detection could be compared. The results show that both exposed rebar and cracks were detected well in real-time and video-based detection alike.
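The AP and mAP figures quoted above are the standard object-detection metrics: for each class, detections are ranked by confidence, a precision–recall curve is traced, and the area under its interpolated envelope is that class's AP; mAP is the mean over classes. A minimal sketch of all-point interpolated AP follows (the function name and the assumption that the PR points arrive sorted by increasing recall are mine, not the thesis's):

```python
def average_precision(recalls, precisions):
    """All-point interpolated AP: area under the precision envelope
    of a precision-recall curve, with points sorted by recall."""
    # Pad the curve: recall runs 0 -> 1, precision sentinels at both ends.
    r = [0.0] + list(recalls) + [1.0]
    p = [0.0] + list(precisions) + [0.0]
    # Make precision monotonically non-increasing (the "envelope"),
    # sweeping from the highest recall back to the lowest.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangular slices wherever recall increases.
    ap = 0.0
    for i in range(1, len(r)):
        ap += (r[i] - r[i - 1]) * p[i]
    return ap

# e.g. two detections: first correct (P=1.0, R=0.5), second wrong (P=0.5, R=1.0)
# average_precision([0.5, 1.0], [1.0, 0.5])  ->  0.75
```

This matches the interpolation scheme used by common open-source mAP evaluators such as the Cartucho/mAP tool cited in the references.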
Abstract (English):
Visual inspection is one of the most common approaches in the field of Structural Health Monitoring (SHM). However, the work relies heavily on the inspectors' knowledge and experience, leading to subjective assessments. Meanwhile, with the rapid development of Convolutional Neural Networks (CNNs), deep learning techniques have been widely adopted for damage detection. In this study, a real-time concrete surface damage detection system was developed based on the YOLOv3 network. The influence of different types of training datasets on model accuracy was also investigated.
The study is divided into three parts: image, video, and real-time object detection. First, an image classification model was developed to recognize cracked and uncracked concrete images. Second, for object detection in video, YOLOv3 was trained for crack and spalling detection using different types of datasets. The best-performing model was then adopted for real-time surface damage detection.
Finally, four locations in Tainan City were selected to validate the real-time damage detection model. The results show that the model performs well, with an AP of 79.78% for concrete crack detection and an AP of 81.35% for exposed rebar.
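In this kind of evaluation a detection only counts as a true positive when its predicted box overlaps a ground-truth box by at least a threshold intersection-over-union (commonly 0.5 in PASCAL-style AP). A minimal sketch of the IoU computation for axis-aligned `(x1, y1, x2, y2)` boxes (the function name is illustrative, not taken from the thesis):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero so disjoint boxes give no overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# e.g. two 2x2 boxes overlapping in a 1x1 corner:
# iou((0, 0, 2, 2), (1, 1, 3, 3))  ->  1/7
```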
Table of Contents:
Abstract I
Acknowledgments VII
Table of Contents VIII
List of Tables XI
List of Figures XII
Chapter 1 Introduction 1
1.1 Motivation and Objectives 1
1.2 Research Methods 2
1.3 Thesis Organization 2
Chapter 2 Literature Review 3
2.1 Development of Image Recognition under Deep Learning Frameworks 3
2.2 Deep Learning for Defect Recognition 4
2.3 Studies on Concrete Damage Image Recognition 4
Chapter 3 Research Methods 7
3.1 Convolutional Neural Networks 7
3.1.1 Input Data 7
3.1.2 Network Layers 8
3.1.3 Loss Functions 10
3.1.4 Optimizers 10
3.1.5 CNN Architecture 10
3.2 Keras Deep Learning Framework 11
3.2.1 The Keras Library 11
3.2.2 Hardware and Software Configuration 11
3.3 YOLOv3 Network Architecture 12
3.3.1 Residual Networks (ResNet) 12
3.3.2 Feature Pyramid Networks (FPN) 13
3.4 YOLOv3-tiny Network Architecture 14
3.5 Hyperparameters 15
3.6 Transfer Learning 16
3.7 Building the Dataset 17
3.8 Damage Classes 20
3.8.1 Crack 20
3.8.2 Exposed Rebar (Spall) 20
3.8.3 Crack Branch 20
3.9 Accuracy and Error 21
3.9.1 mAP 21
3.9.2 The YOLO Loss Function 23
3.9.3 Video Performance 25
3.10 Real-Time Detection Tests 25
3.10.1 YOLOv3 Real-Time Detection Test 25
3.10.2 YOLOv3-tiny Real-Time Detection Test 25
Chapter 4 Crack Image Classification 27
4.1 Experiment Overview 27
4.2 Experimental Procedure 27
4.3 Network Structure 28
4.4 Image Classification Results 29
Chapter 5 Object Detection in Video 34
5.1 Experiment Overview 34
5.1.1 General Crack Model 34
5.1.2 Bridge Crack Model 35
5.1.3 Two-Class Damage Model 36
5.1.4 Three-Class Damage Model 36
5.2 Experimental Procedure 37
5.3 Object Detection Training Results 41
5.3.1 General Crack Model 41
5.3.2 Bridge Crack Model 45
5.3.3 Two-Class Damage Model 47
5.3.4 Three-Class Damage Model 54
5.3.5 Summary of Training Results 57
5.4 Real-Time Detection Tests 57
5.4.1 YOLOv3 Real-Time Test Results 59
5.4.2 YOLOv3-tiny Real-Time Test Results 71
Chapter 6 Conclusions 74
6.1 Summary 74
6.2 Future Work 75
References 77
Appendix 1: Anaconda Virtual Environment Packages 81
Appendix 2: Crack Image Classification Python Script [33] 88
Appendix 3: Keras-YOLOv3 train.py Training Script [41] 95
Appendix 4: Keras-YOLOv3 Real-Time Test Python Script [41] 101
References:
[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proc. IEEE, 1998.
[2] O. Russakovsky et al., "ImageNet Large Scale Visual Recognition Challenge," Int. J. Comput. Vis., vol. 115, no. 3, pp. 211–252, 2015, doi: 10.1007/s11263-015-0816-y.
[3] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Adv. Neural Inf. Process. Syst., vol. 2, pp. 1097–1105, 2012.
[4] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," 3rd Int. Conf. Learn. Represent. (ICLR 2015), Conf. Track Proc., pp. 1–14, 2015.
[5] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 770–778, 2016, doi: 10.1109/CVPR.2016.90.
[6] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 580–587, 2014, doi: 10.1109/CVPR.2014.81.
[7] R. Girshick, "Fast R-CNN," Proc. IEEE Int. Conf. Comput. Vis., pp. 1440–1448, 2015, doi: 10.1109/ICCV.2015.169.
[8] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137–1149, 2017, doi: 10.1109/TPAMI.2016.2577031.
[9] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal loss for dense object detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 2, pp. 318–327, 2018, doi: 10.1109/TPAMI.2018.2858826.
[10] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, "Feature pyramid networks for object detection," in Proc. 30th IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR 2017), 2017, doi: 10.1109/CVPR.2017.106.
[11] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 779–788, 2016, doi: 10.1109/CVPR.2016.91.
[12] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," Tech. Rep., pp. 1–6, 2018.
[13] C. Ralphs, "Better, faster, stronger," TLS - Times Lit. Suppl., no. 6009, p. 28, 2018.
[14] A. Ramcharan et al., "Assessing a mobile-based deep learning model for plant disease surveillance," 2018.
[15] L. Gao, Y. He, X. Sun, X. Jia, and B. Zhang, "Incorporating negative sample training for ship detection based on deep learning," Sensors (Switzerland), vol. 19, no. 3, 2019, doi: 10.3390/s19030684.
[16] Y. Li, Z. Han, H. Xu, L. Liu, X. Li, and K. Zhang, "YOLOv3-lite: A lightweight crack detection network for aircraft structure based on depthwise separable convolutions," Appl. Sci., vol. 9, no. 18, 2019, doi: 10.3390/app9183781.
[17] D. Han and G. Tang, "Damage detection of quayside crane structure based on improved Faster R-CNN," Int. J. New Dev. Eng. Soc., vol. 3, no. 1, pp. 284–301, 2019, doi: 10.25236/IJNDES.190238.
[18] 赵庆安, "Surface damage identification and localization of historic masonry buildings based on deep learning methods" (in Chinese), Dalian University of Technology, 2017.
[19] Z. Fan, Y. Wu, J. Lu, and W. Li, "Automatic pavement crack detection based on structured prediction with the convolutional neural network," pp. 1–9, 2018.
[20] H.-W. Huang, Q.-T. Li, and D.-M. Zhang, "Deep learning based image recognition for crack and leakage defects of metro shield tunnel," Tunn. Undergr. Sp. Technol., vol. 77, pp. 166–176, 2018, doi: 10.1016/j.tust.2018.04.002.
[21] W. Silva and D. Lucena, "Concrete cracks detection based on deep learning image classification," Proceedings, vol. 2, no. 8, p. 489, 2018, doi: 10.3390/icem18-05387.
[22] L. Yang, B. Li, W. Li, Z. Liu, G. Yang, and J. Xiao, "A robotic system towards concrete structure spalling and crack database," 2017 IEEE Int. Conf. Robot. Biomimetics (ROBIO 2017), pp. 1–6, 2018, doi: 10.1109/ROBIO.2017.8324593.
[23] L. Yang, B. Li, W. Li, Z. Liu, G. Yang, and J. Xiao, "Deep concrete inspection using unmanned aerial vehicle towards CSSC database," Int. Conf. Intell. Robot. Syst., 2017.
[24] Y.-J. Cha, W. Choi, G. Suh, S. Mahmoudkhani, and O. Büyüköztürk, "Autonomous structural visual inspection using region-based deep learning for detecting multiple damage types," Comput. Civ. Infrastruct. Eng., vol. 33, no. 9, pp. 731–747, 2018, doi: 10.1111/mice.12334.
[25] C. Zhang, C. C. Chang, and M. Jamshidi, "Bridge damage detection using a single-stage detector and field inspection images," 2018.
[26] 楊松儒, "A deep-learning-based detection system for pavement damage and valve boxes" (in Chinese), National Taiwan Normal University, 2019.
[27] S. Murao, Y. Nomura, H. Furuta, and C. W. Kim, "Concrete crack detection using UAV and deep learning," 13th Int. Conf. Appl. Stat. Probab. Civ. Eng. (ICASP 2019), 2019.
[28] A. Satoshi, Y. Nobuyoshi, and F. Tomohiro, "Comparison of deep learning model precision for detecting concrete deterioration types from digital images," Computing in Civil Engineering 2019, pp. 196–203, 2019, doi: 10.1061/9780784482445.025.
[29] Ç. F. Özgenel, "Concrete Crack Images for Classification," 2019.
[30] 施威銘研究室, tf.keras 技術者們必讀!深度學習攻略手冊 (in Chinese). Taipei: 旗標, 2020.
[31] 斎藤康毅, Deep Learning:用Python進行深度學習的基礎理論實作 (Chinese translation), 1st ed. Taipei: 碁峰資訊, 2017.
[32] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: A simple way to prevent neural networks from overfitting," J. Mach. Learn. Res., vol. 15, pp. 1929–1958, 2014.
[33] F. Chollet, Deep Learning with Python (Chinese edition: 深度學習必讀:Keras 大神帶你用 Python 實作). Taipei: 旗標, 2019.
[34] A. Kathuria, "What's new in YOLO v3?" [Online]. Available: https://towardsdatascience.com/yolo-v3-object-detection-53fb7d3bfe6b. [Accessed: 27-May-2020].
[35] S. Ding, F. Long, H. Fan, L. Liu, and Y. Wang, "A novel YOLOv3-tiny network for unmanned airship obstacle detection," in 2019 IEEE 8th Data Driven Control and Learning Systems Conference (DDCLS), 2019, pp. 277–281, doi: 10.1109/DDCLS.2019.8908875.
[36] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
[37] T.-Y. Lin et al., "Microsoft COCO: Common objects in context," Lect. Notes Comput. Sci., vol. 8693, pp. 740–755, 2014, doi: 10.1007/978-3-319-10602-1_48.
[38] 財團法人中華顧問工程司, "Discussion of common deterioration patterns of concrete bridges" (in Chinese), 2017.
[39] J. Cartucho, "mAP - This code evaluates the performance of your neural net for object recognition," 2019. [Online]. Available: https://github.com/Cartucho/mAP. [Accessed: 02-Oct-2019].
[40] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The PASCAL visual object classes (VOC) challenge," Int. J. Comput. Vis., vol. 88, no. 2, pp. 303–338, 2010, doi: 10.1007/s11263-009-0275-4.
[41] qqwweee, "keras-yolo3 - A Keras implementation of YOLOv3 (Tensorflow backend)," 2018. [Online]. Available: https://github.com/qqwweee/keras-yolo3. [Accessed: 01-Oct-2019].
[42] T. Lin, "labelImg - A graphical image annotation tool to label object bounding boxes in images," 2015. [Online]. Available: https://github.com/tzutalin/labelImg. [Accessed: 25-Sep-2019].