National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 洪銘鴻
Author (English): Hung, Ming-Hung
Title (Chinese): 改良式YOLOv3深度學習網路應用於船舶影像分類
Title (English): Modified YOLOv3 Applied to Ship Image Classification
Advisor: 張麗娜
Advisor (English): Lena Chang
Committee Members: 張順雄、張陽郎
Committee Members (English): Shun-Hsyung Chang, Yang-Lang Chang
Oral Defense Date: 2020-07-13
Degree: Master's
Institution: 國立臺灣海洋大學 (National Taiwan Ocean University)
Department: 通訊與導航工程學系 (Department of Communications and Navigation Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2020
Graduation Academic Year: 108
Language: Chinese
Pages: 58
Keywords (Chinese): 船舶影像分類、深度學習、YOLO網路、YOLOv3
Keywords (English): ship image classification, deep learning, YOLO, YOLOv3
Table of Contents

Abstract (in Chinese) ... III
Abstract (in English) ... IV
List of Figures ... VII
List of Tables ... IX
Chapter 1  Introduction ... 1
 1.1  Research Motivation ... 1
 1.2  Research Objectives ... 1
 1.3  Thesis Organization ... 3
Chapter 2  Literature Review on Object Detection ... 4
 2.1  Convolutional Neural Networks ... 5
  2.1.1  Convolutional Layer ... 5
  2.1.2  Pooling Layer ... 6
  2.1.3  Fully Connected Layer ... 7
 2.2  R-CNN and Fast R-CNN ... 7
 2.3  Faster R-CNN ... 9
 2.4  SSD: Single Shot MultiBox Detector ... 12
 2.5  YOLO ... 13
  2.5.1  YOLOv1 ... 14
  2.5.2  YOLOv2 ... 16
  2.5.3  YOLOv3 ... 23
Chapter 3  Research Methods ... 26
 3.1  Introduction to the Data Sources ... 26
 3.2  Construction of the Ship Image Dataset ... 28
 3.3  Deep Learning Network Models for Intelligent Ship Detection ... 30
 3.4  Improvement and Simplification of the Ship Classification Network ... 30
 3.5  Performance Evaluation of the Deep Learning Networks ... 34
Chapter 4  Experimental Results and Discussion ... 35
 4.1  Performance Comparison of YOLO-Type Networks ... 35
 4.2  Performance Comparison of Input Image Sizes ... 38
 4.3  Detection-Scale Experiments with the Modified YOLOv3 Network ... 42
 4.4  Experiments on Adjusting Convolutional Channel Parameters ... 45
 4.5  Discussion of Experimental Results ... 52
Chapter 5  Conclusions and Suggestions ... 54
 5.1  Conclusions ... 54
 5.2  Suggestions ... 54
References ... 55
References

[1] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” IJCV, Vol. 60, No. 2, pp. 91-110, 2004
[2] W. Cheung and G. Hamarneh, “N-SIFT: N -dimensional scale invariant feature transform,” IEEE Transactions on Image Processing, Vol. 18, No. 9, pp. 2012-2021, 2009
[3] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), Vol. 1, pp. 886-893, 2005
[4] C. Cortes, V. Vapnik, “Support-vector networks,” Machine Learning, Vol. 20, No. 3, pp. 273-297, 1995
[5] A. Ben-Hur, D. Horn, H. Siegelmann, and V. Vapnik, “Support vector clustering,” Journal of Machine Learning Research, Vol. 2, pp. 125-137, 2001
[6] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, pp. 1097-1105, 2012
[7] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23-28 June 2014, pp. 580-587, 2014
[8] R. Girshick, “Fast R-CNN” In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1440-1448, 2015
[9] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems, pp. 91-99, 2015
[10] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu and A. C. Berg, “SSD: Single Shot MultiBox Detector,” ArXiv e-prints, pp. 4-21, 2015.
[11] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: unified, real-time object detection,” In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition(CVPR), pp. 779-788, 2016
[12] J. Redmon and A. Farhadi, “YOLO9000: better, faster, stronger,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517-6525, November 2017
[13] J. Redmon and A. Farhadi “YOLOv3: an incremental improvement,” Computer Science, arXiv 1804.02767, 2018
[14] M. Everingham, L. V. Gool, C. K. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes (VOC) challenge,” Int. J. Comput. Vis., Vol. 88, No. 2, pp. 303–338, 2010
[15] R. Zhang, J. Yao, K. Zhang, C. Feng, and J. Zhang, “S-CNN ship detection from high-resolution remote sensing images,” in Proc. Int. Congr. Arch. Photogramm., Remote Sens. Spatial Inf. Sci., pp. 423–430, 2016
[16] J. Xu, X. Sun, D. Zhang, and K. Fu, “Automatic detection of inshore ships in high-resolution remote sensing images using robust invariant generalized Hough transform,” IEEE Geosci. Remote Sens. Lett., Vol. 11, No. 12, pp. 2070-2074, Dec. 2014
[17] G. Liu et al., “A new method on inshore ship detection in high-resolution satellite images using shape and context information,” IEEE Geosci. Remote Sens. Lett., Vol. 11, No. 3, pp. 617-621, Mar. 2014
[18] Y. D. Yu, X. B. Yang, S. J. Xiao, and J. L. Lin, “Automated ship detection from optical remote sensing image,” IEEE Geoscience & Remote Sensing Letters. Vol. 9, pp. 749-753, 2012
[19] C. Zhu, H. Zhou, R. Wang, and J. Guo, “A novel hierarchical method of ship detection from spaceborne optical image based on shape and texture features,” IEEE Transactions on Geoscience & Remote Sensing, Vol. 48, pp. 3446-3456, 2010
[20] X. Yang, H. Sun, K. Fu, J. Yang, X. Sun, M. Yan, and Z. Guo, “Automatic ship detection of remote sensing images from google earth in complex scenes based on multi-scale rotation dense feature pyramid,” Remote Sens., Vol. 10, No. 1: 132, 2018
[21] M. Tello, C. Lopez-Martinez, and J. J. Mallorqui, “A novel algorithm for ship detection in SAR imagery based on the wavelet transform,” IEEE Geosci. Remote Sens. Lett., Vol. 2, No. 2, pp. 201-205, Apr. 2005
[22] X. Xing, K. Ji, L. Kang, and M. Zhan, “Review of ship surveillance technologies based on high-resolution wide-swath synthetic aperture radar imaging,” J. Radars, Vol. 4, No. 1, pp. 107-121, 2015
[23] Y. L. Chang, A. Anagaw, Lena Chang, Y. Wang, C. Hsiao, and W. Lee, “Ship detection based on YOLOv2 for SAR imagery,” Remote Sensing, MDPI, Vol. 11, No. 7: 786, 2019
[24] X. Wang and C. Chen, “An automatic ship detection method based on local gray-level gathering characteristics in SAR Imagery,” Electronic Letters on Computer Vision and Image Analysis, Vol. 12, No. 1, pp. 33-41, 2013
[25] M. U. Selvi and S. S. Kumar, “A novel approach for ship recognition using shape and texture,” International Journal of Advanced Information Technology, Vol. 1, No. 5, pp. 23-29, 2011
[26] 張麗娜 and 陳威霖, “Application of hierarchical image segmentation to ship detection in SAR images” (in Chinese), 船舶科技 (Ship Technology), No. 50, pp. 1-13, 2018
[27] W. Tao, H. Jin, and J. Liu, “Unified mean shift segmentation and graph region merging algorithm for infrared ship target segmentation,” Opt. Eng., Vol. 46, No. 12, pp. 127002-1-127002-7, 2007
[28] S. R. Rotman, “Region-of-interest-based algorithm for automatic target detection in infrared images,” Opt. Eng., Vol. 44, No. 7, pp. 166-169, Jul. 2005
[29] M. Ren and Z. Tang, “One effective method for ship recognition in ship locks,” Proc. SPIE, Vol. 3720, pp. 467-472, Apr. 1999.
[30] 洪銘鴻 and 張麗娜, “Application of deep learning to ship image classification” (in Chinese), 中華民國系統工程研討會 (R.O.C. Systems Engineering Conference), Jun. 2020
[31] SuperDataScience Team, “Convolutional Neural Networks (CNN): Step 3 - Flattening,” Aug. 2018
https://www.superdatascience.com/blogs/convolutional-neural-networks-cnn-step-3-flattening
[32] Convolutional Neural Networks (CNNs / ConvNets)
https://cs231n.github.io/convolutional-networks/
[33] MAX-Pooling
https://embarc.org/embarc_mli/doc/build/html/MLI_kernels/pooling_max.html
[34] Fully Connected Layers in Convolutional Neural Networks: The Complete Guide
https://missinglink.ai/guides/convolutional-neural-networks/fully-connected-layers-convolutional-neural-networks-complete-guide/
[35] OpenCV tutorial: implementing the Selective Search object detection region proposal algorithm (in Chinese)
https://blog.gtwang.org/programming/selective-search-for-object-detection/
[36] R-CNN, the pioneering work of object detection: its key ideas and methods (part 1) (in Chinese)
https://kknews.cc/news/e4l2mny.html
[37] Object Detection with Pytorch-Lightning
https://www.kaggle.com/artgor/object-detection-with-pytorch-lightning
[38] Leyan Bin Veon, “YOLO v2 object detection: paper summary” (in Chinese), May 2019
https://medium.com/程式工作紡/yolo-v2-物件偵測-論文整理-a8e11d8b4409
[39] Understanding residual networks (in Chinese)
https://www.itread01.com/content/1545338181.html
[40] Santosh GSK, “Training Object Detection (YOLOv2) from scratch using Cyclic Learning Rates,” Mar. 2018
https://towardsdatascience.com/training-object-detection-yolov2-from-scratch-using-cyclic-learning-rates-b3364f7e4755
[41] Object detection: YOLOv2 principles and implementation, with YOLOv3 (in Chinese)
https://blog.csdn.net/hejin_some/article/details/80581789
[42] YOLOv3: introducing FPN and multi-scale detection (object detection, one-stage, deep learning, CVPR 2018) (in Chinese)
https://blog.csdn.net/Gentleman_Qin/article/details/84350496
[43] [Paper review] YOLOv3: An Incremental Improvement (in Chinese)
https://allen108108.github.io/blog/2020/02/15/%5b%E8%AB%96%E6%96%87%5d%20YOLOv3%20_%20An%20Incremental%20Improvement/
Electronic full text (Internet release date: 2025-07-20)