
臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: YAO, SHENG-MIN (姚勝閔)
Title: Combined with image recognition to detect marine debris and increase marine positioning accuracy (結合影像辨識用於偵測海漂垃圾與增加海上定位精度之應用)
Advisor: Huang, Kai-Hsiang (黃凱翔)
Oral Defense Committee: Huang, Kai-Hsiang (黃凱翔); Yu, Jyh-Cheng (余志成); Su, Tung-Ching (蘇東青)
Oral Defense Date: 2022-07-28
Degree: Master's
Institution: National Kaohsiung University of Science and Technology (國立高雄科技大學)
Department: Department of Civil Engineering (土木工程系)
Discipline: Engineering
Field: Civil Engineering
Document Type: Academic thesis
Year of Publication: 2022
Graduation Academic Year: 110 (2021–2022)
Language: Chinese
Number of Pages: 91
Keywords (Chinese): 演算法 (algorithm); 深度學習 (deep learning); 海漂垃圾 (sea-drifting garbage)
Keywords (English): algorithm; deep learning; sea drifting garbage
Metrics:
  • Cited by: 0
  • Views: 103
  • Downloads: 0
  • Bookmarked: 0
The development of human economic activity is often accompanied by pollution of the ecological environment, and large amounts of marine debris drift with the ocean currents; this pollution can no longer be ignored. Marine debris (PET bottles, plastic bottle caps, straws, plastic bags, and similar items) largely drifts from Southeast Asia and elsewhere along currents and waves to the coasts around Taiwan. It not only spoils the coastal landscape but, more seriously, brings marine pollution and even threatens marine life; how to recover this debris is a major challenge for sustainable ocean development.
This study combines artificial-intelligence image interpretation with marine meteorological information to build an automatic marine-debris detection framework, and uses an unmanned surface vehicle (USV) to collect the debris, so that sea-drifting garbage can be detected more efficiently and accurately. Focusing on debris monitoring in coastal and harbor areas, fixed and mobile camera units are installed and aerial images are captured by UAV. Three deep-learning algorithms are trained to recognize marine debris: (1) Mask R-CNN (Mask Region-based Convolutional Neural Network), which performs instance segmentation, labeling each pixel with its class and localizing object features; and (2) YOLO v3 (You Only Look Once v3) and (3) SSD (Single Shot MultiBox Detector), which are trained on bounding-box annotations to detect the location and type of garbage. The detection framework is applied to monitor navigation hotspots in the port area. Once the extent and position of the debris have been detected, a debris-sweeping vessel automatically plans a path to collect it. When the vessel returns after cleanup, wave motion degrades signal reception, so GPS positioning accuracy alone is insufficient. This study therefore places coordinate images at prominent locations in the harbor; the autonomous collection vessel recognizes the information in these images and, using the principle of triangulation, computes its current coordinates to assist GPS positioning, achieving the goal of automatic monitoring of sea-surface pollution, cleanup, and return to port.
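The three detectors are compared through confusion matrices and an accuracy assessment (Section 3.5 of the table of contents below). A standard building block for scoring a detector against ground truth is the Intersection-over-Union (IoU) of predicted and annotated bounding boxes. A minimal sketch in Python (the helper name and the `(x1, y1, x2, y2)` box convention are our illustration, not taken from the thesis):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2).

    Returns a value in [0, 1]: 0 for disjoint boxes, 1 for identical boxes.
    """
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 is common); the resulting true/false positive and negative counts populate the confusion matrix used to rank the algorithms.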
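The triangulation step can be illustrated with plane geometry: if the vessel measures the bearings to two harbor landmarks whose coordinates are known, its own position is the intersection of the two bearing lines. A minimal sketch under those assumptions (the function name and the angle convention, measured from the +x axis, are ours; the thesis's actual formulation may differ):

```python
import math


def triangulate_position(p1, bearing1, p2, bearing2):
    """Estimate the observer's (x, y) position from bearings to two known landmarks.

    p1, p2: landmark coordinates; bearing1, bearing2: angles in radians,
    measured from the +x axis, from the observer toward each landmark.
    The observer lies at distance t along each bearing line, so
    observer = p_i - t_i * d_i, where d_i is the bearing's unit vector.
    """
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # Equating the two expressions for the observer gives the 2x2 system
    #   t1*d1 - t2*d2 = p1 - p2,
    # solved here by Cramer's rule.
    a, b = d1[0], -d2[0]
    c, d = d1[1], -d2[1]
    det = a * d - b * c
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; position is ambiguous")
    ex, ey = p1[0] - p2[0], p1[1] - p2[1]
    t1 = (ex * d - b * ey) / det
    # Step back from landmark 1 along its bearing to reach the observer.
    return (p1[0] - t1 * d1[0], p1[1] - t1 * d1[1])
```

In the thesis's setting the landmark coordinates would be read from the coordinate images placed around the harbor, and the computed position would be fused with GPS rather than replace it.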

Table of Contents

Abstract I
Table of Contents III
List of Tables VI
List of Figures VII
Chapter 1: Introduction 1
1.1 Research Background 1
1.2 Research Motivation 2
1.3 Research Objectives 2
1.4 Research Workflow 3
Chapter 2: Literature Review 6
2.1 Marine Debris Cleanup Devices 6
2.1.1 Mr. Trash Wheel 6
2.1.2 Seabin 7
2.1.3 The Ocean Cleanup 7
2.1.4 Jellyfishbot 8
2.1.5 WasteShark 8
2.1.6 Clearbot Autonomous Trash Collector 9
2.1.7 Comparison of Cleanup Devices 10
2.2 Algorithms 11
2.2.1 CNN (Convolutional Neural Network) 13
2.2.2 Fast R-CNN (Fast Region-based Convolutional Neural Network) 14
2.2.3 Faster R-CNN (Faster Region-based Convolutional Neural Network) 14
2.2.4 Mask R-CNN (Mask Region-based Convolutional Neural Network) 15
2.2.5 YOLO v3 (You Only Look Once v3) 17
2.2.6 SSD (Single Shot MultiBox Detector) 18
2.2.7 Object Tracking 19
2.3 Camera Calibration 20
2.3.1 Collinearity Equations 20
2.3.2 Solving the Homography Matrix 22
2.3.3 Solving the Camera Parameters 23
2.4 Positioning Methods 23
2.4.1 Triangulation 23
Chapter 3: Methodology 24
3.1 Equipment 25
3.2 Experimental Environment 28
3.3 Materials 29
3.4 Image Interpretation Algorithms 30
3.4.1 Mask R-CNN 30
3.4.2 YOLO v3 (You Only Look Once) 33
3.4.3 SSD (Single Shot MultiBox Detector) 35
3.4.4 Triangulation 37
3.5 Accuracy Assessment 41
Chapter 4: Results 43
4.1 Mask R-CNN Results 43
4.1.1 Model Performance Evaluation 43
4.1.2 Visualization Results 45
4.2 YOLO v3 Results 47
4.2.1 Model Performance Evaluation 47
4.2.2 Visualization Results 48
4.3 SSD Results 51
4.3.1 Model Performance Evaluation 51
4.3.2 Visualization Results 53
4.3.3 Heading Direction 56
4.4 Algorithm Performance Comparison 57
4.5 Visualization Comparison 58
4.6 Triangulation Results 59
4.6.1 Instrument Placed at the Center 60
4.6.2 Instruments Placed at the Left and Right 65
4.7 Triangulation Visualization Comparison 70
4.8 Trends in the Observations 73
Chapter 5: Conclusions and Recommendations 76
5.1 Conclusions 76
(1) Building an Automated Marine Debris Interpretation System 76
(2) Testing the Image Recognition Algorithms 76
From the resulting confusion matrices the ranking is: SSD > YOLO v3 > Mask R-CNN 76
Conclusions for each algorithm are given below 76
(3) Triangulation 77
5.2 Recommendations 78
References 79
Author Biography 82


1、 Chipman, J. W., Lillesand, T. M., Schmaltz, J. E., Leale, J. E., & Nordheim, M. J. (2004). Mapping lake water clarity with Landsat images in Wisconsin, USA. Canadian Journal of Remote Sensing, 30(1), 1-7.
2、 Du, J. (2018). Understanding of object detection based on CNN family and YOLO. Journal of Physics: Conference Series,
3、 Girshick, R. (2015). Fast r-cnn. Proceedings of the IEEE international conference on computer vision,
4、 Harikrishnan, P., Thomas, A., Gopi, V. P., Palanisamy, P., & Wahid, K. A. (2021). Inception single shot multi-box detector with affinity propagation clustering and their application in multi-class vehicle counting. Applied Intelligence, 51(7), 4714-4729.
5、 He, X., Du, X., Wang, X., Tian, F., Tang, J., & Chua, T.-S. (2018). Outer product-based neural collaborative filtering. arXiv preprint arXiv:1808.03912.
6、 Kirillov, A., Girshick, R., He, K., & Dollár, P. (2019). Panoptic feature pyramid networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,
7、 Li, M., Zhang, Z., Lei, L., Wang, X., & Guo, X. (2020). Agricultural greenhouses detection in high-resolution satellite images based on convolutional neural networks: Comparison of faster R-CNN, YOLO v3 and SSD. Sensors, 20(17), 4938.
8、 Li, W. (2021). Analysis of object detection performance based on Faster R-CNN. Journal of Physics: Conference Series,
9、 Maity, M., Banerjee, S., & Chaudhuri, S. S. (2021). Faster r-cnn and yolo based vehicle detection: A survey. 2021 5th International Conference on Computing Methodologies and Communication (ICCMC),
10、 Mao, Q.-C., Sun, H.-M., Liu, Y.-B., & Jia, R.-S. (2019). Mini-YOLOv3: real-time object detector for embedded applications. Ieee Access, 7, 133529-133538.
11、 Meng, R., Rice, S. G., Wang, J., & Sun, X. (2018). A fusion steganographic algorithm based on faster R-CNN. Computers, Materials & Continua, 55(1), 1-16.
12、 Ning, C., Zhou, H., Song, Y., & Tang, J. (2017). Inception single shot multibox detector for object detection. 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW),
13、 Rahadian, F. A. (2019). 用於人臉驗證的緊湊且低成本的卷積神經網路 [A compact and low-cost convolutional neural network for face verification]. National Central University.
14、 Rastegari, M., Ordonez, V., Redmon, J., & Farhadi, A. (2016). XNOR-Net: ImageNet classification using binary convolutional neural networks. European Conference on Computer Vision,
15、 Redmon, J., & Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767.
16、 Ren, F., Lu, B.-R., Li, S., Huang, J., & Zhu, Y. (2003). A comparative study of genetic relationships among the AA-genome Oryza species using RAPD and SSR markers. Theoretical and Applied Genetics, 108(1), 113-120.
17、 Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28.
18、 Tian, Y., Yang, G., Wang, Z., Wang, H., Li, E., & Liang, Z. (2019). Apple detection during different growth stages in orchards using the improved YOLO-V3 model. Computers and electronics in agriculture, 157, 417-426.
19、 Tian, Z., Shen, C., Chen, H., & He, T. (2019). Fcos: Fully convolutional one-stage object detection. Proceedings of the IEEE/CVF international conference on computer vision,
20、 Wang, P., Chen, P., Yuan, Y., Liu, D., Huang, Z., Hou, X., & Cottrell, G. (2018). Understanding convolution for semantic segmentation. 2018 IEEE winter conference on applications of computer vision (WACV),
21、 Wang, Q., Zhang, X., Chen, G., Dai, F., Gong, Y., & Zhu, K. (2018). Change detection based on Faster R-CNN for high-resolution remote sensing images. Remote sensing letters, 9(10), 923-932.
22、 Yan, L. (2000). Recognizing handwritten characters. Computer (clean), 500.
23、 Zhang, J., Huang, M., Jin, X., & Li, X. (2017). A real-time chinese traffic sign detection algorithm based on modified YOLOv2. Algorithms, 10(4), 127.
24、 Zhao, T., Liu, J., & Shen, Q. (2019). An Improved Multi-Gate Feature Pyramid Network. Acta Optica Sinica, 39(8), 0815005.
25、 Zou, Z., Shi, Z., Guo, Y., & Ye, J. (2019). Object detection in 20 years: A survey. arXiv preprint arXiv:1905.05055.
26、 唐哲峰. (2017). 使用新激活函數的人臉辨識深度神經網路之實作 [Implementation of a deep neural network for face recognition using new activation functions].
27、 楊大吉, & 陳任芳. (2004). 花蓮區植物疫情之偵測與監測 [Detection and monitoring of plant pest outbreaks in the Hualien area]. 花蓮區農業專訊, 48, 18-20.
28、 潘偉庭. (2014). 應用多來源影像進行影像式模型重建及精度評估指標建立 [Image-based model reconstruction from multi-source imagery and establishment of accuracy assessment metrics].

Electronic full text (publicly available online from 2025-09-05)