
Taiwan National Digital Library of Theses and Dissertations (臺灣博碩士論文加值系統)


Detailed Record

Author: 徐苓毓 (Ling-Yuh Hsu)
Title: 基於深度學習中數據增強有效性研究─ 以甘藍菜葉面病蟲害檢測為例
Title (English): Study on Effectiveness of Data Augmentation for Deep Learning─ A Case Study of the Detection of Cabbage Leaf Diseases and Pests
Advisor: 游竹 (Chu Yu)
Committee members: 徐鈴淵 (Ling-Yuan Hsu), 張介仁 (Jieh-Ren Chang)
Oral defense date: 2022-09-23
Degree: Master's
Institution: National Ilan University (國立宜蘭大學)
Department: Master's Program in Electronic Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2022
Graduation academic year: 110 (2021-2022)
Language: Chinese
Pages: 51
Keywords (Chinese): 生成對抗網路、甘藍葉面病蟲害檢測、YOLOv5、數據增強
Keywords (English): Generative Adversarial Network, Cabbage Detection, YOLOv5, Data Augmentation
Statistics:
  • Cited by: 2
  • Views: 206
  • Downloads: 49
  • Bookmarked: 0
In recent years, the concept of precision agriculture has gradually matured, and artificial intelligence is increasingly being applied to agriculture in practice. Practical deployment, however, demands more precise identification before real labor savings can be achieved. Taking the detection of cabbage leaf diseases and pests as a case study, this thesis investigates the effectiveness of the data augmentation techniques used with the deep learning model YOLOv5. The training dataset was collected from the internet and contains images of symptoms such as leaf miners, mold, and diamondback moths; based on the YOLOv5 object detection model, a cabbage disease and pest recognition system was built to achieve accurate and fast identification. In addition to examining the effectiveness of data augmentation, and because the dataset contains too few samples, we used a generative adversarial network to enlarge the sample set and thereby improve the overall detection performance. Experimental results show that training with the default parameters on the original samples yields a mean Average Precision (mAP) of 0.81; after enlarging the dataset with GAN-generated samples, the same training settings raise the mAP to 0.93, a 14% improvement over the original samples and a clear gain in detection capability. We also found that, while maintaining the original mAP, using a single data augmentation technique shortens training time from 1 hour 43 minutes to about 1 hour 20 minutes, a saving of roughly 22%.
In recent years, the concept of precision agriculture has gradually matured, and artificial intelligence has been increasingly applied to agriculture in practice. Practical applications, however, require more precise identification in order to actually achieve labor savings. This thesis takes the detection of pests and diseases on cabbage as an example to explore the effectiveness of data augmentation techniques in the application of the deep learning model YOLOv5. The training dataset was collected from the internet and includes disease and pest symptoms such as leaf miners, mildews, and diamondback moths. Based on YOLOv5, a pest identification system is established to achieve accurate and rapid disease and pest identification. In addition to discussing the effectiveness of data augmentation, this thesis uses generative adversarial networks to enlarge the sample set when the dataset is too small, improving the identification of pests and diseases on cabbage. According to the experimental results, the mean average precision (mAP) was 0.81 when using the default training parameters and the original sample size. After increasing the sample size with generative adversarial networks, the mAP rose to 0.93 under the same training parameters, 14% higher than with the original dataset. We also find that using a single data augmentation technique reduces training time from 1 hour 43 minutes to about 1 hour 20 minutes, roughly a 22% reduction, while maintaining the original mAP performance.
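As a quick sanity check, the relative gains reported in the abstract can be recomputed from its raw numbers (a minimal sketch; the minute values 103 and 80 are simply the reported 1 h 43 min and roughly 1 h 20 min):

```python
# Recompute the relative improvements stated in the abstract.
baseline_map, augmented_map = 0.81, 0.93
map_gain = (augmented_map - baseline_map) / baseline_map

baseline_min, reduced_min = 103, 80  # 1 h 43 min vs. about 1 h 20 min
time_saving = (baseline_min - reduced_min) / baseline_min

print(f"relative mAP gain: {map_gain:.1%}")       # ~14.8%, reported as 14%
print(f"training time saved: {time_saving:.1%}")  # ~22.3%, reported as about 22%
```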
Table of Contents

Chinese Abstract
Abstract
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Research Background
1.2 Research Motivation
1.3 Thesis Organization
Chapter 2 Background
2.1 Artificial Neural Networks
2.2 Convolutional Neural Networks
2.2.1 Convolution Layers
2.2.2 Pooling Layers
2.2.3 Activation Functions
2.2.4 Batch Normalization (BN)
2.2.5 Fully Connected Layers
2.3 Backpropagation Algorithm
2.4 Generative Adversarial Networks
2.5 Object Detection
2.5.1 Two-Stage Object Detectors
2.5.2 One-Stage Object Detectors
Chapter 3 Methodology
3.1 Overview
3.2 Data Collection
3.2.1 Image Preprocessing
3.3 Dataset Training
3.3.1 Training with the Original YOLOv5
3.3.2 Image Generation and Selection with the Generative Network Module
3.3.3 Training with Data Augmentation
Chapter 4 Experimental Results
4.1 Evaluation Metrics
4.2 Confusion Matrices
4.3 Detection Results
Chapter 5 Conclusions and Future Work
References


List of Figures

Figure 2-1: Diagram of a neuron
Figure 2-2: Convolution operation
Figure 2-3: Pooling computation
Figure 2-4: Activation function curves
Figure 2-5: Optimization process with and without normalization [7]
Figure 2-6: Complete architecture of a convolutional neural network (CNN) [8]
Figure 2-7: Generative adversarial network [10]
Figure 2-8: Development of object detection [11]
Figure 2-9: Processing flow of the Faster R-CNN algorithm [14]
Figure 2-10: Processing flow of the Mask R-CNN algorithm [15]
Figure 2-11: Performance benchmarks of YOLOv5 [17]
Figure 2-12: YOLOv5 architecture [17]
Figure 3-1: System architecture
Figure 3-2: Mold infection
Figure 3-3: Leaf miner
Figure 3-4: Diamondback moth
Figure 3-5: Output of the generative network: (left) input image, (right) generated image
Figure 3-6: Mixup
Figure 3-7: Perspective transform
Figure 3-8: Mosaic training
Figure 3-9: Scaling
Figure 3-10: Translation
Figure 3-11: Shear
Figure 3-12: Vertical flip
Figure 3-13: Horizontal flip
Figure 3-14: HSV transform
Figure 3-15: Copy-paste
Figure 4-1: Training results of the original YOLOv5 and the proposed approach
Figure 4-2: Loss functions of the original YOLOv5
Figure 4-3: Loss functions of the approach adopted in this thesis
Figure 4-4: Confusion matrix of the model without data augmentation
Figure 4-5: Confusion matrix of the model with default-parameter data augmentation
Figure 4-6: Confusion matrix after adding generated samples to default-parameter augmentation training
Figure 4-7: Confusion matrix of the image-scaling model
Figure 4-8: Confusion matrix of the image-scaling model trained with added generated samples
Figure 4-9: Confusion matrix of the mosaic model
Figure 4-10: Confusion matrix of the mosaic model trained with added generated samples
Figure 4-11: Confusion matrix of the translation model
Figure 4-12: Confusion matrix of the translation model trained with added generated samples
Figure 4-13: Detection results with ground-truth dataset labels
Figure 4-14: Detection results of image scaling on the validation set
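The augmentation techniques illustrated in Figures 3-6 through 3-15 correspond to hyperparameters in the ultralytics/yolov5 repository's hyperparameter files (e.g. hyp.scratch-low.yaml). A sketch of the relevant keys follows, with the repository's typical default values for illustration rather than the settings actually used in the thesis:

```yaml
# YOLOv5 augmentation hyperparameters (illustrative values, not the thesis settings)
hsv_h: 0.015      # HSV hue shift (Fig. 3-14)
hsv_s: 0.7        # HSV saturation shift
hsv_v: 0.4        # HSV value shift
degrees: 0.0      # rotation (degrees)
translate: 0.1    # translation (Fig. 3-10)
scale: 0.5        # scaling (Fig. 3-9)
shear: 0.0        # shear (Fig. 3-11)
perspective: 0.0  # perspective transform (Fig. 3-7)
flipud: 0.0       # vertical flip probability (Fig. 3-12)
fliplr: 0.5       # horizontal flip probability (Fig. 3-13)
mosaic: 1.0       # mosaic probability (Fig. 3-8)
mixup: 0.0        # mixup probability (Fig. 3-6)
copy_paste: 0.0   # copy-paste probability (Fig. 3-15)
```

Setting a single key to a nonzero value while zeroing the others is one way to isolate a single augmentation technique, as the thesis's single-augmentation experiments require.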


List of Tables

Table 3-1: Example of the YOLO label format
Table 3-2: Elements of a binary confusion matrix
Table 4-1: Effect of data augmentation on network training
Table 4-2: Default data augmentation parameters of YOLOv5
Table 4-3: Effect of data augmentation on network training after adding generated samples
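Table 3-2 and the confusion matrices of Chapter 4 rest on the standard binary confusion-matrix elements; precision and recall, the building blocks of the mAP metric used in Section 4.1, follow directly from them. A minimal sketch (the counts here are made up for illustration, not thesis data):

```python
# Binary confusion-matrix elements (illustrative counts, not thesis data).
tp, fp, fn, tn = 90, 10, 15, 85

precision = tp / (tp + fp)  # fraction of predicted positives that are correct
recall = tp / (tp + fn)     # fraction of actual positives that are detected

print(f"precision = {precision:.2f}")  # 0.90
print(f"recall = {recall:.2f}")        # 0.86
```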


References

[1] S. Dick, “Artificial Intelligence,” Harvard Data Science Review, Jul. 2019.
[2] T. M. Mitchell, Machine Learning. New York: McGraw-Hill, 1997.
[3] Y. LeCun, Y. Bengio, and G. Hinton, “Deep Learning,” Nature, vol. 521, no. 7553, pp. 436-444, May 2015.
[4] https://www.ibm.com/smarterplanet/us/en/
[5] https://www.intelligentagri.com.tw/xmdoc/cont?xsmsid=0J164373919378174143
[6] K. O'Shea and R. Nash, “An Introduction to Convolutional Neural Networks,” arXiv preprint arXiv:1511.08458, Dec. 2015.
[7] S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” in International Conference on Machine Learning, Mar. 2015, pp. 448-456.
[8] https://aigeekprogrammer.com/convolutional-neural-network-image-recognition-part-2/
[9] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning Representations by Back-Propagating Errors,” Nature, vol. 323, no. 6088, pp. 533-536, Oct. 1986.
[10] I. Goodfellow et al., “Generative Adversarial Networks,” Communications of the ACM, vol. 63, no. 11, pp. 139-144, Oct. 2020.
[11] Z.-Q. Zhao, P. Zheng, S.-T. Xu, and X. Wu, “Object Detection with Deep Learning: A Review,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 11, pp. 3212-3232, Jan. 2019.
[12] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580-587.
[13] R. Girshick, “Fast R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision, Dec. 2015, pp. 1440-1448.
[14] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” Advances in Neural Information Processing Systems, vol. 28, Dec. 2015.
[15] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision, Oct. 2017, pp. 2961-2969.
[16] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779-788.
[17] R. Xu, H. Lin, K. Lu, L. Cao, and Y. Liu, “A Forest Fire Detection System Based on Ensemble Learning,” Forests, vol. 12, no. 2, p. 217, Feb. 2021.
[18] M. A. Tanner and W. H. Wong, “The Calculation of Posterior Distributions by Data Augmentation,” Journal of the American Statistical Association, vol. 82, no. 398, pp. 528-540, Apr. 1987.
[19] C. Shorten and T. M. Khoshgoftaar, “A Survey on Image Data Augmentation for Deep Learning,” Journal of Big Data, vol. 6, no. 1, pp. 1-48, Jul. 2019.
[20] E. Rich, “What Is Artificial Intelligence?,” Artificial Intelligence, Jan. 1983.
[21] L. Perez and J. Wang, “The Effectiveness of Data Augmentation in Image Classification using Deep Learning,” arXiv preprint arXiv:1712.04621, Dec. 2017.
[22] C. Ledig et al., “Photo-Realistic Single Image Super-Resolution using a Generative Adversarial Network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4681-4690.
[23] W. Lan, J. Dang, Y. Wang, and S. Wang, “Pedestrian Detection based on YOLO Network Model,” in 2018 IEEE International Conference on Mechatronics and Automation (ICMA), Aug. 2018, pp. 1547-1551.
[24] R. Huang, J. Pedoeem, and C. Chen, “YOLO-LITE: A Real-Time Object Detection Algorithm Optimized for Non-GPU Computers,” in 2018 IEEE International Conference on Big Data (Big Data), Dec. 2018, pp. 2503-2510.
[25] R. Hecht-Nielsen, “Theory of the Backpropagation Neural Network,” in Neural Networks for Perception. Elsevier, 1992, pp. 65-93.
[26] K. P. Ferentinos, “Deep Learning Models for Plant Disease Detection and Diagnosis,” Computers and Electronics in Agriculture, vol. 145, pp. 311-318, Feb. 2018.
[27] M. Ebrahimi, M. H. Khoshtaghaza, S. Minaei, and B. Jamshidi, “Vision-Based Pest Detection based on SVM Classification Method,” Computers and Electronics in Agriculture, vol. 137, pp. 52-58, May 2017.
[28] https://github.com/ultralytics/yolov5