Researcher: 蘇筠凱
Researcher (English): Su, Yun-Kai
Thesis Title: 基於多尺度特徵之稻作害蟲監測與預警系統
Thesis Title (English): A Monitoring and Forewarning System for Rice Pests via Multi-Scale Feature Mixture
Advisor: 王聖智
Advisor (English): Wang, Sheng-Jyh
Committee Members: 蕭旭峰、彭文孝
Committee Members (English): Hsiao, Hsu-Feng; Peng, Wen-Hsiao
Date of Oral Defense: 2018-10-09
Degree: Master's
University: 國立交通大學 (National Chiao Tung University)
Department: 電子研究所 (Institute of Electronics)
Discipline: Engineering
Academic Field: Electrical Engineering and Computer Science
Thesis Type: Academic thesis
Year of Publication: 2018
Graduation Academic Year: 107
Language: English
Number of Pages: 56
Keywords (Chinese): 影像辨識、物體偵測
Keywords (English): Image recognition, Object detection
Statistics:
  • Times Cited: 0
  • Views: 105
  • Downloads: 0
  • Bookmarked: 0
Abstract (Chinese, translated): Planthoppers are important pests worldwide. They are migratory and can overwinter directly in Taiwan. Beyond the direct damage they cause by sucking sap from the crop, some species, such as the white-backed planthopper, can transmit rice black-streaked dwarf disease. How to spray pesticide effectively against these pests while reducing the harm of agrochemicals to the land and to human health has therefore become an important issue.
In this thesis, we propose a monitoring system for rice pests that combines traditional image processing with deep learning to count the pests; pesticide is sprayed only when the count exceeds a threshold, so the pests are controlled effectively while pesticide residue is reduced. In the first stage, traditional image processing locates the plant and removes the surrounding non-plant regions and the soil, and the plant region is passed to the second stage for recognition. The second stage uses a deep neural network to locate and count the pests. It builds on the multi-scale architectures of Faster R-CNN and the Feature Pyramid Network, which detect objects at multiple scales from features of different resolutions; we add a new structure that mixes features of different scales at the same resolution, together with a negative-data training strategy, which yields a clear improvement in both precision and recall.
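The abstract only says that the first stage uses traditional image processing to keep the plant and discard soil and background (the thesis details the actual method in its Plant Locator and Plant-Ground Junction Detector sections). The snippet below is a minimal sketch of that idea, assuming a simple HSV green-range mask with OpenCV; the thresholds and the function name locate_plant are illustrative assumptions, not taken from the thesis.

```python
# Illustrative sketch only (OpenCV >= 4): isolate the plant region from soil/
# background with a green-range mask, as a stand-in for the thesis's first stage.
import cv2
import numpy as np

def locate_plant(image_bgr: np.ndarray) -> np.ndarray:
    """Return a crop around the largest green (plant-like) region of the image."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Hypothetical green range; thresholds would need tuning for real field images.
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    # Close small gaps so the plant forms one connected blob.
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return image_bgr  # fall back to the full frame if no plant-like region is found
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return image_bgr[y:y + h, x:x + w]
```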
Abstract (English): The rice planthopper is a globally important migratory pest. It can overwinter in Taiwan and damages rice plants directly, and some species also transmit diseases that cause immense losses to the agriculture industry. Building a system that detects rice planthoppers, so that pesticide can be sprayed only when needed and damage is minimized, is therefore an important topic.
In this thesis, we develop a rice pest monitoring and forewarning system that locates pests and counts their number. In the first phase, we use traditional image processing techniques to separate the plant from the ground and keep the major part of the plant for the second phase, which uses deep learning to locate the pests and determine their species. Our model is based on Faster R-CNN and the Feature Pyramid Network (FPN), which handle multi-scale object detection by predicting objects of different scales from features of different resolutions. We propose a Mixture block that combines multi-scale features at the same resolution to provide better features for prediction. Additionally, we use a negative-data training strategy to handle hard negatives. As a result, both precision and recall improve noticeably.
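The abstract names a Mixture block that mixes features of different scales at the same resolution inside an FPN-based detector, but does not give its layers. The PyTorch module below is only a hedged sketch of one way such a block could look, assuming parallel dilated 3x3 convolutions whose outputs are concatenated and fused by a 1x1 convolution; the class name MixtureBlock and the dilation rates are illustrative assumptions, not the thesis design. Dilated branches are used here because they change the receptive-field scale without changing the spatial resolution, which is the property the abstract emphasizes.

```python
# Illustrative sketch only: one possible "mix several scales at one resolution" block.
import torch
import torch.nn as nn

class MixtureBlock(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # Parallel 3x3 convolutions with different dilations cover different scales
        # while keeping the same spatial resolution as the input feature map.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        # Fuse the concatenated branches back to the original channel width.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))

# Example: mix one 256-channel FPN level of size 64x64.
if __name__ == "__main__":
    level = torch.randn(1, 256, 64, 64)
    print(MixtureBlock(256)(level).shape)  # torch.Size([1, 256, 64, 64])
```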
Chapter 1 Introduction 1
1.1 Project Description and Goal 1
1.2 Motivation 2
1.3 Contribution 3
1.4 Organization 3
Chapter 2 Background and Related Works 4
2.1 Background Knowledge 4
2.1.1 Convolutional Neural Network 4
2.1.2 Receptive Field 6
2.2 Related Works about Object Detection 8
2.2.1 Fast R-CNN 9
2.2.2 Faster R-CNN 10
2.2.3 Single Shot MultiBox Detector (SSD) 12
2.2.4 Feature Pyramid Network (FPN) 13
2.2.5 Local Difference Pooling 14
Chapter 3 Data and Preprocessing 16
3.1 Data 16
3.1.1 Data Statistics 18
3.1.2 Data Preprocessing 21
3.1.3 Data Augmentation 23
3.2 Preprocessing 25
3.2.1 Plant Locator 25
3.2.2 Plant-Ground Junction Detector 26
Chapter 4 Proposed Model 32
4.1 Basic Setting 32
4.2 Model Reduction 33
4.3 Customized Adjustment 35
4.4 Negative Training 38
4.5 Parameter Reduction 40
4.6 Multi-Scale Feature Mixture 42
Chapter 5 Experimental Result 49
5.1 Evaluation Metrics 49
5.2 Precision-Recall Curve 50
5.3 Comparison of Results 52
Chapter 6 Conclusion 54
Bibliography 55
[1] W.-R. Lin, “A Monitoring and Forewarning System for Rice Pests,” M.S. thesis, Electronics Engineering, National Chiao Tung University, 2017.
[2] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” in Proc. European Conference on Computer Vision (ECCV), 2016, pp. 21-37.
[3] “Insecticide Resistance Action Committee,” http://www.irac-online.org/pests/nilaparvata-lugens/.
[4] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems (NIPS), 2015, pp. 91-99.
[5] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” in Proc. IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2980-2988.
[6] T.-Y. Lin, P. Dollár, R. B. Girshick, K. He, B. Hariharan, and S. J. Belongie, “Feature pyramid networks for object detection,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[7] R. Girshick, “Fast R-CNN,” in Proc. IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440-1448.
[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (NIPS), 2012, pp. 1097-1105.
[9] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[10] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1-9.
[11] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778.
[12] “Convolutional Neural Network - MathWorks,” https://www.mathworks.com/solutions/deep-learning/convolutional-neural-network.html.
[13] “Deep learning for complete beginners: convolutional neural networks with keras,” https://cambridgespark.com/content/tutorials/convolutional-neural-networks-with-keras/index.html.
[14] “A guide to receptive field arithmetic for Convolutional Neural Networks,” https://medium.com/mlreview/a-guide-to-receptive-field-arithmetic-for-convolutional-neural-networks-e0f514068807.
[15] W. Luo, Y. Li, R. Urtasun, and R. Zemel, “Understanding the effective receptive field in deep convolutional neural networks,” in Advances in Neural Information Processing Systems (NIPS), 2016, pp. 4898-4906.
[16] J. R. Uijlings, K. E. Van De Sande, T. Gevers, and A. W. Smeulders, “Selective search for object recognition,” International Journal of Computer Vision, vol. 104, no. 2, pp. 154-171, 2013.
[17] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 580-587.
[18] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779-788.
[19] N. Bodla, B. Singh, R. Chellappa, and L. S. Davis, “Soft-NMS: Improving object detection with one line of code,” in Proc. IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5562-5570.
[20] K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” in Proc. European Conference on Computer Vision (ECCV), 2014, pp. 346-361.
[21] R. Girshick, G. Gkioxari, K. He, J. Johnson, J. Dai, and P. Dollár, “Instance-level Visual Recognition,” https://instancetutorial.github.io/.
[22] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, pp. 248-255.
[23] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The PASCAL visual object classes (VOC) challenge,” International Journal of Computer Vision, vol. 88, no. 2, pp. 303-338, 2010.
[24] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” in Proc. European Conference on Computer Vision (ECCV), 2014, pp. 740-755.
[25] M. D. Fairchild, Color Appearance Models. John Wiley & Sons, 2013.
[26] “TensorFlow Object Detection API,” https://github.com/tensorflow/models/tree/master/research/object_detection.
[27] S. Zhang, L. Wen, X. Bian, Z. Lei, and S. Z. Li, “Single-shot refinement neural network for object detection,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
Electronic Full Text: publicly available online from 2021-10-14.