臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 徐莨智
Author (English): HSU, LANG-CHIH
Title: 應用於蜂箱監測之物件標示系統開發
Title (English): Development of objects labelImg system for beehive monitoring
Advisor: 蔡哲民
Advisor (English): TSAI, JER-MIN
Committee members: 王建仁、陳以德
Committee members (English): WANG, CHIEN-JEN; CHEN, I-Te
Oral defense date: 2021-06-12
Degree: Master's
Institution: 崑山科技大學 (Kun Shan University)
Department: 資訊工程研究所
Discipline: Engineering (工程學門)
Field: Electrical and Computer Engineering (電資工程學類)
Thesis type: Academic thesis
Year of publication: 2021
Graduating academic year: 109
Language: Chinese
Pages: 69
Keywords (Chinese): 深度學習、自動標示、蜂箱監測、引導程序、數量擴增、YOLO演算法
Keywords (English): Deep learning; Automatic labeling; Beehive monitoring; Bootstrap; Number augment; YOLO
Although the bee is only an inconspicuous creature in nature, it is a vital contributor to maintaining the natural ecological balance and agricultural ecology.
In recent years, under human over-exploitation of nature, the climate has changed drastically, causing bee populations around the world to collapse and disappear on a large scale. Scientists call this phenomenon "colony collapse disorder" (CCD).
Monitoring images of the hive entrance to determine the number and status of worker bees leaving and returning to the hive is an important technique for studying colony collapse disorder. However, large numbers of bees often crowd the hive entrance, making them difficult to detect and label effectively. In recent years, deep learning has become capable of effective object detection; however, training an effective object-detection model requires a large number of labeled training samples, which demands considerable manpower. This thesis develops an object labeling system for beehive monitoring that addresses the situation of having abundant hive-entrance images but too little manpower to label the bees in them, which otherwise prevents training a highly accurate deep learning model. The system tests three algorithms for replacing manual labeling of large image sets: (1) traditional image recognition, (2) deep learning trained on manually labeled images, and (3) the deep learning method proposed in this thesis, which uses bootstrap and number augmentation of labeled images. The third algorithm additionally uses bee object size as a filter to eliminate unsuitable labels.
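The bootstrap-and-augment idea can be sketched in code. The following is a minimal illustrative sketch, not code from the thesis: train_model, detect_objects, flip_image, and flip_boxes are hypothetical callables (a YOLOv3-style trainer and detector would fill these roles), and the confidence and size thresholds are placeholders.

    # Illustrative sketch of the proposed bootstrap-and-augment labeling loop.
    # Helper names and thresholds are hypothetical stand-ins; only the control
    # flow is the point.

    def plausible_bee_size(box, min_area=400, max_area=6000):
        """Size filter: reject boxes far from a typical bee's area (placeholder bounds)."""
        x1, y1, x2, y2 = box
        return min_area <= (x2 - x1) * (y2 - y1) <= max_area

    def number_augment(labeled, flip_image, flip_boxes):
        """'Number augmentation': enlarge the labeled set with transformed copies
        (here a single horizontal flip per image)."""
        return labeled + [(flip_image(img), flip_boxes(boxes))
                          for img, boxes in labeled]

    def bootstrap_labels(seed, unlabeled, target, train_model, detect_objects,
                         flip_image, flip_boxes, conf_threshold=0.5):
        """Grow a small manually labeled seed set toward `target` labeled images."""
        labeled = list(seed)                      # (image, boxes) pairs labeled by hand
        while len(labeled) < target and unlabeled:
            model = train_model(labeled)          # retrain on the current labels
            remaining = []
            for img in unlabeled:
                boxes = [b for b, conf in detect_objects(model, img)
                         if conf >= conf_threshold and plausible_bee_size(b)]
                if boxes:
                    labeled.append((img, boxes))  # promote auto-labeled image
                else:
                    remaining.append(img)
            if len(remaining) == len(unlabeled):
                break                             # no new labels this round; stop
            unlabeled = remaining
        return number_augment(labeled, flip_image, flip_boxes)

Each round, the model trained on the current labels auto-labels more images, and only detections passing the confidence and size filters are promoted into the training set, so labeling quality is maintained while the set grows.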
Using 250 manually labeled images as the initial training set, the proposed algorithm increased the number of labeled images to 5,000, and the IoU accuracy of the resulting model reached 48.56%, exceeding the 45.57% of the model trained only on human-labeled images (the second algorithm). In addition, starting from only 15 manually labeled images and applying several iterations of bootstrap and number augmentation to grow the labeled set to 2,500 images, the trained model achieved 41.58% IoU accuracy. When the proposed algorithm was applied to detecting human-faced spiders and parasitic spiders, the IoU accuracy reached 77.2%.
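The IoU (Intersection over Union) accuracy cited above measures how well a predicted bounding box overlaps a ground-truth box: the area of their intersection divided by the area of their union. A minimal reference implementation for axis-aligned boxes follows; this is the standard formula, not code taken from the thesis.

    def iou(a, b):
        """IoU of two axis-aligned boxes (x1, y1, x2, y2) with x1 < x2, y1 < y2."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    # Example: identical boxes give IoU 1.0; disjoint boxes give 0.0.
    assert iou((0, 0, 10, 10), (0, 0, 10, 10)) == 1.0
    assert iou((0, 0, 10, 10), (20, 20, 30, 30)) == 0.0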
This thesis constructs an automatic bee labeling system that saves manpower and improves accuracy. In the future, we will use this technique to build a tracking and monitoring system and integrate it with the various sensor readings of the digital beehive developed by our research group, yielding an automatic monitoring tool for understanding the bees' environment and behavior patterns.

Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Tables
List of Figures
List of Symbols
1. Introduction
1.1 Background
1.2 Motivation and Objectives
1.3 Thesis Organization
2. Literature Review
2.1 Traditional Image Recognition
2.1.1 Histogram
2.1.2 Color Region Distribution Features
2.1.3 Color Normalization
2.1.4 Angle Normalization
2.1.5 Size Normalization
2.2 Deep Learning
2.2.1 CNN (Convolutional Neural Network)
2.2.2 LeNet-5
2.2.3 AlexNet
2.2.4 ResNet
2.2.5 R-CNN (Regions with CNN features)
2.2.6 YOLO (You Only Look Once)
2.3 Image Labeling
2.4 IoU (Intersection over Union)
3. System Architecture
3.1 Construction Workflow of the AI Image Tracking System
3.2 Automatic Image Object Labeling System
3.3 Automatic Labeling Based on Traditional Image Recognition
3.3.1 Color Histogram
3.3.2 Color Region Distribution Features
3.4 Automatic Labeling Based on Deep Learning
3.4.1 CNN
3.4.2 YOLOv3
3.5 Bootstrap and Number Augmentation
4. Experiments and Discussion
4.1 Experimental Environment and Materials
4.2 Automatic Labeling Based on Traditional Image Recognition
4.2.1 Color Histogram
4.2.2 Color Region Distribution Features
4.2.3 Summary
4.3 Automatic Labeling Based on Deep Learning
4.3.1 CNN
4.3.2 YOLOv3
4.3.3 Summary
4.4 Bootstrap and Number Augmentation
4.5 Experiments on Other Species
4.6 Summary
5. Conclusion
References
Appendix 1
Appendix 2
Appendix 3
