Taiwan National Digital Library of Theses and Dissertations (臺灣博碩士論文加值系統)

Detailed Record

Student: 黃雅勤
Student (English): HUANG, YA-CHIN
Title: 物件偵測輔助標示系統-以無螫蜂為例
Title (English): Object Detection-Assisted Marking System for Stingless Bee Monitoring
Advisors: 蔡哲民、曾生元
Advisors (English): TSAI, JER-MIN; TSENG, SHENG-YUAN
Committee member: 陳以德
Committee member (English): CHEN, I-TE
Oral defense date: 2024-06-08
Degree: Master's
Institution: 崑山科技大學
Department: 資訊管理研究所
Discipline: Computing
Field: General Computing
Thesis type: Academic thesis
Publication year: 2024
Graduation academic year: 112 (ROC calendar)
Language: Chinese
Pages: 56
Keywords (Chinese): YOLOv7、YOLOv8、無螫蜂、巢口監測、輔助標示
Keywords (English): YOLOv7; YOLOv8; Stingless Bees; Entrance Monitoring; Assisted Labeling
Usage statistics:
  • Cited by: 0
  • Views: 17
  • Rating:
  • Downloads: 2
  • Bookmarked: 0
Taiwan's endemic stingless bee, Lepidotrigona hoozana, is considered a potential alternative to current commercial beekeeping species because of its small size, strong adaptability, and high propolis yield. Its unusual lifestyle, however, makes artificial cultivation difficult. To better understand how stingless bees live, this study applies IoT and deep learning technologies to hive monitoring in the field: environmental and image data are collected from wild hive boxes in real time, and deep learning models count bees to track colony development. Training such models requires collecting and labeling large numbers of images, which is labor-intensive. To address this, the study developed an object detection-assisted labeling system that pre-labels stingless bee images and helps users refine the labels through a web interface, making model training more intuitive; transfer learning is used to reduce the amount of training data required and to speed up training. Experiments show that assisted labeling alone saves about 18% of manual labeling time. With the full workflow established in this study, YOLOv7 saved 43% of image labeling time and 40% of model building time and reached 70% detection accuracy, while YOLOv8 saved 60% of labeling time and 58% of model building time and reached 72.1% accuracy. The approach can be extended to monitoring other species in the future.
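The loop the abstract describes (fine-tune a pretrained detector on a small labeled set, then use it to pre-label new frames for human correction) can be sketched roughly as follows. This is a minimal illustration assuming the Ultralytics YOLOv8 Python API; the checkpoint "yolov8n.pt", the dataset file "bees.yaml", the directories, and all thresholds and epoch counts are hypothetical placeholders, not the thesis's actual code, which runs behind a web interface.

# Minimal sketch of the pre-label/fine-tune loop (assumptions: Ultralytics
# YOLOv8 API; "bees.yaml", "unlabeled_frames/", conf=0.25, freeze=10, and
# epochs=50 are illustrative placeholders, not the thesis's configuration).
from pathlib import Path
from ultralytics import YOLO

# Transfer learning: start from COCO-pretrained weights and fine-tune on a
# small, manually labeled stingless-bee dataset; freezing the first backbone
# layers reduces the data and training time needed, as the abstract notes.
model = YOLO("yolov8n.pt")
model.train(data="bees.yaml", epochs=50, imgsz=640, freeze=10)

# Assisted labeling: run the fine-tuned detector over unlabeled hive-entrance
# frames and write YOLO-format label files ("class x_center y_center width
# height", all normalized to [0, 1]) for a human to review and correct.
out_dir = Path("prelabels")
out_dir.mkdir(exist_ok=True)
for result in model.predict(source="unlabeled_frames/", conf=0.25):
    rows = []
    for box in result.boxes:
        x, y, w, h = box.xywhn[0].tolist()  # normalized box center and size
        rows.append(f"{int(box.cls.item())} {x:.6f} {y:.6f} {w:.6f} {h:.6f}")
    (out_dir / (Path(result.path).stem + ".txt")).write_text("\n".join(rows))

In a loop like this the detector only seeds the labels; the time savings reported in the abstract come from humans correcting pre-drawn boxes rather than drawing every box from scratch.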
Table of Contents:
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Tables
List of Figures
1. Introduction
1.1 Research Background
1.2 Research Motivation and Objectives
1.3 Thesis Organization
2. Literature Review
2.1 Deep Learning
2.2 Object Detection
2.3 The YOLO Algorithm
2.4 Transfer Learning
2.5 Image Labeling
2.6 Model Evaluation Metrics
2.6.1 Confusion Matrix
2.6.2 Mean Average Precision (mAP)
2.6.3 Intersection over Union (IoU)
2.7 Smart Monitoring
2.8 Hardware Platforms for Deep Learning
3. System Architecture
3.1 System Architecture
3.2 Audio/Video Streaming Subsystem
3.3 Web-Based Assisted Labeling Subsystem
3.3.1 Web Labeling Module
3.3.2 Assisted Labeling Module
3.4 Deep Learning Training Control Subsystem
3.5 Monitoring Subsystem
4. Experiments and Discussion
4.1 Experimental Environment and Materials
4.2 Video Streaming Subsystem Display and Load Analysis
4.3 Web-Based Assisted Labeling Subsystem Analysis
4.3.1 Web Labeling Module Analysis
4.3.2 Assisted Labeling Module Analysis
4.4 Deep Learning Training Control Subsystem Results and Analysis
4.5 End-to-End Build Time Analysis
4.6 Monitoring Subsystem Results and Analysis
5. Conclusion
References


