National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Detailed Record

Researcher: 葉昱翎
Researcher (English): YEH, YU-LING
Thesis Title: 無人機結合影像辨識應用於觀光農場防疫之研究
Thesis Title (English): Research on the Application of Unmanned Aerial Vehicle Combined with Image Recognition for Epidemic Prevention in Leisure Farms
Advisor: 林宸生
Oral Defense Committee: 張興政, 賴雲龍
Defense Date: 2023-12-26
Degree: Master's
Institution: 逢甲大學 (Feng Chia University)
Department: 自動控制工程學系 (Department of Automatic Control Engineering)
Discipline: Engineering
Field of Study: Electrical and Information Engineering
Thesis Type: Academic Thesis
Publication Year: 2024
Graduation Academic Year: 112
Language: Chinese
Pages: 53
Keywords (Chinese): 深度學習, 口罩佩戴, 無人機, 影像辨識, 多目標追蹤
Keywords (English): Deep Learning, Mask Wearing, Drone, Image Recognition, Multiple Object Tracking (MOT)
Statistics:
  • Cited by: 0
  • Views: 179
  • Downloads: 51
  • Bookmarked: 0
In recent years, the spread of COVID-19 has sickened and even killed large numbers of people. Leisure farms, however, remain one of the main sources of income for farmers, and although the government has reopened the tourism industry to boost the economy, controlling the spread of the epidemic is still essential.
This study therefore acquires video with the aid of a drone and applies a deep learning neural network-based object detection method to the mask-wearing problem, recognizing and analyzing the people in the real-time video. When a person is too far from the drone for the system to tell whether a mask is being worn, the system first detects whether any people are present in the frame. Using each detected person's coordinates and bounding-box area, combined with a Multiple Object Tracking (MOT) method, an innovative formula computes the best movement direction, and the drone repositions itself until it can judge whether every person in the real-time video is wearing a mask correctly. The sketch after this paragraph illustrates such a detect-track-steer loop.
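The record does not give the study's steering formula, so the following is only a minimal sketch, assuming the Ultralytics YOLOv8 API and a hypothetical custom weights file mask_yolov8.pt: it runs detection with built-in multi-object tracking on each frame and derives a coarse movement command from the boxes' mean centroid and largest area. The center-offset and area thresholds are illustrative stand-ins for the thesis's formula.

```python
# Minimal sketch of the detect-track-steer loop, assuming the
# Ultralytics YOLOv8 API; "mask_yolov8.pt" is a hypothetical custom
# model, and the thresholds below stand in for the thesis's formula.
from ultralytics import YOLO

model = YOLO("mask_yolov8.pt")  # hypothetical custom weights

def steer_command(frame, conf=0.5):
    """Return a coarse movement command for the drone from one frame."""
    h, w = frame.shape[:2]
    # persist=True keeps track IDs stable across frames (MOT).
    results = model.track(frame, conf=conf, persist=True, verbose=False)
    boxes = results[0].boxes
    if boxes is None or len(boxes) == 0:
        return "hover"  # no people detected yet
    cx_sum = 0.0
    max_area = 0.0
    for x1, y1, x2, y2 in boxes.xyxy.tolist():
        cx_sum += (x1 + x2) / 2.0
        max_area = max(max_area, (x2 - x1) * (y2 - y1))
    cx = cx_sum / len(boxes)
    # If even the largest box is small, the people are too far away
    # to judge mask wearing, so move closer; otherwise re-center.
    if max_area < 0.02 * w * h:
        return "forward"
    if cx < 0.4 * w:
        return "left"
    if cx > 0.6 * w:
        return "right"
    return "hold"
```

In the actual system, such a command would still have to be translated into velocity set-points through the drone's flight control interface.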
This study divided the dataset into four classes: person, mask worn, no mask, and mask worn incorrectly. For the fourth class, if someone is wearing a mask but it sits below the nose or has been pulled under the chin, the system automatically judges the mask as incorrectly worn. The experimental results confirm that the system can effectively make the drone track all the people in its view automatically and detect their mask-wearing status, so that anyone who needs to be corrected can be dealt with further. The object detection average precision (AP) of the four classes is 84.8%, 92.2%, 77.3%, and 76.3%, respectively, and the mean average precision (mAP) over the classes is 82.7%.
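As a quick sanity check on the reported numbers, the mAP is simply the unweighted mean of the four per-class AP values:

```python
# mAP as the unweighted mean of the four per-class AP values above.
ap = [84.8, 92.2, 77.3, 76.3]
map_value = round(sum(ap) / len(ap), 2)
print(f"mAP = {map_value}%")  # 82.65%, i.e. 82.7% to one decimal
```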
Because people stand at different positions and angles, the system avoids the blind spots that a fixed CCTV camera cannot cover. Finally, the detection results are summarized, and the cropped images of people judged to be unmasked or incorrectly masked are saved as detection records, as sketched below. In summary, dynamic patrolling with a drone combined with neural network-based object detection is an excellent way to manage mask wearing in leisure farms.
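The record does not say how these detection records are stored; the snippet below is one plausible realization, assuming OpenCV and illustrative class indices: each offending bounding box is cropped from the frame and written to a timestamped file.

```python
# Sketch: save crops of people flagged as "no mask" (class 2) or
# "mask worn incorrectly" (class 3) as detection records. The class
# indices and output directory are assumptions for illustration.
import os
import time

import cv2

VIOLATION_CLASSES = {2, 3}
OUT_DIR = "detection_records"

def save_violations(frame, boxes):
    """boxes: iterable of (x1, y1, x2, y2, cls) in pixel coordinates."""
    os.makedirs(OUT_DIR, exist_ok=True)
    for i, (x1, y1, x2, y2, cls) in enumerate(boxes):
        if cls not in VIOLATION_CLASSES:
            continue
        crop = frame[int(y1):int(y2), int(x1):int(x2)]
        if crop.size == 0:
            continue  # skip degenerate boxes
        name = f"{time.strftime('%Y%m%d-%H%M%S')}_{i}_cls{cls}.jpg"
        cv2.imwrite(os.path.join(OUT_DIR, name), crop)
```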
Acknowledgements iii
Chinese Abstract iv
Abstract v
Table of Contents vii
List of Figures ix
List of Tables xi
Chapter 1: Introduction 1
1.1 Research Background and Motivation 1
1.2 Research Objectives and Significance 1
1.3 Review of Related Domestic and Foreign Literature 2
Chapter 2: Research Theory and Methods 5
2.1 Deep Learning 5
2.1.1 Convolution Layer 5
2.1.2 Activation Layer 6
2.1.3 Pooling Layer 7
2.1.4 Fully Connected Layer 7
2.2 YOLO 8
2.2.1 YOLOv8 9
2.3 Object Tracking 9
2.3.1 Target Movement Direction Tracking 10
2.3.2 Centroid Tracking Algorithm 12
2.3.3 Multiple Object Tracking (MOT) 15
2.3.4 Drone Tracking 18
Chapter 3: Experimental Architecture and Procedure 19
3.1 Software 19
3.2 Hardware 19
3.2.1 Drone 19
3.2.2 Camera 20
3.2.3 Hardware System Architecture 21
3.3 System Architecture and Workflow 22
Chapter 4: Experimental Results and Discussion 24
4.1 Research Results 24
4.1.1 Dataset 24
4.1.2 Model Evaluation 24
4.1.3 Object Detection 26
4.1.4 Target Tracking 28
4.1.5 Multiple Object Tracking 30
4.2 Discussion of Results 34
4.2.1 Error Analysis of Object Detection 34
4.2.2 Comparison of Multiple Object Tracking Methods 39
4.2.3 Relationship between Multiple Object Tracking Methods and Object Count 43
Chapter 5: Conclusion and Future Work 47
5.1 Conclusion 47
5.2 Future Work 48
References 49
