

Author (English): YEH, YU-LING
Title (English): Research on the Application of Unmanned Aerial Vehicle Combined with Image Recognition for Epidemic Prevention in Leisure Farms
Keywords (English): Deep Learning; Mask Wearing; Drone; Image Recognition; Multiple Object Tracking (MOT)
In recent years, the spread of the COVID-19 pandemic has caused large numbers of people to fall ill and even die. Leisure farms, however, remain one of the sources of income for farmers: although the government has reopened the tourism industry to boost the economy, the spread of the epidemic still needs to be controlled.
This study therefore obtains video from a drone and applies a deep-learning-based object detection method, combined with the issue of mask wearing, to recognize and analyze people in the real-time video. When the distance between the people being filmed and the drone is too great, it is impossible to identify whether they are wearing masks. The system therefore first detects whether there are people in the video and extracts their information. Based on the positions and framed areas of the people, combined with a multiple object tracking (MOT) method, the best movement direction is calculated through a novel formula. The drone then moves on its own until it can determine whether everyone in the real-time video is wearing a mask correctly.
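The abstract does not reproduce the thesis's direction formula, but the idea it describes — steering the drone from the positions and framed areas of detected people — can be sketched roughly as follows. The `Box` class, the `min_area` threshold, and the coarse `advance`/`hold`/`hover` actions are hypothetical stand-ins, not the author's actual method.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box of a detected person (pixel coordinates)."""
    x1: float
    y1: float
    x2: float
    y2: float

    @property
    def centroid(self):
        return ((self.x1 + self.x2) / 2.0, (self.y1 + self.y2) / 2.0)

    @property
    def area(self):
        return (self.x2 - self.x1) * (self.y2 - self.y1)

def movement_command(boxes, frame_w, frame_h, min_area=9000.0):
    """Pick a coarse drone movement from the detected person boxes.

    If the mean framed area is below min_area, the people are assumed to be
    too far away to judge mask wearing, so the drone should advance toward
    the mean centroid; otherwise it holds position. Returns (action, dx, dy)
    with dx, dy in [-1, 1] relative to the frame center.
    """
    if not boxes:
        return ("hover", 0.0, 0.0)
    cx = sum(b.centroid[0] for b in boxes) / len(boxes)
    cy = sum(b.centroid[1] for b in boxes) / len(boxes)
    dx = (cx - frame_w / 2.0) / (frame_w / 2.0)  # +dx: targets are to the right
    dy = (cy - frame_h / 2.0) / (frame_h / 2.0)  # +dy: targets are lower in frame
    mean_area = sum(b.area for b in boxes) / len(boxes)
    action = "advance" if mean_area < min_area else "hold"
    return (action, dx, dy)
```

In a real control loop this command would be translated into velocity setpoints for the flight controller and re-evaluated on every detection frame.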
This study divided the dataset into four classes: people, wearing a mask, not wearing a mask, and wearing a mask incorrectly. For the fourth class, if someone is wearing a mask but it sits below the nose or is pulled under the chin, the system automatically judges that the mask is not worn correctly. The experimental results confirmed that the system can effectively enable the drone to automatically track all the people in the video and detect their mask-wearing status, so that people who need to be corrected can be dealt with further. The average precision (AP) of the four classes is 84.8%, 92.2%, 77.3%, and 76.3% respectively, and the mean average precision (mAP) is 82.7%.
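The reported mAP is simply the unweighted arithmetic mean of the four per-class AP values, which can be checked directly (the class names used as dictionary keys are illustrative):

```python
def mean_average_precision(ap_per_class):
    """mAP is the unweighted mean of per-class average precision."""
    return sum(ap_per_class.values()) / len(ap_per_class)

# Per-class AP values (%) reported in the abstract
aps = {"people": 84.8, "mask": 92.2, "no_mask": 77.3, "incorrect_mask": 76.3}
# (84.8 + 92.2 + 77.3 + 76.3) / 4 = 330.6 / 4 = 82.65, rounded to the
# reported 82.7%
```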
Because people stand at different positions and angles, this system can cover blind spots that fixed CCTV cameras cannot capture. Finally, the detection results can be summarized, and the framed images of people who are not wearing a mask or are wearing one incorrectly are saved as detection records. In summary, dynamic patrolling with drones combined with neural-network-based object detection is an excellent way to manage mask-wearing detection on leisure farms.
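The record-keeping step described above — cropping and saving the framed region for anyone not wearing a mask or wearing one incorrectly — can be sketched as below. The class labels, the tuple layout of a detection, and the use of raw NumPy arrays in place of an encoded image format are all assumptions made for illustration.

```python
import os
import numpy as np

# Hypothetical class names for the two conditions that must be recorded
FLAGGED = {"no_mask", "incorrect_mask"}

def save_detection_records(frame, detections, out_dir="records"):
    """Crop and save the framed image of each flagged person.

    frame: H x W x 3 image array; detections: (label, x1, y1, x2, y2) tuples
    in pixel coordinates. Returns the list of file paths written.
    """
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i, (label, x1, y1, x2, y2) in enumerate(detections):
        if label not in FLAGGED:
            continue  # correctly masked people are not recorded
        crop = frame[int(y1):int(y2), int(x1):int(x2)]
        path = os.path.join(out_dir, f"{label}_{i}.npy")
        np.save(path, crop)  # stand-in for an image-encoding call
        paths.append(path)
    return paths
```

In practice each record would typically also carry a timestamp and the drone's position so that flagged visitors can be located and reminded.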
Acknowledgments
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Research Background and Motivation
1.2 Research Objectives and Significance
1.3 Review of Domestic and International Literature
Chapter 2 Theory and Methods
2.1 Deep Learning
2.1.1 Convolution Layer
2.1.2 Activation Layer
2.1.3 Pooling Layer
2.1.4 Fully Connected Layer
2.2 YOLO
2.2.1 YOLOv8
2.3 Object Tracking
2.3.1 Target Movement Direction Tracking
2.3.2 Centroid Tracking
2.3.3 Multiple Object Tracking (MOT)
2.3.4 Drone Tracking
Chapter 3 Experimental Architecture and Procedure
3.1 Software
3.2 Hardware
3.2.1 Drone
3.2.2 Camera
3.2.3 Hardware System Architecture
3.3 System Architecture and Workflow
Chapter 4 Experimental Results and Discussion
4.1 Research Results
4.1.1 Dataset
4.1.2 Model Evaluation
4.1.3 Object Detection
4.1.4 Target Tracking
4.1.5 Multiple Object Tracking
4.2 Discussion
4.2.1 Error Analysis of Object Detection
4.2.2 Comparison of Multiple Object Tracking Methods
4.2.3 Relationship Between Multiple Object Tracking Methods and the Number of Objects
Chapter 5 Conclusions and Future Work
5.1 Conclusions
5.2 Future Work
References
