臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 劉宇桓
Author (English): LIU, YU-HUAN
Title: 以YOLOv4為基礎的火焰偵測技術
Title (English): YOLOv4-Based Fire Detection
Advisor: 陳良華
Advisor (English): CHEN, LIANG-HUA
Committee Members: 傅楸善、李瑞庭
Committee Members (English): FUH, CHIOU-SHANN; LEE, JUI-TING
Oral Defense Date: 2023-07-11
Degree: Master
Institution: 輔仁大學 (Fu Jen Catholic University)
Department: 資訊工程學系碩士班 (Master's Program, Department of Computer Science and Information Engineering)
Discipline: Engineering
Academic Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2023
Graduation Academic Year: 111 (2022–2023)
Language: Chinese
Pages: 35
Keywords (Chinese): 深度學習、物件偵測、YOLOv4
Keywords (English): Deep Learning, Object Detection, YOLOv4
Statistics:
  • Cited by: 0
  • Views: 125
  • Rating: (none)
  • Downloads: 30
  • Bookmarked: 0
Fire is a common type of disaster. Whether caused by human activity or by natural events, it often results in serious losses, and early detection can prevent most of the loss of life and property.
Among deep learning algorithms, the YOLO family of object detection models plays a very important role. This thesis proposes an improved method based on the YOLOv4 algorithm. The original YOLOv4 model consists of three components: the Backbone, which extracts features; the Neck, which performs feature recognition; and the Head, which localizes and labels objects in the image. This study modifies the residual network inside the Backbone, deepening its smallest convolutional unit to strengthen the feature extraction capability of the overall YOLOv4 architecture. The modified architecture not only improves fire detection accuracy but also recognizes flame images more stably than the original. The proposed architecture was validated experimentally, demonstrating its feasibility in both theory and practice. First, 1,600 images were collected, and the flame locations in each image were labeled manually. Next, the backbone of the network was modified as described. Finally, the model was trained on this dataset for 20,000 iterations. Experimental results show that the proposed method achieves higher accuracy than the original YOLOv4.
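The abstract's core modification (deepening the smallest convolutional unit inside a Backbone residual block) can be sketched in miniature. The sketch below is purely illustrative: the dense matrices standing in for 1x1/3x3 convolutions, the leaky-ReLU slope, and all names and dimensions are assumptions for demonstration, not the thesis's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv(x, w):
    """Stand-in for a convolution followed by a leaky-ReLU activation.
    A dense matrix multiply replaces the real 1x1/3x3 convolutions."""
    z = w @ x
    return np.where(z > 0, z, 0.1 * z)

def residual_unit(x, weights):
    """A residual unit: y = x + F(x), where F stacks the given layers."""
    f = x
    for w in weights:
        f = conv(f, w)
    return x + f

d = 8  # toy channel dimension
x = rng.standard_normal(d)

# Original YOLOv4-style unit: F is two layers (a 1x1 conv then a 3x3 conv).
w_original = [rng.standard_normal((d, d)) for _ in range(2)]
# Deepened unit in the spirit of the thesis: F gains one extra layer.
w_deepened = w_original + [rng.standard_normal((d, d))]

y_original = residual_unit(x, w_original)  # baseline residual unit
y_deepened = residual_unit(x, w_deepened)  # deepened residual unit
```

The skip connection keeps the input path intact, so deepening F changes only the residual branch; the output shape is unchanged, which is why such a modification can strengthen feature extraction without altering the rest of the YOLOv4 pipeline.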
Abstract (Chinese) I
Abstract (English) II
Table of Contents III
List of Tables V
List of Figures VI
Chapter 1 Introduction 1
1.1 Research Motivation and Objectives 1
1.2 Thesis Organization 2
Chapter 2 Literature Review 3
Chapter 3 System Architecture 13
3.1 Overview of Object Detection 13
3.2 Overview of YOLOv4 15
3.3 Data Collection and Labeling 18
3.4 Proposed Architecture 21
Chapter 4 Experimental Results 25
4.1 Experimental Environment 25
4.2 Experimental Results 26
Chapter 5 Conclusion 34
References 35
[1] Bochkovskiy, A., et al., "YOLOv4: Optimal speed and accuracy of object detection," arXiv preprint arXiv:2004.10934, 2020.

[2] Zhong, Z., et al., "A convolutional neural network-based flame detection method in video sequence," Signal, Image and Video Processing, vol. 12, pp. 1619-1627, 2018.

[3] Muhammad, K., et al., "Convolutional neural networks based fire detection in surveillance videos," IEEE Access, vol. 6, pp. 18174-18183, 2018.

[4] Wang, S., et al., "Forest fire detection based on lightweight YOLO," 2021 33rd Chinese Control and Decision Conference (CCDC), IEEE, 2021.

[5] Cao, C., et al., "Study of flame detection based on improved YOLOv4," Journal of Physics: Conference Series, IOP Publishing, 2021.

[6] Roy, A. M., et al., "A fast accurate fine-grain object detection model based on YOLOv4 deep neural network," arXiv preprint arXiv:2111.00298, 2021.

[7] "[Paper Notes] YOLOv4." [Online]. Available: https://ivanfang.coderbridge.io/2021/06/17/fang-yolov4/

[8] "Deep learning: what is one-stage and what is two-stage object detection." [Online]. Available: https://chih-sheng-huang821.medium.com/深度學習-什麼是one-stage-什麼是two-stage-物件偵測-fc3ce505390f

[9] "Deep feature fusion: understanding add and concat for multi-layer feature fusion." [Online]. Available: https://blog.csdn.net/xys430381_1/article/details/88355956