
National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Detailed Record

Author: 張家榮
Author (romanized): CHANG, CHIA-RONG
Title: 電腦視覺應用於校園安全可行性研究
Title (English): Feasibility study on the application of computer vision to campus security
Advisor: 蔡明志
Advisor (romanized): TSAI, MING-JHIH
Committee members: 黃曜輝, 盧浩鈞
Committee members (romanized): HUANG, YAO-HUI; LU, HAO-JUN
Oral defense date: 2022-06-06
Degree: Master's
University: 輔仁大學 (Fu Jen Catholic University)
Department: 資訊管理學系碩士在職專班 (In-service Master's Program in Information Management)
Discipline: Computing
Field: General Computing
Document type: Academic thesis
Year of publication: 2022
Graduation academic year: 110 (2021-2022)
Language: Chinese
Number of pages: 76
Keywords (Chinese): 校園安全, 校園侵入, YOLO, DeepFace
Keywords (English): Campus Safety, Campus intrusion, YOLOV4, DeepFace
Among the hidden dangers on campus, besides intrusion incidents, is the disappearance of children, and a large share of such disappearances occur at dismissal time. For most schoolchildren, the time of greatest vulnerability falls at the end of the school day; under current social conditions, the vast majority of schoolchildren either stay at school for after-class tutoring or are sent to after-school care centers. To strengthen verification that the people picking up schoolchildren are authorized, information technology should be applied to reinforce risk management. Computer vision based on deep learning can perform facial biometric recognition; in addition to saving labor and time, it can detect and raise alerts in real time around the clock, helping to prevent crime and improve campus safety.

This study consists of two parts. The first part detects people in campus restricted areas or during restricted hours, in order to prevent campus intrusion incidents: when a human body is detected, the system captures an image and notifies campus security personnel for handling; the model used is YOLOV4. The second part verifies, during class hours or at pickup time, that the adult collecting a child is an authorized person, by detecting the adult's facial features and confirming authorization through facial recognition. Recognition takes place at the school guard post, where camera images are compared against a database of authorized faces; the model used is DeepFace.

This thesis applies YOLOV4 and DeepFace to investigate ways of improving campus safety. Research and practical verification show that intrusions into restricted areas or during restricted hours can be effectively detected and identified. In the study of preventing schoolchildren from being lured away by unauthorized persons at dismissal or during class hours, the results confirm that procedural design combined with computer vision can prevent such abductions and effectively improve campus safety.
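To make the first part concrete, the following is a minimal sketch of restricted-hours person detection, assuming OpenCV's DNN module loaded with the standard Darknet YOLOv4 files. The file names, camera index, restricted-hours window, and the notify_security helper are illustrative assumptions, not the implementation described in the thesis.

    # Minimal sketch: person detection during restricted hours with YOLOv4.
    # Assumes OpenCV >= 4.4 and the standard Darknet yolov4.cfg / yolov4.weights
    # (COCO classes; class 0 is "person"). File names, camera index, and the
    # alert helper are illustrative placeholders, not the thesis's actual code.
    import datetime
    import cv2
    import numpy as np

    RESTRICTED_HOURS = range(22, 24)  # assumed off-limits window, 22:00-24:00

    net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
    model = cv2.dnn_DetectionModel(net)
    model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

    def notify_security(frame):
        """Placeholder alert: save the captured frame and log a warning."""
        stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
        cv2.imwrite(f"intrusion_{stamp}.jpg", frame)
        print(f"ALERT: person detected in restricted area at {stamp}")

    cap = cv2.VideoCapture(0)  # camera watching the restricted zone
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if datetime.datetime.now().hour in RESTRICTED_HOURS:
            class_ids, scores, boxes = model.detect(
                frame, confThreshold=0.5, nmsThreshold=0.4)
            if any(int(c) == 0 for c in np.asarray(class_ids).flatten()):
                notify_security(frame)  # capture image, alert security staff
    cap.release()

Gating on the wall clock covers the restricted-time case; for a restricted-area deployment, the camera would simply be pointed at the off-limits zone, so any person detection counts as a potential intrusion.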
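For the second part, the sketch below illustrates authorization checking at the guard post with the open-source deepface Python package, whose "DeepFace" model backend matches the model named in the abstract. The database folder, capture flow, and return-value handling are assumptions about the package's recent API, not the thesis's code.

    # Minimal sketch: checking a pickup adult against an authorized-face
    # database with the `deepface` package (pip install deepface). The folder
    # path and capture flow are illustrative assumptions, not the thesis's code.
    import cv2
    from deepface import DeepFace

    AUTHORIZED_DB = "authorized_faces/"  # assumed folder of enrolled guardian photos

    def is_authorized(frame) -> bool:
        """Return True if the captured face matches an enrolled identity."""
        cv2.imwrite("capture.jpg", frame)
        # DeepFace.find searches db_path for matching identities;
        # model_name="DeepFace" selects the DeepFace CNN named in the abstract.
        results = DeepFace.find(img_path="capture.jpg", db_path=AUTHORIZED_DB,
                                model_name="DeepFace", enforce_detection=False)
        # Recent deepface releases return one DataFrame per detected face;
        # a non-empty DataFrame means at least one database match.
        return any(len(df) > 0 for df in results)

    cap = cv2.VideoCapture(0)  # camera at the school guard post
    ok, frame = cap.read()
    cap.release()
    if ok:
        print("authorized" if is_authorized(frame)
              else "NOT authorized: alert security staff")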
Table of Contents ............................................................... vi
List of Tables .................................................................. viii
List of Figures ................................................................... ix
Chapter 1: Introduction ............................................................ 1
  Section 1: Research Background and Motivation ................................... 1
  Section 2: Research Objectives ................................................... 5
  Section 3: Research Process and Overview ......................................... 6
Chapter 2: Literature Review ....................................................... 7
  Section 1: Computer Vision ....................................................... 7
  Section 2: Background Technologies .............................................. 10
  Section 3: Face Detection and Recognition ....................................... 16
  Section 4: Development and Overview of the YOLO Models .......................... 18
  Section 5: Overview of the DeepFace Model ....................................... 30
Chapter 3: Research Methods ....................................................... 33
  Section 1: Research Framework ................................................... 33
  Section 2: Research Process ..................................................... 35
  Section 3: Research Methods ..................................................... 36
Chapter 4: Experimental Results ................................................... 41
  Section 1: Experimental Environment Setup ....................................... 41
  Section 2: Dataset Format Conversion ............................................ 46
  Section 3: YOLOV4 Evaluation Metrics ............................................ 50
  Section 4: YOLOV4 Training and Validation ....................................... 52
  Section 5: DeepFace Validation .................................................. 60
  Section 6: Alert Mechanism ...................................................... 63
  Section 7: Experiments and Result Analysis ...................................... 65
Chapter 5: Conclusions ............................................................ 68
  Section 1: Conclusions .......................................................... 68
  Section 2: Future Work .......................................................... 69
  Section 3: Research Limitations ................................................. 70
References ........................................................................ 72
I. Chinese Web Sources
1. 胡瑞玲 (2021). 北市校園電子圍籬頻壞 議員:形同虛設 [Taipei campus electronic fences frequently malfunction; councilors: effectively useless]. 聯合報 (United Daily News). https://udn.com/news/story/7323/5462506
2. 教育部 (Ministry of Education) (2021). 教育部108年各級學校校園安全事件統計分析報告 [Statistical analysis report on campus safety incidents at all school levels, academic year 108 (2019)]. https://csrc.edu.tw/
3. 移民署 (National Immigration Agency) (2012). 政府機關資訊通報,第291期 [Government agency information bulletin, No. 291]. https://www.dgbas.gov.tw/public/Data/1123014312771.pdf
4. 謦伊的閱讀筆記 (2020). YOLO 演進-2 [The evolution of YOLO, part 2]. https://medium.com/chingi/yolo%E6%BC%94%E9%80%B2-2-85ee99d114a1
5. 警政統計查詢網 (National Police Agency statistics query system). https://ba.npa.gov.tw/npa/stmain.jsp?sys=100

II. English Web Pages
1. Brandon (2016). How do Convolutional Neural Networks work? https://e2eml.school/how_convolutional_neural_networks_work.html
2. ImageNet (2012). ImageNet Large Scale Visual Recognition Challenge 2012 (ILSVRC2012). https://image-net.org/challenges/LSVRC/2012/
3. Norman, J. M. (2021). Exploring the History of Information and Media through Timelines. https://www.historyofinformation.com/story.php?t=About
4. TBWA Kuala Lumpur (2013). Lollipop Bait. https://www.youtube.com/watch?v=Bx_aj3kG_T4
5. Ndonhong, V., Bao, A., & Germain, O. Wellbore Schematics to Structured Data Using Artificial Intelligence Tools. https://www.researchgate.net/publication/332612704_Wellbore_Schematics_to_Structured_Data_Using_Artificial_Intelligence_Tools

III. English Literature
1. Bochkovskiy, A., Wang, C. Y., & Liao, H. Y. M. (2020). YOLOV4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.
2. Deng, J., Guo, J., Xue, N., & Zafeiriou, S. (2019). ArcFace: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4690-4699).
3. Everingham, M., Van Gool, L., Williams, C. K., Winn, J., & Zisserman, A. (2010). The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2), 303-338.
4. Freund, Y., & Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1), 119-139.
5. Girshick, R. (2015). Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1440-1448).
6. Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 580-587).
7. Goldstein, A. J., Harmon, L. D., & Lesk, A. B. (1971). Identification of human faces. Proceedings of the IEEE, 59(5), 748-760. doi:10.1109/PROC.1971.8254
8. Gollapudi, S. (2019). Deep learning for computer vision. In Learn Computer Vision Using OpenCV (pp. 51-69). Apress, Berkeley, CA.
9. He, K., Zhang, X., Ren, S., & Sun, J. (2015). Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9), 1904-1916.
10. Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4700-4708).
11. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 1097-1105.
12. Lin, M., Chen, Q., & Yan, S. (2013). Network in network. arXiv preprint arXiv:1312.4400.
13. Lin, T. Y., Dollár, P., Girshick, R., He, K., Hariharan, B., & Belongie, S. (2017). Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2117-2125).
14. Lin, T. Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., & Zitnick, C. L. (2014, September). Microsoft COCO: Common objects in context. In European Conference on Computer Vision (pp. 740-755). Springer, Cham.
15. Liu, S., Qi, L., Qin, H., Shi, J., & Jia, J. (2018). Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 8759-8768).
16. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016, October). SSD: Single shot multibox detector. In European Conference on Computer Vision (pp. 21-37). Springer, Cham.
17. Liu, X., Wang, X., & Ren, C. (2019, June). Research on intelligent campus monitoring management system based on deep neural network algorithm. In Journal of Physics: Conference Series (Vol. 1237, No. 2, p. 022143). IOP Publishing.
18. Phillips, P. J., Flynn, P. J., Scruggs, T., Bowyer, K. W., Chang, J., Hoffman, K., & Worek, W. (2005, June). Overview of the face recognition grand challenge. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) (Vol. 1, pp. 947-954). IEEE.
19. Redmon, J., & Farhadi, A. (2017). YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 7263-7271).
20. Redmon, J., & Farhadi, A. (2018). YOLOV3: An incremental improvement. arXiv preprint arXiv:1804.02767.
21. Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 779-788).
22. Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems, 28, 91-99.
23. Chopra, S., Hadsell, R., & LeCun, Y. (2005). Learning a similarity metric discriminatively, with application to face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2005).
24. Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
25. Schroff, F., Kalenichenko, D., & Philbin, J. (2015). FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 815-823).
26. Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
27. Sirovich, L., & Kirby, M. (1987). Low-dimensional procedure for the characterization of human faces. Journal of the Optical Society of America A, 4(3), 519-524. https://doi.org/10.1364/JOSAA.4.000519
28. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1-9).
29. Taigman, Y., Yang, M., Ranzato, M. A., & Wolf, L. (2014). DeepFace: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1701-1708).
30. Turk, M., & Pentland, A. (1991). Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1), 71-86.
31. Venkateswarlu, I. B., Kakarla, J., & Prakash, S. (2020, December). Face mask detection using MobileNet and global pooling block. In 2020 IEEE 4th Conference on Information & Communication Technology (CICT) (pp. 1-5).
32. Viola, P., & Jones, M. (2001, July). Robust real-time face detection. In Proceedings Eighth IEEE International Conference on Computer Vision (Vol. 3, pp. 747-747). IEEE Computer Society.
33. Wang, C. Y., Liao, H. Y. M., Wu, Y. H., Chen, P. Y., Hsieh, J. W., & Yeh, I. H. (2020). CSPNet: A new backbone that can enhance learning capability of CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 390-391).
34. Yang, S., Luo, P., Loy, C. C., & Tang, X. (2016). WIDER FACE: A face detection benchmark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 5525-5533).
35. Zhang, K., Zhang, Z., Li, Z., & Qiao, Y. (2016). Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters, 23(10), 1499-1503.

