Author: Chi-En Tsai (蔡奇恩)
Title: 即時的人臉偵測與人臉辨識之門禁系統
Title (English): Real-time face detection and recognition for access control system application
Advisor: Tsung-Han Tsai (蔡宗漢)
Degree: Master's
Institution: National Central University
Department: Department of Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year published: 2018
Graduation academic year: 106 (ROC calendar, 2017–2018)
Language: Chinese
Pages: 61
Keywords: deep learning; face detection; face recognition
Usage statistics:
  • Times cited: 2
  • Views: 723
  • Downloads: 0
  • Bookmarked: 1
With the push toward smart cities and smart homes, people place growing emphasis on quality of life and look to technology to change the way we live. In recent years, with advances in GPUs and the arrival of the big-data era, deep learning has brought revolutionary progress to many fields, and computer vision in particular is now led by it. Technology comes from human nature, and it makes our lives more convenient. Access control systems of many forms surround us in daily life, from keys and access cards to biometric identification. Biometric identification relies on characteristics unique to each person, so no key of any kind needs to be carried. Because the face requires neither contact nor any extra action, face recognition is the most convenient of all biometric methods, yet it is also the most complex.
This thesis proposes a deep-network-based access control system that performs face detection and face recognition on users dynamically, without requiring them to stop for identification. Face detection uses the convolutional neural network SSD (Single Shot MultiBox Detector) as its main method, and face recognition uses VGGFace. Through image collection, data augmentation, image pre- and post-processing, and experimental design, we train more robust face detection and face recognition subsystems. Combining the two subsystems, the system judges from consecutive frames whether the person is a laboratory member; relying on consecutive frames avoids the serious consequences a misjudgment on a single frame could cause. Experiments on 1280x960 color video reach about 30 fps under GPU acceleration.
Through the promotion of smart cities and smart homes, people are paying more attention to the quality of life and look forward to changing their way of life through technology. The age of big data and improvements in GPU acceleration have brought deep learning revolutionary progress in various fields, especially computer vision. Technology derives from humanity, and technology makes our lives more convenient. Various forms of access control systems surround us, such as keys, access cards, and biometric identification. Biometric identification distinguishes people by the characteristics unique to each person, so users no longer need to carry any form of key. Face identification is the most convenient biometric method, because it requires no contact and no extra movement; however, it is also the most complex.
We propose an access control system that performs face detection and face recognition dynamically, so users do not need to stop for recognition. We use SSD (Single Shot MultiBox Detector) as the main model for face detection and VGGFace as the main model for face recognition. Through dataset collection, data augmentation, image pre-processing and post-processing, and experimental design, we train robust face detection and face recognition subsystems. The system takes continuous frames as input to determine whether the person is a laboratory member; using continuous frames prevents a single false positive from making the system output a wrong result. We use 1280x960 color video for experimental testing and achieve a speed of about 30 fps under GPU acceleration.
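The decision rule described above, accepting an identity only when it persists across consecutive frames rather than trusting any single frame, can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's implementation: `detect_faces` and `embed_face` are hypothetical stand-ins for the SSD detector and VGGFace embedder, and the similarity threshold and vote counts are assumed values.

```python
from collections import Counter, deque

import numpy as np

def detect_faces(frame):
    """Stand-in for the SSD face detector: returns a list of face crops."""
    raise NotImplementedError  # assumption, not the thesis's code

def embed_face(crop):
    """Stand-in for the VGGFace embedder: returns a 1-D feature vector."""
    raise NotImplementedError  # assumption, not the thesis's code

def identify(embedding, gallery, threshold=0.6):
    """Match an embedding against enrolled members by cosine similarity;
    return the best-matching name, or None if below the threshold."""
    best_name, best_sim = None, -1.0
    for name, ref in gallery.items():
        sim = float(np.dot(embedding, ref) /
                    (np.linalg.norm(embedding) * np.linalg.norm(ref)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None

class MultiFrameVoter:
    """Accept an identity only if it appears in at least `min_votes` of the
    last `n` frames, so one misdetected frame cannot grant access."""
    def __init__(self, n=15, min_votes=10):
        self.history = deque(maxlen=n)
        self.min_votes = min_votes

    def update(self, identity):
        self.history.append(identity)  # identity may be None (no match)
        counts = Counter(i for i in self.history if i is not None)
        if counts:
            name, votes = counts.most_common(1)[0]
            if votes >= self.min_votes:
                return name  # stable identity: unlock
        return None  # not yet confident
```

Per frame, the system would call the detector, embed each face crop, run `identify`, and pass the result to `MultiFrameVoter.update`; only a sustained majority of matching frames produces an unlock decision.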

Chapter 1 Introduction 1
1-1 Motivation 1
1-2 Related Work of Face Detection 2
1-3 Related Work of Face Recognition 4
1-4 Method 6
1-5 Thesis Organization 8
Chapter 2 Convolutional Neural Network: SSD 9
2-1 Introduction 9
2-2 Convolutional Neural Network 10
2-2-1 Local Receptive Fields 12
2-2-2 Shared Weights 13
2-2-3 Pooling Layer 14
2-2-4 Fully Connected Layer 15
2-2-5 Activation Function 16
2-3 SSD 18
2-3-1 Model 18
2-3-2 Training 19
2-3-3 Prediction 23
Chapter 3 Access Control System 24
3-1 Face Detection 24
3-1-1 Data Gathering and Pre-processing 24
3-1-2 Training Method and Parameters 26
3-1-3 Post-processing 28
3-1-4 Experimental Result 29
3-2 Face Recognition 33
3-2-1 VGG Face Model and Dataset 34
3-2-2 Data Gathering and Pre-processing 35
3-2-3 Training Method and Parameters 38
3-2-4 Experimental Result 39
3-3 System Architecture and Result 42
Chapter 4 Conclusion 45
Chapter 5 References 46