National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Detailed Record
Author: SUN, HSING-YUEH (孫興岳)
Title: Development of driver identification system using YOLO and finger vein (應用YOLO及指靜脈實現駕駛者辨識系統)
Advisor: WU, JIAN-DA (吳建達)
Committee members: WU, JIAN-DA (吳建達); TSENG, WEN-KUNG (曾文功); LAI, JOU-YU (賴柔雨)
Oral defense date: 2022-07-06
Degree: Master
Institution: National Changhua University of Education (國立彰化師範大學)
Department: Institute of Vehicle Technology (車輛科技研究所)
Discipline: Engineering
Field: Mechanical Engineering
Thesis type: Academic thesis
Year of publication: 2022
Academic year of graduation: 110 (2021-2022)
Language: English
Number of pages: 55
Keywords (Chinese): 生物特徵; 指靜脈; 影像處理; YOLO物件偵測; 駕駛者辨識
Keywords (English): Biometric; Finger vein; Image processing; YOLO object detection; Driver identification
Statistics:
  • Cited: 0
  • Hits: 315
  • Rating: (none)
  • Downloads: 58
  • Bookmarks: 0
Chinese abstract (translated): Among the many methods of personal identification, biometric identification is one of the safest and most convenient. This study exploits the property that deoxygenated hemoglobin in human blood reveals an image of the finger veins under infrared illumination to build a driver identification system. A finger vein image acquisition system was first designed; the acquired images were then processed with contrast-limited adaptive histogram equalization (CLAHE) and Gabor filtering to obtain clearer images, and YOLO object detection was used to identify the driver. The identification system consists of two parts. The first part is finger vein image training, which handles image processing, database construction, and generation of the weight files used for testing. The second part is the test system: when a subject's finger is placed in the designated photographing area, a camera captures a finger vein image, which is processed together with the weight file on a Raspberry Pi to identify the driver. Experimental results show that the system identifies personnel effectively. The experiments also compared several different algorithms in terms of data volume, recognition time, and recognition rate; among them, YOLOv4-tiny-hy achieved a very good recognition rate while effectively shortening training and recognition time.
Abstract: Among the many identification methods, biometric identification is one of the safest and most convenient. In this study, the property that deoxygenated hemoglobin in human blood reveals the finger vein pattern under infrared illumination is used to establish a driver identification system. First, a finger vein image acquisition system was designed. After the finger vein images were obtained, contrast-limited adaptive histogram equalization (CLAHE) and Gabor filtering were applied to produce clearer images, and YOLO object detection was used to identify the driver. The identification system is divided into two parts. The first part is finger vein image training, which covers image processing, database construction, and generation of the weight files used for testing. The second part is the test system: when the subject's finger is placed in the designated photographing area, the camera captures a finger vein image, which is processed together with the weight file on a Raspberry Pi to identify the driver. In the experiments, several different algorithms were tried and compared in terms of data volume, recognition time, and recognition rate. Among them, YOLOv4-tiny-hy achieved a very good recognition rate while effectively shortening both training and recognition time.
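The abstract describes CLAHE as one of the preprocessing steps applied to the raw finger vein images. The thesis applies it to full images (typically via a standard image-processing library); purely as an illustration of the contrast-limiting idea, the toy Python sketch below (hypothetical, not taken from the thesis) equalizes one tile's histogram after clipping each bin at a limit and redistributing the excess — the step that distinguishes CLAHE from plain histogram equalization.

```python
# Toy sketch of the contrast-limiting step at the heart of CLAHE.
# Hypothetical standalone illustration; the thesis works on full 2-D
# finger-vein images with per-tile interpolation, not this 1-D version.

def clipped_equalize(pixels, levels=256, clip_limit=4):
    """Histogram-equalize one tile of gray values in [0, levels-1],
    clipping histogram bins at clip_limit and redistributing the
    excess counts uniformly (CLAHE's contrast limit)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Clip each bin and pool the excess counts.
    excess = 0
    for i, h in enumerate(hist):
        if h > clip_limit:
            excess += h - clip_limit
            hist[i] = clip_limit
    # Redistribute the pooled excess evenly across all bins.
    bonus = excess // levels
    hist = [h + bonus for h in hist]
    # Build the cumulative distribution and rescale it to [0, levels-1].
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    scale = (levels - 1) / cdf[-1]
    return [round(cdf[p] * scale) for p in pixels]
```

Applied to a low-contrast tile whose values cluster in a narrow band, this mapping spreads them across the full gray range, which is what makes the faint vein pattern stand out before YOLO training.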
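The Gabor filtering step mentioned alongside CLAHE enhances the line-like vein structures. As a sketch only, using the conventional parameter names (ksize, sigma, theta, lambd, gamma, psi — common library conventions, not the thesis's actual settings), a real-valued 2-D Gabor kernel can be generated as:

```python
import math

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lambd=4.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel
    g(x, y) = exp(-(x'^2 + gamma^2 * y'^2) / (2 * sigma^2))
              * cos(2 * pi * x' / lambd + psi),
    where (x', y') are the coordinates rotated by theta."""
    half = ksize // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate the sampling grid by theta.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            # Gaussian envelope times a cosine carrier.
            envelope = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                                / (2.0 * sigma * sigma))
            carrier = math.cos(2.0 * math.pi * xr / lambd + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel
```

Convolving the finger vein image with a bank of such kernels at several orientations theta emphasizes veins aligned with each orientation, which is presumably why the thesis pairs Gabor filtering with CLAHE before building the training database.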
Chinese abstract i
Abstract ii
Acknowledgements iii
CONTENTS iv
LIST OF TABLES vi
LIST OF FIGURES vii
LIST OF SYMBOLS ix
Chapter 1 Introduction 1
1-1 Introduction of the thesis 1
1-2 Literature review 3
1-3 Overview of this thesis 5
Chapter 2 The algorithms of finger vein image processing and identification system 6
2-1 Two-dimensional Gabor filter parameter explanation 6
2-2 Principle of contrast limited adaptive histogram equalization (CLAHE) 8
2-3 Principle of deep learning 10
2-4 Principle of convolutional neural network 12
2-5 YOLO object detection 16
2-6 Principle of proposed YOLOv4-tiny-hy 24
2-7 Confusion matrix 26
2-8 Finger vein recognition 28


Chapter 3 The finger vein recognition driver development environment and experimental architecture 30
3-1 Development environment 30
3-2 Experimental architecture 33
Chapter 4 Experimental work and results discussion 36
4-1 Experimental work 36
4-2 Results discussion 47
Chapter 5 Conclusions 49
References 51

Table 2-1 Illustration of confusion matrix 27
Table 3-1 The environment of training model and establishing database 30
Table 3-2 The operating environment and equipment on Raspberry Pi 4 31
Table 4-1 mAP of each iteration 40
Table 4-2 Test results of Original, Gabor filter, and CLAHE on YOLOv4-tiny-hy 41
Table 4-3 Test results of YOLOv3 with different angles 42
Table 4-4 Test results of YOLOv3-tiny with different angles 42
Table 4-5 Test results of YOLOv3-tiny-3L with different angles 43
Table 4-6 Test results of YOLOv4 with different angles 43
Table 4-7 Test results of YOLOv4-tiny with different angles 43
Table 4-8 Test results of YOLOv4-tiny-3L with different angles 44
Table 4-9 Test results of YOLOv4-tiny-hy with different angles 44
Table 4-10 Accuracy of different sample sizes on different YOLO models 48
Table 4-11 Test results of each model 48
Table 4-12 Comparison of YOLOv3, YOLOv4 and YOLOv4-tiny-hy 50

Figure 2-1 Schematic diagram of CLAHE 9
Figure 2-2 Principle of artificial neural network 11
Figure 2-3 Principle of Convolutional Neural Network 13
Figure 2-4 Principle of convolutional layer 14
Figure 2-5 Principle of pooling layer 15
Figure 2-6 YOLO detection system 18
Figure 2-7 Network architecture diagram of YOLOv1 18
Figure 2-8 Network architecture diagram of YOLOv2 19
Figure 2-9 Network architecture diagram of YOLOv3 19
Figure 2-10 Network architecture diagram of YOLOv4 20
Figure 2-11 Principle of Residual Network 20
Figure 2-12 Principle of Feature Pyramid Network 21
Figure 2-13 Bounding box with size prior and position prediction 22
Figure 2-14 Schematic diagram of IoU calculation 23
Figure 2-15 Acquiring finger vein images by light reflection 29
Figure 2-16 Acquiring finger vein images by direct light 29
Figure 3-1 Raspberry Pi Noir Camera V2 31
Figure 3-2 Near-infrared LED 33
Figure 3-3 Schematic diagram of the architecture for photographing finger veins 33
Figure 3-4 System architecture of training part 35
Figure 3-5 System architecture of test part 35
Figure 4-1 Finger vein images of six drivers 37
Figure 4-2 Frame the finger veins with LabelImg 37
Figure 4-3 Test results of six drivers 38
Figure 4-4 Transformation process of finger vein image in test 39
Figure 4-5 Image of finger veins in three different styles 41
Figure 4-6 Loss function curve of seven different YOLO models 46


[1] N. A. Mashudi, M. J. Nordin, "A review on iris recognition in non-cooperative environment.", Proceedings of the 2018 International Conference on Information Science and System, pp. 127-132, 2018. DOI: 10.1145/3209914.3209925

[2] J. R. Lucio-Gutierrez, J. Coello, S. Maspoch, "Application of near infrared spectral fingerprinting and pattern recognition techniques for fast identification of Eleutherococcus senticosus.", Food Research International, Vol. 44, No. 2, pp. 557-565, 2011. DOI: 10.1016/j.foodres.2010.11.037

[3] M. Laadjel, A. Bouridane, F. Kurugollu, S. Boussakta, "Palmprint recognition using Fisher-Gabor feature extraction.", 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1709-1712, 2008. DOI: 10.1109/ICASSP.2008.4517958

[4] T. Sudha, G. Jayalaitha, "Analysis of fuzzy logic and fractals in DNA sequences based on human signature.", Materials Today: Proceedings, 2021. DOI: 10.1016/j.matpr.2020.12.427

[5] M. Ravanelli, M. Omologo, "Automatic context window composition for distant speech recognition.", Speech Communication, Vol. 101, pp. 34-44, 2018. DOI: 10.1016/j.specom.2018.05.001

[6] Y. Wang, Y. Y. Tang, L. Li, "Correntropy matching pursuit with application to robust digit and face recognition.", IEEE Transactions on Cybernetics, Vol. 47, No. 6, pp. 1354-1366, 2016. DOI: 10.1109/TCYB.2016.2544852

[7] J. F. Yang, Y. Shi, J. Yang, "Personal identification based on finger-vein features.", Computers in Human Behavior, Vol. 27, No. 5, pp. 1565-1570, 2011. DOI: 10.1016/j.chb.2010.10.029

[8] M. Kono, H. Ueki, S. Umemura, "Near-infrared finger vein patterns for personal identification.", Applied Optics, Vol. 41, No. 35, pp. 7429-7436, 2002. DOI: 10.1364/AO.41.007429

[9] W. Y. Han, J. C. Lee, "Palm vein recognition using adaptive Gabor filter.", Expert Systems with Applications, Vol. 39, No. 18, pp. 13225-13234, 2012. DOI: 10.1016/j.eswa.2012.05.079

[10] P. Gupta, S. Srivastava, P. Gupta, "An accurate infrared hand geometry and vein pattern based authentication system.", Knowledge-Based Systems, Vol. 103, pp. 143-155, 2016. DOI: 10.1016/j.knosys.2016.04.008

[11] R. Das, E. Piciucco, E. Maiorana, P. Campisi, "Convolutional Neural Network for Finger-Vein-Based Biometric Identification.", IEEE Transactions on Information Forensics and Security, Vol. 14, No. 2, pp. 360-373, 2018. DOI: 10.1109/TIFS.2018.2850320

[12] I. Boucherit, M. O. Zmirli, H. Hentabli, B. A. Rosdi, "Finger vein identification using deeply-fused Convolutional Neural Network.", Journal of King Saud University - Computer and Information Sciences, 2020. DOI: 10.1016/j.jksuci.2020.04.002

[13] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, "Gradient-based learning applied to document recognition.", Proceedings of the IEEE, Vol. 86, No. 11, pp. 2278-2324, 1998. DOI: 10.1109/5.726791

[14] R. Girshick, J. Donahue, T. Darrell, J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation.", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580-587, 2014. DOI: 10.1109/CVPR.2014.81

[15] K. He, G. Gkioxari, P. Dollár, R. Girshick, "Mask R-CNN.", Proceedings of the IEEE International Conference on Computer Vision, pp. 2961-2969, 2017. arXiv:1703.06870

[16] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Y. Fu, A. C. Berg, "SSD: Single shot multibox detector.", European Conference on Computer Vision, Springer, Cham, pp. 21-37, 2016. arXiv:1512.02325

[17] C. Y. Fu, W. Liu, A. Ranga, A. Tyagi, A. C. Berg, "DSSD: Deconvolutional single shot detector.", arXiv preprint arXiv:1701.06659, 2017.

[18] J. Redmon, S. Divvala, R. Girshick, A. Farhadi, "You only look once: Unified, real-time object detection.", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779-788, 2016. DOI: 10.1109/CVPR.2016.91

[19] J. Redmon, A. Farhadi, "YOLO9000: Better, faster, stronger.", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7263-7271, 2017. DOI: 10.1109/CVPR.2017.690

[20] J. Redmon, A. Farhadi, "YOLOv3: An incremental improvement.", arXiv preprint arXiv:1804.02767, 2018.

[21] X. Chen, X. Zhang, Y. Yang, P. Sun, "Research for adaptive audio information hiding approach based on DWT.", The 2008 Chinese Control and Decision Conference, pp. 3029-3033, 2008. DOI: 10.1109/CCDC.2008.4597882

[22] R. Haripriya, L. R. Mathew, K. Gopakumar, "Performance evaluation of DWT based speech enhancement.", The 2017 International Conference on Networks & Advances in Computational Technologies (NetACT), pp. 442-446, 2017. DOI: 10.1109/NETACT.2017.8076812

[23] X. Wang, Z. Han, J. Wang, Y. Ma, "Speech recognition based on wavelet packet transform and KL expansion.", The 2008 Chinese Control and Decision Conference, pp. 2490-2493, 2008. DOI: 10.1109/CCDC.2008.4597773

[24] A. Bochkovskiy, C. Y. Wang, H. Y. M. Liao, "YOLOv4: Optimal Speed and Accuracy of Object Detection.", arXiv preprint arXiv:2004.10934, 2020.

