臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: 殷其佑
Author (English): YIN, CHI-YU
Thesis title: 基於深度學習之人臉識別技術的熱特徵驗證研究
Thesis title (English): Deep Learning Based Thermal Feature Validation Research of Facial Recognition Technique
Advisor: 鐘國家
Advisor (English): JONG, GWO-JIA
Oral defense committee: 施松村、王在德、彭鵬亮、鐘國家
Oral defense committee (English): SHIH, SUNG-TSUN; WANG, TZAI-DER; PENG, PENG-LIANG; JONG, GWO-JIA
Oral defense date: 2020-06-24
Degree: Master's
Institution: 國立高雄科技大學 (National Kaohsiung University of Science and Technology)
Department: 電子工程系 (Department of Electronic Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2020
Graduation academic year: 108 (2019/20)
Language: English
Number of pages: 71
Chinese keywords: 人臉辨識、生物識別、熱影像、深度學習、GoogLeNet
Keywords (English): Facial Recognition, Biometrics, Thermal Imaging, Deep Learning, GoogLeNet
Usage statistics:
  • Cited by: 1
  • Views: 164
  • Rating: (none)
  • Downloads: 0
  • Added to personal bibliography lists: 0
Facial recognition is a mature, non-contact biometric technique. Many high-performance facial recognition methods were developed early on from RGB images; because these methods operate on images alone, they cannot distinguish a live face from a photograph. With the development of thermal imaging technology, many studies have also adopted thermal images as an input for facial recognition, and combining thermal images with RGB images can effectively improve recognition performance. However, a thermal image is formed from the temperature features of the face, so thermal images captured at different times exhibit different features. This is a challenge for long-term facial recognition. This thesis therefore proposes a deep learning based validation of facial thermal images to investigate the influence of capture time on thermal images.
In the proposed method, thermal feature images are used in addition to the raw thermal images to enhance the feature effect. In the experimental design, participants were photographed with a thermal camera at different times of day. All input images were fed to a GoogLeNet model, and the prediction task was to classify the facial thermal images (and thermal feature images) by capture time period. The experimental results show that the model's prediction accuracy is only about 30%, meaning thermal images from different time periods cannot be effectively distinguished. Although the absolute temperature in a thermal image is affected by the capture time, the relative features do not change; therefore, thermal images captured during any time period can be used for thermal facial recognition at all times.
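The classification experiment summarized above can be pictured with a short sketch. The Python code below is not the thesis's implementation; it is a minimal illustration, under stated assumptions, of fine-tuning a pretrained GoogLeNet (torchvision 0.13 or later) to predict the capture time slot of facial thermal images. The directory layout thermal_faces/train/<slot>/*.png, the value of NUM_TIME_SLOTS, and all hyperparameters are hypothetical. Note that with three balanced time-slot classes, random guessing already yields about 33%, which is the natural baseline against which an accuracy of roughly 30% would be read.

# Minimal sketch (assumptions, not the thesis code): fine-tune GoogLeNet to
# classify facial thermal images by capture time slot.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_TIME_SLOTS = 3  # assumed number of capture sessions (e.g., morning/afternoon/evening)

# Thermal (or thermal-feature) images exported as ordinary image files,
# arranged one sub-folder per time slot: thermal_faces/train/<slot>/*.png (hypothetical path).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                    # GoogLeNet input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics expected by
                         std=[0.229, 0.224, 0.225]),  # the pretrained weights
])
train_set = datasets.ImageFolder("thermal_faces/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Load ImageNet-pretrained GoogLeNet and replace its classifier head
# with a small head that predicts the time slot.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_TIME_SLOTS)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(10):
    correct, total = 0, 0
    for images, labels in train_loader:
        optimizer.zero_grad()
        logits = model(images)  # aux heads are disabled when pretrained weights are loaded
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    print(f"epoch {epoch}: training accuracy {correct / total:.3f}")

Evaluating a held-out split the same way (with model.eval() and torch.no_grad()) would yield the kind of per-time-slot test accuracy discussed in the abstract.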
中文摘要 (Chinese Abstract) I
Abstract II
Contents III
List of Figures V
List of Tables VI
List of Abbreviations VII
Chapter 1 Introduction 1
1.1 Background and Motivation 1
1.2 Aim and Objective 5
1.3 Thesis Organization 6
Chapter 2 Literature Review 7
2.1 Thermal Imaging and Imaging Principles 7
2.2 Object Detection Based on Thermal Image 10
2.3 Facial Recognition Based on Numerical Method 13
2.4 Facial Recognition Based on Learning Architecture 17
2.5 Human Facial Features 18
2.6 Evolution of Convolutional Neural Networks 20
Chapter 3 Methodology 23
3.1 Proposed System Architecture 23
3.2 Acquisition of Thermal Image 25
3.3 Thermal Image Pre-processing 27
3.4 Feature Extraction of Thermal Image 29
3.5 Image Classification Based on Machine Learning 31
3.6 Research Procedures 34
Chapter 4 Results and Discussions 35
4.1 Image Pre-Processing and Feature Extraction 35
4.2 Deployment of Image Classification Model 37
4.2.1 Model Deployment for Thermal Image 37
4.2.2 Model Deployment for Feature Image 39
4.3 Testing of Image Classification Model 41
4.3.1 Testing Results in Thermal Image 41
4.3.2 Testing Results in Feature Image 45
4.4 Discussion 49
Chapter 5 Conclusions and Future Works 50
5.1 Conclusions 50
5.2 Future Works 51
References 52
List of Publication 58
Acknowledgments 59
Biography 60
Electronic full text (Internet release date: 2025-07-24)
No related journal articles.