National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 高仲義
Author (English): Gao, Jhong-Yi
Thesis Title (Chinese): 以正規轉換為基礎之日夜人物辨識
Thesis Title (English): Canonical Transform Based Day-and-Night Person Identification
Advisor: 張志永
Advisor (English): Chang, Jyh-Yeong
Degree: Master's
Institution: National Chiao Tung University
Department: Institute of Electrical and Control Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2013
Graduation Academic Year: 101
Language: English
Number of Pages: 64
Keywords (Chinese): 人物辨識
Keywords (English): Person Identification
Person identification is a popular research and application topic in computer vision. In surveillance systems, the most common approach is to use fixed cameras to identify the people appearing in the monitored scene.
This thesis implements a surveillance system that performs person identification in both daytime and nighttime environments, using a multi-angle gait recognition subsystem and a face recognition subsystem. Two near-infrared (NIR) cameras are used for identification: one is placed at a distance to capture gait images from different directions, and the other is placed nearby to capture frontal face images.
In the face recognition subsystem, face images are captured with an NIR camera. Faces are extracted with a Haar cascade classifier, a feature-based algorithm that runs much faster than pixel-based methods. Each face image is then projected through an eigenspace transformation and a canonical space transformation; after five such face images have been accumulated, a majority vote determines the identity.
In the gait recognition subsystem, gait images are captured with an NIR camera. To extract the complete human silhouette, background subtraction is performed with background models built in both grayscale and HSV color space, and shadow suppression is applied so that the extracted foreground is more complete. Each gait image is likewise projected through an eigenspace transformation and a canonical space transformation; after five such gait images have been accumulated, a majority vote determines the identity.
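
As a concrete illustration of the foreground-extraction step described above, the following is a minimal OpenCV/NumPy sketch of background subtraction with a grayscale model combined with an HSV-based shadow test; the thresholds and the simple running-average background update are illustrative assumptions, not the exact models used in the thesis.

import cv2
import numpy as np

def foreground_mask(frame_bgr, bg_bgr, diff_thresh=30,
                    shadow_v_low=0.5, shadow_v_high=0.95,
                    sat_thresh=40, hue_thresh=10):
    """Rough foreground extraction: grayscale difference plus HSV shadow suppression."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.int16)
    bg_gray = cv2.cvtColor(bg_bgr, cv2.COLOR_BGR2GRAY).astype(np.int16)
    # Candidate foreground: large grayscale difference from the background model
    candidate = np.abs(gray - bg_gray) > diff_thresh

    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    bg_hsv = cv2.cvtColor(bg_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    # A pixel is treated as shadow if it is darker than the background
    # (V ratio within a band) but keeps roughly the same hue and saturation.
    v_ratio = hsv[..., 2] / (bg_hsv[..., 2] + 1e-6)
    shadow = ((v_ratio > shadow_v_low) & (v_ratio < shadow_v_high) &
              (np.abs(hsv[..., 1] - bg_hsv[..., 1]) < sat_thresh) &
              (np.abs(hsv[..., 0] - bg_hsv[..., 0]) < hue_thresh))

    mask = (candidate & ~shadow).astype(np.uint8) * 255
    # Remove speckle noise before silhouette extraction
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

def update_background(bg_bgr, frame_bgr, alpha=0.02):
    """Slow running-average update so gradual illumination changes are absorbed."""
    return cv2.addWeighted(frame_bgr, alpha, bg_bgr, 1.0 - alpha, 0)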

Human recognition is a very popular subject of research and application. Using a camera to recognize people is common in surveillance systems.
In this thesis, we implement a surveillance system that can recognize a person by multi-angle gait and by face in both bright and dark environments. We use two near-infrared (NIR) cameras for human recognition: one NIR camera, set at a remote location, captures gait images from different angles, and the other NIR camera, set in the vicinity, captures face images from the person's frontal view.
In the human face recognition system, the face region of an image is extracted with a Haar cascade classifier, which is a feature-based algorithm and works much faster than pixel-based algorithms. The face image is then transformed to a new space by the eigenspace and canonical space transformations for better efficiency and separability, and recognition is finally done in the canonical space. Moreover, we gather five consecutive face images from the video and use a majority vote to recognize the person.
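
A minimal sketch of the face-extraction and voting steps using OpenCV's Haar cascade detector is given below; the cascade file name, the 32x32 patch size, and the classify_face() placeholder are assumptions for illustration, since the thesis only states that a Haar cascade extracts the face and that five consecutive decisions are combined by majority vote.

import cv2
from collections import Counter

# Frontal-face Haar cascade shipped with OpenCV (the file name is an assumption;
# any trained frontal-face cascade would do here).
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face(frame_bgr):
    """Return the largest detected face region as a grayscale patch, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return cv2.resize(gray[y:y + h, x:x + w], (32, 32))

def majority_vote(labels):
    """Combine per-frame decisions (e.g. five consecutive frames) into one identity."""
    return Counter(labels).most_common(1)[0][0]

# Usage sketch, with classify_face() standing in for the canonical-space classifier:
# votes = []
# for f in five_frames:                      # five consecutive NIR frames
#     face = extract_face(f)
#     if face is not None:
#         votes.append(classify_face(face))  # classifier not shown here
# identity = majority_vote(votes)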
In the human gait recognition system, we build two background models, one in grayscale and one in HSV color space, to extract the foreground image correctly and to reduce the shadowing effect. The gait image is then transformed to a new space by the eigenspace and canonical space transformations for better efficiency and separability, and recognition is done in the canonical space. Finally, we gather five consecutive gait images from the video and use a majority vote to recognize the person.
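
The eigenspace and canonical space transformations used in both subsystems can be read as PCA followed by a Fisher-style discriminant projection; the NumPy sketch below illustrates that projection chain under this assumption, with the data layout, dimensionalities, and nearest-centroid classification note being illustrative rather than the thesis's exact formulation.

import numpy as np

def eigenspace_transform(X, k):
    """PCA: project zero-mean training images (rows of X) onto the top-k eigenvectors."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data gives the eigenvectors of the covariance matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W_pca = Vt[:k].T                       # shape (d, k)
    return mean, W_pca

def canonical_space_transform(Y, labels):
    """Fisher-style projection: maximise between-class over within-class scatter."""
    classes = np.unique(labels)
    overall_mean = Y.mean(axis=0)
    d = Y.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Yc = Y[labels == c]
        mc = Yc.mean(axis=0)
        Sw += (Yc - mc).T @ (Yc - mc)
        diff = (mc - overall_mean).reshape(-1, 1)
        Sb += len(Yc) * (diff @ diff.T)
    # Generalised eigenproblem Sb w = lambda Sw w, solved via pinv(Sw) @ Sb
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1][:len(classes) - 1]
    return eigvecs[:, order].real          # shape (k, c-1)

# Training sketch: X holds flattened silhouettes or face patches, one per row.
# mean, W_pca = eigenspace_transform(X, k=50)
# W_cst = canonical_space_transform((X - mean) @ W_pca, labels)
# A test image x is classified by the nearest class centroid of ((x - mean) @ W_pca) @ W_cst.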

摘要 i
ABSTRACT ii
ACKNOWLEDGEMENTS iv
Contents v
List of Figures vii
List of Tables ix
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Video Frame Preprocessing for Human Recognition 3
1.3 Video Frame Human Recognition Procedure 4
1.4 Thesis Outline 5
Chapter 2 Video Frame Preprocessing for Human Recognition 6
2.1 The HSV color space 6
2.2 Background Model Construction and Foreground Extraction 9
2.2.1 Background Model Construction 9
A. Grayscale Value Background Model 10
B. HSV Color Space Background Model 11
2.2.2 Background Update 12
2.2.3 Foreground Extraction 13
A. Foreground Detection 14
B. Shadow Suppression 16
C. Foreground Object Segmentation 18
D. Foreground Image Compensation 20
2.3 Face Extraction 21
Chapter 3 Video Frame Human Recognition Procedure 24
3.1 Human Representation 24
3.1.1 Eigenspace Transformation (EST) 27
3.1.2 Canonical Space Transformation (CST) 29
3.2 Human Recognition 31
3.2.1 Person Recognition by Gait Image Classification in a Long Distance Setting 31
3.2.2 Person Recognition by Face Image Classification in a Short Distance Setting 32
3.2.3 Majority Vote 33
Chapter 4 Experimental Results 34
4.1 Background Model Construction and Foreground Extraction 38
4.2 Experiments on our LAB Multi-Angle Gait Database 42
4.2.1 Single-Angle Human Gait Recognition 42
4.2.2 Multi-Angle Human Gait Recognition 46
4.3 Recognition Result on the CASIA Multi-View Gait Database 49
4.3.1 Single-View Human Gait Recognition 49
4.3.2 Multi-View Human Gait Recognition 53
4.4 Experiments on our LAB Face Database 55
4.4.1 Human Face Recognition 55
Chapter 5 Conclusion 62

References 63

