National Digital Library of Theses and Dissertations in Taiwan
Detailed Record
Author: 吳泰樺
Author (English): Tai-Hua Wu
Title: 利用全身資訊進行性別辨識
Title (English): Using Body Information for Gender Recognition
Advisor: 張傳育
Degree: Master's
Institution: National Yunlin University of Science and Technology
Department: Graduate Institute of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Document type: Academic thesis
Year of publication: 2010
Academic year of graduation: 98 (2009–2010)
Language: Chinese
Number of pages: 83
Keywords (Chinese): 性別辨識 (gender recognition); 嵌入式隱藏式馬可夫 (embedded hidden Markov)
Keywords (English): embedded hidden Markov model; gait
Usage statistics:
  • Cited by: 0
  • Views: 189
  • Downloads: 0
  • Bookmarked: 1
Abstract:
In recent years, gender recognition has been a popular research topic. If a computer can recognize a person's gender, applications in both human-machine interfaces and security can be improved substantially.
This thesis uses an embedded hidden Markov model (EHMM) for gender recognition. In the training stage, consecutive preprocessed whole-body images are reassembled into composite images. For each of 11 viewing angles, two EHMMs (one per gender) are trained on these composite images, giving 22 EHMMs in total. In the testing stage, an unknown composite image is compared against all 22 trained EHMMs by likelihood, and the gender of the model with the highest likelihood is taken as the recognition result.
The experiments use the CASIA Gait Database (Dataset B) and the Taiwanese Gait database (TWG). The results show that the proposed method achieves better recognition performance.
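The composite-image step described above can be sketched in Python as follows. This is a minimal illustrative sketch, not the thesis's implementation: the frame size and the side-by-side tiling are assumptions, since the actual layout is defined in Section 3.3 of the thesis, which is not reproduced on this page.

# Minimal sketch (assumed layout): assemble ten consecutive preprocessed
# gait silhouettes into a single composite image.
import numpy as np

FRAMES_PER_COMPOSITE = 10        # "ten consecutive gait frames" (abstract)
FRAME_SHAPE = (64, 32)           # assumed (height, width) after proportional scaling

def make_composite(silhouettes):
    """silhouettes: list of ten binary whole-body images of shape FRAME_SHAPE."""
    if len(silhouettes) != FRAMES_PER_COMPOSITE:
        raise ValueError("a composite image needs exactly ten frames")
    frames = [np.asarray(s, dtype=np.uint8) for s in silhouettes]
    # Assumption: frames are tiled side by side; the thesis may arrange
    # them differently.
    return np.hstack(frames)

A composite built this way would then be the image from which observation vectors (DCT coefficients of image blocks, per Section 3.4.1 of the table of contents) are extracted for EHMM training.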
Abstract (English):
Gender recognition has been an active research topic in recent years. Human-machine interfaces and video surveillance can be greatly improved if gender can be recognized automatically.
In this study, an embedded hidden Markov model (EHMM) is used for gender recognition. Video recorded from different viewing angles is used to capture the gait characteristics of each gender. Ten consecutive gait frames are segmented and organized into a composite image, which is used to build the EHMMs. For each viewing angle, two EHMMs (one per gender) are built and trained. The gender of a test composite image is decided by the trained EHMM that yields the highest likelihood for it.
We test the proposed approach on the CASIA Gait Database (Dataset B) and the Taiwanese Gait database (TWG). Experimental results show that the proposed system identifies human gender accurately.
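The decision rule stated in the abstract (11 viewing angles, two EHMMs per angle, highest likelihood wins) can be sketched as follows. This is a hedged sketch rather than the thesis's code: the EHMM scoring function is abstracted as a callable, and the 11 views are assumed to span 0 to 180 degrees in 18-degree steps, as in CASIA Dataset B.

# Minimal sketch of the maximum-likelihood gender decision over 22 EHMMs
# (11 viewing angles x 2 genders). Training the models is outside this sketch;
# each model is represented here by a log-likelihood callable.
ANGLES = tuple(range(0, 181, 18))   # assumed 11 views: 0, 18, ..., 180 degrees
GENDERS = ("male", "female")

def classify_gender(composite_image, models):
    """models: dict mapping (angle, gender) -> function returning a log-likelihood."""
    best_gender, best_score = None, float("-inf")
    for angle in ANGLES:
        for gender in GENDERS:
            score = models[(angle, gender)](composite_image)
            if score > best_score:
                best_gender, best_score = gender, score
    return best_gender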
Table of Contents:
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Motivation
1.2 Related Work
1.3 Research Method
1.4 Thesis Organization
Chapter 2 Related Theory
2.1 Color Space
2.1.1 RGB Color Space
2.1.2 YCbCr Color Space
2.1.3 YIQ Color Space
2.1.4 HSI Color Space
2.2 Background Subtraction Techniques
2.3 Mask-Based Labeling
2.4 Embedded Hidden Markov Model (EHMM)
2.4.1 Hidden Markov Model
2.4.2 Embedded Hidden Markov Model
2.5 Discrete Cosine Transform (DCT)
Chapter 3 Method and Implementation
3.1 System Flow
3.2 Preprocessing
3.2.1 YCbCr Color Space Conversion
3.2.2 Background Subtraction
3.2.3 Mask-Based Labeling
3.2.4 Proportional Scaling
3.3 Composite Image
3.4 EHMM
3.4.1 Observation Vector Extraction
3.4.2 EHMM Training
3.4.3 Gender Recognition with the EHMM
Chapter 4 Experimental Results and Discussion
4.1 Experimental Environment and Design
4.2 Parameter Experiments
4.3 Comparison of Observation Vector Extraction Methods
4.4 Effect of the Number of Frames on Recognition Rate
4.5 Effect of Whole-Body Image Size on Recognition Rate
4.6 Recognition Rates for Each Viewing Angle and Walking Condition
4.7 Comparison with Other Methods
Chapter 5 Conclusion
References