National Digital Library of Theses and Dissertations in Taiwan

Author: 林應璞 (Ying-pu Lin)
Title: 訓練樣本數對二維與立體人臉辨識之影響探討
Title (English): Investigation of the Effect of Training Sample Size on Performance of 2D and 2.5D Face Recognition
Advisor: 李建興 (Chien-hsing Lee)
Degree: Master's
Institution: National Cheng Kung University
Department: Master's and Doctoral Program, Department of Systems and Naval Mechatronic Engineering
Discipline: Engineering
Field of study: Mechanical Engineering
Document type: Academic thesis
Year of publication: 2009
Graduation academic year: 97 (2008-2009)
Language: Chinese
Number of pages: 79
Keywords (Chinese): 歐式距離、改良式主成份分析法、最接近特徵線、Haar小波轉換、主成份分析法、光學立體法、訓練樣本數、線性鑑別式分析
Keywords (English): Photometric Stereo Method, Euclidean Distance, Linear Discriminant Analysis, Nearest Feature Line, Training Sample Size, Haar Wavelet Transform, Principal Component Analysis, Improved Principal Component Analysis
This thesis investigates the effect of training sample size on 2D and 2.5D face recognition. Each recognition scheme is a combination of a feature extraction method (Haar wavelet transform, principal component analysis, or improved principal component analysis) and a decision rule (Euclidean distance, nearest feature line, or linear discriminant analysis). We look for a suitable recognition scheme and use it to infer the relationship between training sample size and recognition rate.
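To make the pairing of feature extraction and decision rule concrete, the following minimal sketch (in Python/NumPy rather than the MATLAB used in the thesis) combines standard PCA features with a Euclidean-distance nearest-neighbor rule; the function names, dimensions, and stand-in data are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def pca_fit(train, n_components):
    """Learn a PCA (eigenface-style) projection from row-vectorized face images."""
    mean = train.mean(axis=0)
    centered = train - mean
    # Principal directions via SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                    # shape: (n_components, n_pixels)
    return mean, basis

def pca_project(images, mean, basis):
    """Map vectorized images to low-dimensional PCA feature vectors."""
    return (images - mean) @ basis.T

def nearest_euclidean(query_feat, train_feats, train_labels):
    """Decision rule: assign the label of the closest training feature vector."""
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    return train_labels[np.argmin(dists)]

# Example with random stand-in data (real inputs would be vectorized face images).
rng = np.random.default_rng(0)
faces = rng.random((40, 64 * 64))                # 40 training faces, 64x64 pixels each
labels = np.repeat(np.arange(8), 5)              # 8 subjects, 5 samples per subject
mean, basis = pca_fit(faces, n_components=20)
train_feats = pca_project(faces, mean, basis)
query = rng.random(64 * 64)
print(nearest_euclidean(pca_project(query[None, :], mean, basis)[0], train_feats, labels))
```

The improved PCA variant and the other decision rules studied in the thesis would slot in by swapping `pca_fit` and `nearest_euclidean` for the corresponding feature extractor and classifier.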
In 2D face recognition, a whole facial image is first captured with a CCD camera, and image pre-processing is applied to extract the face region. In 2.5D face recognition, the photometric stereo method is used to construct a 2.5D face model, providing both depth values and pixel values; because the images are captured in a darkroom, the pixel values are largely unaffected by ambient light, so depth values combined with pixel values serve as the feature vectors for recognition. For the 2D simulations, the ORL (Olivetti Research Lab), GIT (Georgia Institute of Technology), CIT (California Institute of Technology), ESSEX (University of Essex), UMIST (University of Manchester Institute of Science and Technology), and the author's own face databases are used to study the effect of training sample size, and improved principal component analysis combined with Euclidean distance gives the best results. With 13 to 17 training samples the recognition rate already exceeds 85%; raising the sample size to 18 or more improves the rate only slightly while lengthening the recognition time. On the large ESSEX database the recognition rate stays above 92% for 13 to 19 training samples without large fluctuations, so recognition is also relatively stable on large databases, and increasing the sample size to 25 does not raise the rate appreciably. Increasing the number of training samples therefore does not greatly improve the recognition rate. The 2.5D simulations, based on a face database built by the author, likewise use improved principal component analysis with Euclidean distance. With 14 to 17 training samples the recognition rate reaches 84% or more and the rate curve flattens within this range; at 17 training samples the rate reaches 93.93%, and adding further training samples does not raise it much while again lengthening the recognition time.
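The 2.5D model construction rests on photometric stereo: several images of the same face, taken under different known light directions, determine the surface orientation at every pixel. The sketch below is a generic Lambertian photometric-stereo solver in Python/NumPy, offered only as an illustration of that idea; the thesis's own darkroom setup, light configuration, and MATLAB implementation are not reproduced here.

```python
import numpy as np

def photometric_stereo(images, lights):
    """
    Classic Lambertian photometric stereo: given K images of the same face under
    K known, distant light directions, solve I = L @ (albedo * normal) per pixel
    in a least-squares sense.

    images: array of shape (K, H, W) with pixel intensities
    lights: array of shape (K, 3), each row a unit light-direction vector
    returns: (normals, albedo) with shapes (H, W, 3) and (H, W)
    """
    k, h, w = images.shape
    intensities = images.reshape(k, -1)                        # (K, H*W)
    # Least-squares solve for g = albedo * normal at every pixel at once.
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)   # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)                         # (H*W,)
    normals = (g / np.maximum(albedo, 1e-8)).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)

# Depth gradients follow from the normals n = (nx, ny, nz):
#   dz/dx = -nx / nz,  dz/dy = -ny / nz
# and a depth map for the 2.5D model can then be obtained by integrating these
# gradients over the image grid.
```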
Finally, the Instrument Control Toolbox and a graphical user interface (GUI) in MATLAB are used to integrate the software and hardware into a real-time face recognition system.

Keywords: training sample size, photometric stereo method, Haar wavelet transform, principal component analysis, improved principal component analysis, Euclidean distance, linear discriminant analysis, nearest feature line.
The purpose of this thesis is to investigate the effect of training sample size on the performance of 2D and 2.5D face recognition. The recognition methods are formed by combining feature extraction techniques (Haar wavelet transform, principal component analysis, and improved principal component analysis) with classification techniques (Euclidean distance, the nearest feature line method, and linear discriminant analysis). This thesis seeks a suitable recognition method and, from it, derives the relationship between training sample size and recognition rate.
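Among the classifiers listed above, the nearest feature line rule may be the least familiar: a query is compared not to individual prototypes but to the line through every pair of same-class prototypes in feature space. The short Python/NumPy sketch below illustrates that distance computation with hypothetical names; it is not the thesis implementation.

```python
import numpy as np
from itertools import combinations

def nfl_classify(query, feats, labels):
    """Nearest feature line: project the query onto the line through every pair of
    prototypes of the same class and return the class of the closest such line."""
    best_label, best_dist = None, np.inf
    for label in np.unique(labels):
        protos = feats[labels == label]
        # Classes with a single prototype form no line and are skipped here.
        for p1, p2 in combinations(protos, 2):
            direction = p2 - p1
            t = np.dot(query - p1, direction) / np.dot(direction, direction)
            foot = p1 + t * direction          # foot of the projection on the feature line
            dist = np.linalg.norm(query - foot)
            if dist < best_dist:
                best_label, best_dist = label, dist
    return best_label
```

The query and prototypes would typically be the PCA (or Haar wavelet) feature vectors produced by the feature extraction stage.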
In 2D face recognition, a facial image is first captured by a CCD camera and an image pre-processing technique is applied to obtain the face region. In 2.5D face recognition, the face model is instead built with the photometric stereo method (PSM), which provides both depth and pixel values. Since the 2.5D face model is constructed in a darkroom, the pixel values are not affected by ambient light; the combination of depth and pixel values is therefore used as the feature vector for 2.5D face recognition. Simulations of 2D face recognition on the ORL (Olivetti Research Lab), GIT (Georgia Institute of Technology), CIT (California Institute of Technology), ESSEX (University of Essex), UMIST (University of Manchester Institute of Science and Technology), and the author's own databases are performed to derive the relationship between training sample size and recognition rate. The combination of improved principal component analysis and Euclidean distance gives the best recognition rate. When the training sample size is between 13 and 17, the recognition rate is over 85%; increasing the size beyond 18 improves the rate only slightly while increasing the recognition time. On a large-scale database (ESSEX), the recognition rate is over 92% for training sample sizes between 13 and 19, so the recognition rate is also stable for large databases; increasing the size to 25 does not raise the rate significantly. Thus, enlarging the training set does not yield markedly better recognition rates. The 2.5D face recognition simulations use the author's own database and the same recognition method as in the 2D case (improved principal component analysis with Euclidean distance). When the training sample size is between 14 and 17, the recognition rate is above 84% and stable; at 17 training samples it reaches 93.93%. Further increasing the training sample size does not raise the recognition rate significantly but does increase the recognition time.
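The central experiment is a sweep over training sample sizes, recording the recognition rate at each size. The sketch below outlines such a sweep in Python/NumPy, reusing the illustrative `pca_fit`, `pca_project`, and `nearest_euclidean` helpers defined earlier; the per-subject train/test split and all parameter values are assumptions for illustration, not the thesis protocol.

```python
import numpy as np

def recognition_rate_vs_train_size(faces, labels, sizes, n_components=20):
    """For each training-set size, train PCA + Euclidean nearest neighbor on the
    first `size` images of every subject and test on that subject's remaining images."""
    rates = {}
    for size in sizes:
        train_idx, test_idx = [], []
        for subject in np.unique(labels):
            idx = np.flatnonzero(labels == subject)
            train_idx.extend(idx[:size])
            test_idx.extend(idx[size:])
        mean, basis = pca_fit(faces[train_idx], n_components)
        train_feats = pca_project(faces[train_idx], mean, basis)
        test_feats = pca_project(faces[test_idx], mean, basis)
        predictions = np.array([nearest_euclidean(f, train_feats, labels[train_idx])
                                for f in test_feats])
        rates[size] = np.mean(predictions == labels[test_idx])   # recognition rate
    return rates

# e.g. rates = recognition_rate_vs_train_size(faces, labels, sizes=range(13, 20))
```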
In conclusion, the 2D and 2.5D face recognition algorithms are integrated into a real-time face recognition system using the Instrument Control Toolbox and a graphical user interface (GUI) in the MATLAB environment.

Index Terms: Training Sample Size, Photometric Stereo Method, Haar Wavelet Transform, Principal Component Analysis, Improved Principal Component Analysis, Euclidean Distance, Linear Discriminant Analysis, Nearest Feature Line.
Table of Contents
Abstract
Acknowledgments
Table of Contents
List of Tables
List of Figures
List of Symbols
Chapter 1  Introduction
1.1 Motivation and Objectives
1.2 Overview of 2D Face Recognition
1.3 Overview of 2.5D Face Recognition
1.4 Literature Review
1.4.1 Face Detection
1.4.2 Face Recognition
1.4.3 2.5D/3D Face Recognition
1.5 Thesis Organization
Chapter 2  2D Facial Image Pre-processing and 2.5D Face Construction
2.1 Image Pre-processing for Face Detection
2.1.1 Skin-Color Segmentation
2.1.2 Morphological Operations
2.1.3 Connected-Component Labeling
2.1.4 Face Detection Verification
2.2 Theoretical Basis of 2.5D Model Construction
2.2.1 Shape from Optical Projection
2.2.2 Photometric Stereo Algorithm
2.3 Summary
Chapter 3  Facial Feature Extraction and Recognition Methods
3.1 Haar Wavelet Transform
3.2 Principal Component Analysis
3.3 Improved Principal Component Analysis
3.4 Linear Discriminant Analysis
3.5 Euclidean Distance
3.6 Nearest Feature Line
3.7 Summary
Chapter 4  Simulation Results
4.1 Real-Time Face Detection
4.2 2.5D Face Model Construction Using the Photometric Stereo Algorithm
4.3 Effect of Training Sample Size on 2D and 2.5D Face Recognition
4.3.1 Overview of the Face Databases
4.3.2 2D Face Recognition Simulation Results
4.3.3 2.5D Face Recognition Simulation Results
Face Images after Feature Extraction
4.3.4 Analysis of 2D and 2.5D Face Recognition
4.4 GUI Implementation of Face Recognition
4.4.1 2D Face Recognition GUI
4.4.2 2.5D Face Recognition GUI
4.5 Hardware Implementation
4.5.1 Overview of the RS-232 Serial Standard
4.5.2 Circuit Construction
4.6 Analysis of System Test Results
4.6.1 Practical Tests of 2D Face Recognition
4.6.2 Practical Tests of 2.5D Face Recognition
4.6.3 Overall Comparison of 2D and 2.5D Face Recognition
4.7 Summary
Chapter 5  Conclusions and Future Work
5.1 Conclusions
5.2 Future Work
References
Curriculum Vitae
References
[1]W. Zhao, R. Chellappa, A. Rosenfeld, and P. J. Phillips, “Face recognition: a literature survey,” ACM Computing Surveys, vol. 35, no. 4, pp. 399-458, December 2003.
[2]M.-H. Yang, D. J. Kriegman, and N. Ahuja, “Detecting faces in images: a survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, pp. 34-58, January 2002.
[3]R. Chellappa, C. L. Wilson, and S. Sirohey, “Human and machine recognition of faces: a survey,” Proceedings of the IEEE, vol. 83, no. 5, pp. 705-741, May 1995.
[4]J.-T. Chien and C.-C. Wu, “Discriminant waveletface and nearest feature classifier for face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 12, pp. 1644-1649, December 2002.
[5]H. Wang, S. Yang, and W. Liao, “An improved PCA face recognition algorithm based on the discrete wavelet transform and the support vector machines,” International Conference on Computational Intelligence and Security Workshops, pp. 308-311, December 2007.
[6]M. Kirby and L. Sirovich, “Application of the Karhunen-Loeve Procedure for the Characterization of Human Faces,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103-108, January 1990.
[7]L. G. Shapiro and G. C. Stockman, Computer Vision, Upper Saddle River, NJ: Prentice Hall, 2001.
[8]P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. fisherfaces: recognition using class specific linear projection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
[9]H. Yin, P. Fu, and S. Meng, “Sampled two-dimensional LDA for face recognition with one training image per person,” ICICIC’06 First International Conference on Innovative Computing, Information and Control, vol. 2, pp. 113-116, August 30 - September 1, 2006.
[10]Y. Nara, J. Yang, and Y. Suematsu, “Face recognition using improved principle component analysis,” IEEE International Symposium on Micromechatronics and Human Science, pp. 77-82, October 19-22, 2003.
[11]I. Kakadiaris, G. Passalis, G. Toderici, N. Murtuza, and T. Theoharis, “3D face recognition,” British Machine Vision Conference, vol. 3, pp. 869-878, September 4-7, 2006.
[12]T. K. Leung, M. C. Burl, and P. Perona, “Finding faces in cluttered scenes using random labeled graph matching,” IEEE Fifth International Conference on Computer Vision, pp. 637-644, June 20-23, 1995.
[13]H. Wang and S.-F. Chang, “A highly efficient system for automatic face region detection in MPEG video,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, no. 4, pp. 615-628, August 1997.
[14]C. Garcia and G. Tziritas, “Face detection using quantized skin color regions merging and wavelet packet,” IEEE Transactions on Multimedia, vol. 1, no. 3, pp. 264-277, September 1999.
[15]H. A. Rowley, S. Baluja, and T. Kanade, “Neural network based face detection,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 203-208, June 18-20, 1996.
[16]K.-K. Sung and T. Poggio, “Example-based learning for view-based human face detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 39-51, January 1998.
[17]R. Brunelli and T. Poggio, “Face recognition features versus templates,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 10, pp. 1042-1052, October 1993.
[18]M. A. Turk and A. P. Pentland, “Eigenfaces for recognition,” Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[19]K.-M. Lam and H. Yan, “An analytic-to-holistic approach for face recognition based on a single frontal view,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 7, pp. 673-686, July 1998.
[20]B.-J. Oh, “Face recognition by using neural network classifiers based on PCA and LDA,” IEEE International Conference on Systems, Man and Cybernetics, pp. 1699-1703, October 2005.
[21]R. J. Woodham, “Photometric method for determining surface orientation from multiple images,” Optical Engineering, vol. 19, no. 1, pp. 139-144, January 1980.
[22]A. Tomita, Jr. and R. Ishii, “Determining the orientation of a person’s hand by using the photometric stereo method,” Proceedings of IECON’96, vol. 1, August 5-10, 1996.
[23]X. Lu, D. Colbry, and A. K. Jain, “Three-dimensional model based face recognition,” Proceedings of the 17th International Conference on Pattern Recognition (ICPR), vol. 1, pp. 362-366, August 2004.
[24]X. Lu, D. Colbry, and A. K. Jain, “Matching 2.5D face scans to 3D models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 1, pp. 31-43, January 2006.
[25]W. A. P. Smith and E. R. Hancock, “Face recognition using 2.5D shape information,” Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 1407-1414, 2006.
[26]S. Negahdaripour, H. Zhang, and X. Han, “Investigation of photometric stereo method for 3-D shape recovery from underwater imagery,” Oceans’02 MTS/IEEE, vol. 2, pp. 1010-1017, October 29-31, 2002.
[27]鄭詔元, 3-D Model Reconstruction Using the Photometric Stereo Method with Three Light Sources, Master's thesis, Department of Electrical Engineering, I-Shou University, 2001.
[28]劉宗諭, Implementation and Application of a 3D Reconstruction System Using the Photometric Stereo Method, Master's thesis, Department of Electrical Engineering, I-Shou University, 2002.
[29]曾裕山, Face Recognition Using 3D Image Information, Master's thesis, Department of Electrical Engineering, I-Shou University, 2003.
[30]黃牧常, 3D Face Recognition Using the Discrete Cosine Transform, Master's thesis, Department of Electrical Engineering, I-Shou University, 2004.
[31]L. Wang, Y. Zhang, and J. Feng, “On the Euclidean distance of images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1334-1339, August 2005.
[32]S. Z. Li and J. Lu, “Face recognition using the nearest feature line method,” IEEE Transactions on Neural Networks, vol. 10, no. 2, pp. 439-443, March 1999.
[33]http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html, (01/08/2009 retrieved).
[34]http://www.anefian.com/face_reco.htm, (01/08/2009 retrieved).
[35]http://www.vision.caltech.edu/html-files/archive.html, (01/08/2009 retrieved).
[36]http://cswww.essex.ac.uk/mv/allfaces/index.html, (07/06/2009 retrieved).
[37]http://www.shef.ac.uk/eee/research/vie/research/face.html, (07/06/2009 retrieved).
[38]R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, 1992.
[39]楊煒達, A Face Recognition System with Few Samples Based on Simple Methods, Master's thesis, Department of Computer Science and Information Engineering, National Central University, 2007.
[40]J.-H. Lai, P.-C. Yuen, W.-S. Chen, S. Lao, and M. Kawade, “Robust facial feature point detection under nonlinear illuminations,” IEEE ICCV Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, pp. 168-174, July 2001.
[41]C.-Y. Chen, R. Klette, and C.-F. Chen, “Improved fusion of photometric stereo and shape from contours,” Computer Science Department of The University of Auckland CITR at Tamaki Campus, CITR-TR-95, pp. 1-8, August 2001.
[42]C.-Y. Chen, R. Klette, and C.-F. Chen, “3D reconstruction using shape from photometric stereo and contours,” CITR, Tamaki Campus, The University of Auckland, New Zealand, pp. 1-5, November 2003.
[43]C.-Y. Chen, R. Klette, and C.-F. Chen, “Recovery of coloured surface reflectances using the photometric stereo method,” Computer Science Department of The University of Auckland CITR at Tamaki Campus, CITR-TR-117, pp. 1-6, August 2002.
[44]K. V. Rajaram, G. Parthasarathy, and M. A. Faruqi, “A neural network approach to photometric stereo inversion of real-world reflectance maps for extracting 3-D shapes of objects,” IEEE Transactions on System, Man and Cybernetics, vol. 25, no. 9, pp. 1289-1300, September 1995.
[45]王科翔, Multiple Face Detection and Recognition System, Master's thesis, Department of Engineering Science, National Cheng Kung University, 2005.
[46]廖俊能, Analysis and Comparison of Image Recognition: Faces and Seals as Examples, Master's thesis, Department of Systems and Naval Mechatronic Engineering, National Cheng Kung University, 2008.
[47]白中和, Applications of RS-232C Interface Technology, Chuan Hwa Book Co., January 1989.
[48]http://www.lammertbies.nl/comm/info/RS-232_specs.html, (01/08/2009 retrieved)
[49]http://zh.wikipedia.org/w/index.php?title=RS-232&variant=zh-tw, (01/08/2009 retrieved)