National Digital Library of Theses and Dissertations in Taiwan

Author: 郭振輝
Author (English): Chen Hui Kuo
Title: 兩影像投射轉換法於人臉辨識之研究
Title (English): Face Recognition Based on a Two-View Projective Transformation
Advisor: 李建德
Advisor (English): J. D. Lee
Degree: Doctoral
Institution: Chang Gung University (長庚大學)
Department: Electrical Engineering (電機工程學系)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2012
Graduation academic year: 100 (ROC calendar)
Pages: 112
Keywords (Chinese): 人臉辨識、特徵選取、兩影像幾何學、支援向量機
Keywords (English): Face Recognition, Feature Selection, Two-View Geometry, Support Vector Machine
Usage statistics:
  • Cited by: 0
  • Views: 285
  • Downloads: 63
  • Bookmarked: 0
In this dissertation, we propose two novel face recognition algorithms: a cascade of classifiers that performs recognition with a coarse-to-fine strategy using multiple training samples per subject, and a method based on a two-view projective transformation that requires only one sample per subject. The cascade scheme combines support vector machines (SVM), the Eigenface method, and a two-view projective transformation in multiple stages. The whole decision process proceeds through cascaded coarse-to-fine stages. The first and second stages use SVMs with the one-against-all (OAA) and one-against-one (OAO) strategies to pick out the two classes with the least variation from the test image. From these two classes, the third stage uses the Eigenface method to decide the priority of images for a fine match, and the final stage applies the two-view projective transformation to select the class with the greatest geometric similarity to the test image. This multi-stage face recognition system has been tested on the Olivetti Research Laboratory (ORL), Yale, and Institute of Information Science (IIS) databases, and the experimental results show that the proposed approach is more accurate than previous approaches based on a single classifier. A drawback of SVM is that it cannot build a recognition model from a single sample per subject. To address this problem, we propose a second novel face recognition algorithm based on the two-view projective transformation, called the robust estimation system (RES). Our approach adopts both local and global information for robust estimation. We use the original images from the ORL and Yale databases for performance evaluation, while the images of the FERET database are pre-processed to extract the face region and apply an affine transformation.
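The coarse-to-fine SVM stages described above can be sketched as follows. This is a minimal illustration with synthetic stand-in feature vectors (the dimensionality, kernel, and class counts are assumptions, not the thesis's settings), using scikit-learn's SVC for both the OAA and OAO stages:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for per-subject feature vectors: 5 subjects,
# 8 images each, 30-dimensional features (hypothetical sizes).
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(c, 1.0, size=(8, 30)) for c in range(5)])
y = np.repeat(np.arange(5), 8)

# Stage 1 (coarse, OAA): keep the two classes with the highest
# one-vs-rest decision scores instead of committing to one prediction.
oaa = SVC(kernel="linear", decision_function_shape="ovr").fit(X, y)
probe = X[0] + rng.normal(0, 0.3, size=30)      # noisy test image of subject 0
scores = oaa.decision_function([probe])[0]
top2 = np.argsort(scores)[-2:][::-1]            # two surviving candidate classes

# Stage 2 (finer, OAO): a binary SVM restricted to the two survivors.
mask = np.isin(y, top2)
oao = SVC(kernel="linear").fit(X[mask], y[mask])
candidate = oao.predict([probe])[0]
```

In the full cascade, `candidate` and the runner-up would then be passed to the Eigenface and two-view projective transformation stages for the fine geometric match.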
We roughly divide each face image into the four blocks most significant for a face: left eye, right eye, nose, and mouth. The features used here are the magnitudes of first-order gradients. In the classification stage, local features are putatively matched before RANSAC robust estimation is applied to the global features, with the aim of identifying the fundamental matrix between two matched face images. Finally, similarity scores are calculated, and the candidate with the highest score is designated the correct subject. Experiments on the FERET, ORL, and Yale databases demonstrate the efficiency of the proposed method; the results show that our algorithm substantially improves recognition performance compared with existing methods.
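The RANSAC step that estimates the fundamental matrix between two matched images can be sketched as follows. This is a from-scratch illustration on noise-free synthetic correspondences, using the normalized 8-point algorithm and a simple algebraic residual; the thesis's actual feature matcher, thresholds, and scoring details are not reproduced here:

```python
import numpy as np

def fundamental_8pt(x1, x2):
    """Normalized 8-point algorithm (Hartley): solve x2_h^T F x1_h = 0."""
    def normalize(x):
        c = x.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(x - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
        return np.c_[x, np.ones(len(x))] @ T.T, T
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
                  for (u1, v1, _), (u2, v2, _) in zip(p1, p2)])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt   # enforce rank 2
    F = T2.T @ F @ T1                         # undo the normalization
    return F / np.linalg.norm(F)

def ransac_inlier_mask(x1, x2, n_iter=200, tol=1e-6, seed=0):
    """Mark correspondences consistent with the best fundamental matrix found."""
    rng = np.random.default_rng(seed)
    h1 = np.c_[x1, np.ones(len(x1))]
    h2 = np.c_[x2, np.ones(len(x2))]
    best = np.zeros(len(x1), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(x1), 8, replace=False)
        F = fundamental_8pt(x1[idx], x2[idx])
        r = np.abs(np.einsum('ij,jk,ik->i', h2, F, h1))  # algebraic residual
        inliers = r < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Noise-free synthetic two-view data (assumed setup, not the thesis's images):
rng = np.random.default_rng(1)
Xw = rng.uniform(-1, 1, size=(40, 3)) + np.array([0.0, 0.0, 4.0])
th = 0.2                                      # small rotation between views
R = np.array([[np.cos(th), 0, np.sin(th)],
              [0, 1, 0],
              [-np.sin(th), 0, np.cos(th)]])
t = np.array([0.5, 0.0, 0.0])
x1 = Xw[:, :2] / Xw[:, 2:]                    # view 1: camera at the origin
Xc2 = Xw @ R.T + t
x2 = Xc2[:, :2] / Xc2[:, 2:]                  # view 2
x2[30:] = rng.uniform(-1, 1, size=(10, 2))    # corrupt 10 matches: outliers
mask = ransac_inlier_mask(x1, x2)
score = mask.sum() / len(mask)                # inlier fraction as a similarity score
```

Counting the surviving RANSAC matches, as in the final scoring step, then reduces to the inlier fraction `score`; the pair of images with the highest score would be declared the same subject.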
Contents
Advisor's Recommendation ……………………………………………………………
Oral Defense Committee Approval ……………………………………………………
Copyright Authorization ………………………………………………………………iii
Acknowledgements ……………………………………………………………………v
CHAPTER I Introduction ………………………………………………………… 1
1.1 Background …………………………………………………………………1
1.2 Review the Methods of Feature Selection for Face Recognition …………2
1.3 Review the Methods of Classifier for Face Recognition …………………6
1.4 Objective of the Cascade Multi-Stage Classifiers System…………………8
1.5 Objective of the Robust Estimation System ………………………………10
CHAPTER II Background of the Two-View Projective Transformation …………14
2.1 Introduction ………………………………………………………………14
2.2 Two-View Projective Transformation ……………………………………15
2.3 Computing the Fundamental Matrix F ……………………………………16
2.4 Automatic Computation of the Fundamental Matrix F …………………18
2.5 Summary and Discussion …………………………………………………21
CHAPTER III Cascade Multi-Stage Classifier System for Face Recognition …24
3.1 Introduction ………………………………………………………………24
3.2 Feature Selection …………………………………………………………25
3.2.1 Data Sampling ……………………………………………………25
3.2.2 Data Transform ……………………………………………………26
3.2.3 Feature Vector Extraction …………………………………………29
3.2.4 Feature Selection in Our System …………………………………30
3.3 Classification Methods …………………………………………………30
3.3.1 Support Vector Machine for Binary Classifier ……………………30
3.3.2 One-Against-All (OAA) of SVM for Multi-Class Models ………34
3.3.3 One-Against-One (OAO) of SVM for Multi-Class Models ………35
3.3.4 Eigenface Method …………………………………………………36
3.3.5 Our Novel scheme of a Multi-Stage Classifier System ……………38
3.4 Experimental Results ……………………………………………………46
3.4.1 Face Recognition on ORL Database ………………………………47
3.4.2 Comparison with Previous Reported Results on ORL ……………48
3.4.3 Face Recognition on Yale Database ………………………………48
3.4.4 Comparison with Previous Reported Results on Yale ……………49
3.4.5 Face Recognition on the IIS Database ……………………………50
3.5 Summary and Discussion ………………………………………………50
CHAPTER IV Two-View Projective Transformation Using One Sample per Subject …… 55
4.1 Introduction ………………………………………………………………55
4.2 The Novel Architecture of a Robust Estimation System…………………55
4.2.1 Extracting Global and Local Features ……………………………57
4.2.2 Locating Putative Matches of Local Features ……………………58
4.2.3 Defining RANSAC Matches with Global Features ………………59
4.2.4 Calculating the Similarity Score …………………………………60
4.3 Experiments and Results …………………………………………………62
4.3.1 Experiment Setup …………………………………………………63
4.3.2 Parameters used in RES Method …………………………………65
4.3.3 Results for the FERET Database …………………………………69
4.3.4 Results for the ORL and Yale Databases …………………………70
4.4 Summary and Discussion ………………………………………………73
CHAPTER V Conclusions ………………………………………………………77
REFERENCES ………………………………………………………………………79
APPENDIX …………………………………………………………………………92


List of Figures
Fig. 2.1 Geometry of corresponding points. ………………………………16
Fig. 3.1 Data sampling: (a) top-bottom scan, (b) raster scan. …………26
Fig. 3.2 Facial image and its DCT transformed image: (a) original image, (b) 2D plot after 2D-DCT, (c) 3D plot after 2D-DCT. ………………………………………………………………28
Fig. 3.3 Scheme of the zig-zag method to extract 2D-DCT coefficients into a 1D vector or n × m matrix. ………………………………………29
Fig. 3.4 Classification between two classes W1 and W2 using hyperplanes: (a) arbitrary hyperplanes l, m, and n; (b) the optimal separating hyperplane with the largest margin, identified by the dashed line passing through the two support vectors. …………31
Fig. 3.5 Structure of the face recognition system. (a) training phase (b) testing phase. …………………………………………………………40
Fig. 3.6 (a) The weight vector T = [w1 w2 … w9] of the input image. (b) The Euclidean distances between the input image and ten training samples. Samples 1–5 belong to the same class as the input; samples 6–10 belong to other classes. …42
Fig. 3.7 Four steps of spatial matching using the RANSAC method to find matched and unmatched feature points: (a) find Harris corner feature points in one test image and two training images; (b) find putative matches between the test and training images; (c) use RANSAC to find matched feature points between the test and training images; (d) count the numbers of matched and unmatched feature points. ……………………………………………………45
Fig. 3.8 Some sample images from the publicly available face databases used in the experiments: (a) ORL face database, (b) Yale face database, (c) IIS face database. …………………………………46
Fig. 3.9 Comparison of recognition error versus the number of features of the OAA-SVM, OAO-SVM, Eigenface, and final stage of the Multi-stage classifier system on the ORL face database. ………………………………………………………………52
Fig. 3.10 Comparison of recognition error versus the number of features of the OAA-SVM, OAO-SVM, Eigenface, and final stage of the Multi-stage classifier system on the Yale face database. …53
Fig. 3.11 Comparison of recognition error versus the number of features of the OAA-SVM, OAO-SVM, Eigenface, and final stage of the Multi-stage classifier system on the IIS face database. …54
Fig. 4.1 System architecture for RES ……………………………………56
Fig. 4.2 The procedure of feature extraction. (a) Input image and define the block images with correlated positions. (b) Determine the local feature points in each block image. (c) Develop the global feature information. …………………………………………………57
Fig. 4.3 The RANSAC matches (a) Distinct subject (b) Identical subject ……………………………………………………………………61
Fig. 4.4 Definition of the parameters of distance d and angle θ between RANSAC-matched feature points. ……………………………62
Fig. 4.5 Some sample images from publicly available face database used in the experiments. (a) FERET database. The first row is original images and the second row is pre-processed images. (b) ORL database. (c) Yale database. ………………………………65
Fig. 4.6 The cumulative scores of the RES and control algorithms on the fb, fc, dup I and dup II probe sets. (a) fb set. (b) fc set. (c) dup I set. (d) dup II set. …………………………………………70
Fig. 4.7 Performance analysis of non-frontal facial variation, with/without glasses factor, and image noise ………………75
Fig. 4.8 Error analysis. First row: probe images. Second row: error results with the most similar gallery images. Third row: ground-truth gallery images ………………………………………………76

List of Tables
Table 2.1 The distance threshold for a probability of α = 0.95 that a point (correspondence) is an inlier. ………22
Table 2.2 The number N of samples required to ensure, with a probability p = 0.99, that at least one sample has no outliers, for a given sample size s and proportion of outliers ε. …………23
Table 3.1 Recognition performance comparison of different approaches (ORL database) …………………………………………………52
Table 3.2 Recognition performance comparison of different approaches (Yale database) …………………………………………………53
Table 3.3 Recognition performance comparison of different approaches (IIS database) …………………………………………………54
Table 4.1 Average trial numbers of identical subjects for RANSAC robust estimation (FERET database) ………………………67
Table 4.2 Average trial numbers of distinct subjects for RANSAC robust estimation (FERET database) ………………………67
Table 4.3 Feature numbers of identical subject in gallery and probe sets (FERET database, the magnitude of first-order gradients: T = 20) ………………………………………………………………68
Table 4.4 The recognition rates of the RES and comparison algorithms (FERET database) ……………………………………………69
Table 4.5 Recognition performance on ORL database in deterministic manner. …………………………………………………………72
Table 4.6 Recognition performance on YALE database in deterministic manner. …………………………………………………………72
Table 4.7 Recognition performance on ORL and YALE databases in random manner. ………………………………………………72
Table 4.8 The recognition rates of the RES and comparison algorithms in deterministic manner (ORL database) …………………73
Table 4.9 The recognition rates of the RES and comparison algorithms in deterministic manner (Yale database) …………………73
Table 4.10 The recognition rates of the RES and comparison algorithms in random manner (ORL and Yale databases) ……………73
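The sample counts listed in Table 2.2 follow the standard RANSAC trial-count formula N = ⌈log(1 − p) / log(1 − (1 − ε)^s)⌉, where p is the desired probability of drawing at least one outlier-free sample, s the sample size, and ε the outlier fraction. A quick check, assuming that standard formula is the one tabulated:

```python
import math

def ransac_trials(p=0.99, s=8, eps=0.5):
    """Number of RANSAC samples N ensuring, with probability p, that at
    least one sample of size s is outlier-free given outlier fraction eps."""
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - eps) ** s))

print(ransac_trials(p=0.99, s=8, eps=0.5))  # 1177 trials for the 8-point case
```

For the 8-point fundamental matrix estimate with half the matches being outliers, 1177 trials suffice, which is why the thesis's classifier can afford to run RANSAC per image pair.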
[1] A. Samal, P.A. Iyengar, “Automatic recognition and analysis of human faces and facial expressions: a survey”, Pattern Recognition, vol. 25, no. 1, pp. 65-77, Jan. 1992.
[2] D. Valentin, H. Abdi, A.J. O’Toole, G.W. Cottrell, “Connectionist models of face processing: a survey”, Pattern Recognition, vol. 27, no. 9, pp. 1209-1230, Sep. 1994.
[3] A.F. Abate, M. Nappi, D. Riccio, G. Sabatino, “2D and 3D face recognition: A survey”, Pattern Recognition Letters, vol. 28, no. 14, pp. 1885-1906, Oct. 2007.
[4] W. Zhao, R. Chellappa, A. Rosenfeld, P.J. Phillips, “Face recognition: A literature survey”, ACM Computing Surveys, vol. 35, no. 4, pp. 399-458, Dec. 2003.
[5] S.Z. Li, R.F. Chu, S.C. Liao, L. Zhang, “Illumination invariant face recognition using near-infrared images”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 627-639, Apr. 2007.
[6] J. Wright, A.Y. Yang, A. Ganesh, S.S. Sastry, Y. Ma, “Robust face recognition via sparse representation”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 2, pp. 210-227, Feb. 2009.
[7] R. Brunelli, T. Poggio, “Face recognition: Features versus templates”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 10, pp. 1042-1053, Oct. 1993.
[8] L. Wiskott, J.M. Fellous, N. Kruger, C. von der Malsburg, “Face recognition by elastic bunch graph matching”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 775-779, Jul. 1997.
[9] M. Turk, A. Pentland, “Eigenfaces for recognition”, Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[10] B. Raytchev, H. Murase, “Unsupervised face recognition by associative chaining”, Pattern Recognition, vol. 36, no. 1, pp. 245-257, Jan. 2003.
[11] S. Pang, D. Kim, S.Y. Bang, “Membership authentication in the dynamic group by face classification using SVM ensemble”, Pattern Recognition Letters, vol. 24, no. 1-3, pp. 215- 225, Jan. 2003.
[12] J. Lu, K.N. Plataniotis, A.N. Venetsanopoulos, “Face recognition using kernel direct discriminant analysis algorithms”, IEEE Trans. Neural Networks, vol. 14, no. 1, pp. 117-126, Jan. 2003.
[13] P. Belhumeur, J. Hespanha, D. Kriegman, “Eigenfaces vs. fisherfaces: recognition using class specific linear projection”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 711-720, Jul. 1997.
[14] M. Bartlett, J. Movellan, T. Sejnowski, “Face recognition by independent component analysis”, IEEE Trans. Neural Network, vol. 13, no. 6, pp. 1450-1464, Nov. 2002.
[15] J. Lai, P. Yuen, G. Feng, “Face recognition using holistic fourier invariant features”, Pattern Recognition, vol. 34, no. 1, pp. 95-109, Jan. 2001.
[16] X.Y. Jing, H.S. Wong, D. Zhang, “Face recognition based on discriminant fractional Fourier feature extraction”, Pattern Recognition Letters, vol. 27, no. 13, pp. 1465-1471, Oct. 2006.
[17] Z. Hafed, M. Levine, “Face recognition using the discrete cosine transform”, Int. J. Computer Vision, vol. 43, no. 3, pp. 167-188, July 2001.
[18] A. Penev, J. Atick, “Local feature analysis: A general statistical theory for object representation”, Network: Comput. Neural Syst., vol. 7, pp. 477-500, 1996.
[19] T. Ahonen, A. Hadid, M. Pietikainen, “Face description with local binary patterns: Application to face recognition”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 12, pp. 2037-2041, Dec. 2006.
[20] X. Zhang, Y. Jia, “Face recognition with local steerable phase feature”, Pattern Recognition Letters, vol. 27, no. 16, pp. 1927-1933, Dec. 2006.
[21] K.C. Kwak, W. Pedrycz, “Face recognition: A study in information fusion using fuzzy integral”, Pattern Recognition Letters, vol. 26, no. 6, pp. 719-733, May 2005.
[22] D. Zhou, X. Yang, N. Peng, Y. Wang, “Improved-LDA based face recognition using both facial global and local information”, Pattern Recognition Letters, vol. 27, no. 6, pp. 536-543, Apr. 2006.
[23] A.N. Rajagopalan, K.S. Rao, Y.A. Kumar, “Face recognition using multiple facial features”, Pattern Recognition Letters, vol. 28, no. 3, pp. 335-341, Feb. 2007.
[24] M.J. Er, W. Chen, S. Wu, “High-speed face recognition based on discrete cosine transform and RBF neural networks”, IEEE Trans. Neural Networks, vol. 16, no. 3, pp. 679-691, May 2005.
[25] C. Xiang, X. Fan, A.T.H. Lee, “Face recognition using recursive Fisher linear discriminant”, IEEE Trans. on Image Processing, vol. 15, no. 8, pp. 2097-2105, Aug. 2006.
[26] G. Guo, S.Z. Li, K.L. Chan, “Support vector machines for face recognition”, Image and Vision Computing, vol. 19, pp. 631-638, Aug. 2001.
[27] H. Othman, T. Aboulnasr, “A separable low complexity 2D HMM with application to face recognition”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 10, pp. 1229-1238, Oct. 2003.
[28] J. Lu, K.N. Plataniotis, A.N. Venetsanopoulos, S.Z. Li, “Ensemble-based discriminant learning with boosting for face recognition”, IEEE Trans. on Neural Networks, vol. 17, no. 1, pp. 166-178, Jan. 2006.
[29] K.C. Kwak, W. Pedrycz, “Face recognition: A study in information fusion using fuzzy integral”, Pattern Recognition Letters, vol. 26, pp. 719-733, May 2005.
[30] Z.Q. Zhao, D.S. Huang, B.Y. Sun, “Human face recognition based on multi-features using neural networks committee”, Pattern Recognition Letters, vol. 25, no. 12, pp. 1351-1358, Sep. 2004.
[31] A. Lemieux, M. Parizeau, “Flexible multi-classifier architecture for face recognition systems”, in 16th Int. Conf. on Vision Interface, pp. 1-8, Jun. 2003.
[32] X. Tan, S. Chen, Z.H. Zhou, F. Zhang, “Face recognition from a single image per person: A survey”, Pattern Recognition, vol. 39, no. 9, pp. 1725-1745, Sep. 2006.
[33] A.M. Martinez, “Recognition imprecisely localized, partially occluded, and expression variant faces from a single sample per class”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 6, pp. 748-763, Jun. 2002.
[34] D. Zhang, S. Chen, Z.H. Zhou, “A new face recognition method based on SVD perturbation for single example image per person”, Applied Mathematics and Computation, vol. 163, no. 2, pp. 895-907, Apr. 2005.
[35] X. Xie, K.M. Lam, “Face recognition using elastic local reconstruction based on a single face image”, Pattern Recognition, vol. 41, no. 1, pp. 406-417, Jan. 2008.
[36] Y.M. Chen, J.H. Chiang, “Face recognition using combined multiple feature extraction based on Fourier-Mellin approach for single example image per person”, Pattern Recognition Letters, vol. 31, no. 13, pp. 1833-1841, Oct. 2010.
[37] W. Deng, J. Hu, J. Guo, W. Cai, D. Feng, “Robust, accurate and efficient face recognition from a single training image: A uniform pursuit approach”, Pattern Recognition, vol. 43, no. 5, pp. 1748-1762, May 2010.
[38] R. Hartley, A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2nd ed., 2003.
[39] P.H.S. Torr, “The development and comparison of robust methods for estimating the fundamental matrix”, Int. J. Computer Vision, vol. 24, no. 3, pp. 271-300, Sep. 1997.
[40] P.H.S. Torr, “MLESAC: A new robust estimator with application to estimating image geometry”, Computer Vision and Image Understanding, vol. 78, pp. 138-156, Apr. 2000.
[41] A.S. Brahmachari, S. Sarkar, “BLOGS: Balanced local and global search for non-degenerate two view epipolar geometry”, IEEE 12th Int. Conf. on Computer Vision, Kyoto, Japan, pp. 1685-1692, Oct. 2009.
[42] F. Teng, X.H. Liang, Z.Y. He, G.L. Hua, “A registration method based on nature feature with KLT tracking algorithm for wearable computers”, Int. Conf. on Cyberworlds, Hangzhou, China, pp. 416-421, Sep. 2008.
[43] C. Takada, Y. Sugaya, “Detecting Incorrect Feature Tracking by Affine Space Fitting”, PSIVT, LNCS 5414, Tokyo, Japan, pp. 191-202, Jan. 2009.
[44] C.H. Kuo, J.D. Lee, T.J. Chan, “A novel multi-stage classifier for face recognition”, 8th Asian Conf. on Computer Vision, LNCS 4844, Tokyo, Japan, pp. 631-640, Nov. 2007.
[45] P. Dreuw, P. Steingrube, H. Hanselmann, H. Ney, “SURF-face: Face recognition under viewpoint consistency constraints”, British Machine Vision Conference, BMVA Press, London, UK, pp. 7.1-7.11, Sep. 2009.
[46] ORL face database, http://www.uk.rearch.att.com/facedatabase.html.
[47] Yale face database, http://cvc.yale.edu/projects/yalefaces/yalefaces.html
[48] IIS face database, http://smart.iis.sinica.edu.tw/.
[49] A.V. Nefian, M.H. Hayes III, “Hidden Markov models for face recognition”, in Proc. IEEE Int. Conf. Acoustic, Speech, and Signal Processing, 5, pp. 2721-2724, May 1998.
[50] V.V. Kohir, U.B. Desai, “Face recognition using a DCT-HMM approach”, in Proc. 4th IEEE Int. Conf. Application of Computer Vision, pp. 226-231, Oct. 1998.
[51] M. Bicego, U. Castellani, V. Murino, “Using hidden Markov models and wavelets for face recognition”, in Proc. 12th IEEE Int. Conf. Image Analysis and Processing, pp. 52-56, Sep. 2003.
[52] V. Vapnik, Statistical Learning Theory, John Wiley & Sons, Inc., 1998.
[53] D. Price, S. Knerr, L. Personnaz, G. Dreyfus, “Pairwise neural network classifiers with probabilistic outputs”, in Conf. on Advances in Neural Information Processing Systems 7 (NIPS*94), MIT Press, Cambridge, MA, pp. 1109-1116, 1995.
[54] T. Hastie, R. Tibshirani, “Classification by pairwise coupling”, in Conf. on Advances in Neural Information Processing Systems, MIT Press, Cambridge, MA, pp. 507-513, 1998.
[55] C. Harris, M. Stephens, “A combined corner and edge detector”, In 4th Alvey Vision Conference, pp. 147-151, 1988.
[56] M.A. Fischler, R.C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography”, Communications of the ACM, vol. 24, no. 6, pp. 381-395, 1981.
[57] C.H. Chen, C.T. Chu, “High efficiency feature extraction based on 1-D wavelet transform for real-time face recognition”, WSEAS Trans. on Information Science and Application, vol. 1, pp. 411-417, 2004.
[58] B. Li, Y. Liu, “When eigenfaces are combined with wavelets”, Knowledge-Based Systems, vol. 15, pp. 343-347, Dec. 2002.
[59] T. Phiasai, S. Arunrungrusmi, K. Chamnongthai, “Face recognition system with PCA and moment invariant method”, in Proc. of the IEEE International Symposium on Circuits and Systems, vol. 2, pp. 165-168, May 2001.
[60] J. Lu, K.N. Plataniotis, A.N. Venetsanopoulos, “Face recognition using LDA-based algorithms”, IEEE Trans. on Neural Networks, vol. 14, pp. 195-200, Jan. 2003.
[61] J. Yang, D. Zhang, A.F. Frangi, J.Y. Yang, “Two-dimensional PCA: a new approach to appearance-based face representation and recognition”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, pp. 131-137, Jan. 2004.
[62] K.C. Kwak, W. Pedrycz, “Face recognition using a fuzzy fisherface classifier”, Pattern Recognition, vol. 38, pp. 1717-1732, Oct. 2005.
[63] J.T. Chien, C.C. Wu, “Discriminant waveletfaces and nearest feature classifiers for face recognition”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, pp. 1644-1649, Dec. 2002.
[64] X. Jing, Y.D. Zhang, “A face and palmprint recognition approach based on discriminant DCT feature extraction”, IEEE Trans. on Systems, Man, and Cyb., vol. 34, no. 6, pp. 2405-2415, Dec. 2004.
[65] C.T. Chu, C.H. Chen, J.H. Dai, “Multiple facial features representation for real-time face recognition”, Journal of Information Science and Engineering, vol. 22, pp. 1601-1610, 2006.
[66] J.P. Lewis, “Fast template matching”. Vision Interface 95, Quebec, Canada, pp. 120-123, May 1995.
[67] F. Klein, Elementary mathematics from an advanced standpoint: geometry, Macmillan, New York, 1939.
[68] J. Li, J.S. Pan, “A novel pose and illumination robust face recognition with a single training image per person algorithm”, Chinese Optics Letters, vol. 6, no. 4, pp. 255-257, 2008.
[69] A.V. Nefian, M.H. Hayes, “Face detection and recognition using hidden Markov models”, In Proc. IEEE Int. Conf. Image Processing, pp. 141-145, Oct. 1998.
[70] V.V. Kohir, U.B. Desai, “Face recognition using a DCT-HMM approach”, In Proc. 4th IEEE Int. Conf. Application of Computer Vision, pp. 226-231, Oct. 1998.
[71] M. Bicego, U. Castellani, V. Murino, “Using hidden Markov models and wavelets for face recognition”, In Proc. 12th IEEE Int. Conf. Image Analysis and Processing, pp. 52-56, Sep. 2003.
[72] F. Samaria, A. Harter, “Parameterisation of a stochastic model for human face identification”, In Proc. 2nd IEEE Int. Conf. Application of Computer Vision, pp. 138-142, Dec. 1994.
[73] A. Grossmann, J. Morlet, “Decomposition of Hardy functions into square integrable wavelets of constant shape”, SIAM J. Math. Anal., vol. 15, pp. 723-736, 1984.
[74] C. Nastar, N. Ayach, “Frequency-based nonrigid motion analysis”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, pp. 1067-1079, Nov. 1996.
[75] J.C. Principe, N.R. Euliano, W.C. Lefebvre, Neural and Adaptive Systems, John Wiley & Sons, Inc., 1999.
[76] M.S. Kim, D. Kim, S.Y. Lee, “Face recognition using the embedded HMM with second-order block-specific observations”, Pattern Recognition, vol. 36, pp. 2723-2735, Nov. 2003.
[77] S. Eickeler, S. Birlinghoven, “Face database retrieval using pseudo 2D hidden Markov models”, in Proc. of Fifth IEEE Int. Conf. on Automatic Face and Gesture Recognition, pp. 58-63, May 2002.
[78] C. W. Hsu, and C. J. Lin, “A comparison of methods for multi-class support vector machines”, IEEE Trans. Neural Network, vol. 13, no. 2, pp. 415-425, Mar. 2002.
[79] J.C. Platt, N. Cristianini, J. Shawe-Taylor, “Large margin DAGs for multiclass classification”, In Advances in Neural Information Processing Systems, MIT Press, vol. 12, pp. 547-553, 2000.
[80] G. Ratsch, T. Onoda, K. R. Muller, “Soft margins for adaBoost”, Machine Learning, vol. 42, pp. 287-320, 2001.
[81] L. Bottou, C. Cortes, J. Denker, H. Drucker, I. Guyon, L. Jackel, Y. LeCun, U. Muller, E. Sackinger, P. Simard, and V. Vapnik, “Comparison of classifier methods: a case study in handwriting digit recognition”, In International Conference on Pattern Recognition, IEEE Computer Society Press, pp. 77-87, Oct. 1994.
[82] J.H. Lai, P.C. Yuen, G.C. Feng, “Face recognition using holistic Fourier invariant features”, Pattern Recognition, vol. 34, pp. 95-109, Jan. 2001.
[83] Y. Gao, M.K.H. Leung, “Line segment Hausdorff distance on face matching”, Pattern Recognition, vol. 35, no. 2, pp. 361-371, Feb. 2002.
[84] S. Arca, P. Campadelli, R. Lanzarotti, “A face recognition system based on automatically determines facial fiducial points”, Pattern Recognition, vol. 39, no. 3, pp. 432-443, Mar. 2006.
[85] C.D. Castillo, D.W. Jacobs, “Using stereo matching with general epipolar geometry for 2D face recognition across pose”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 12, pp. 2298-2304, Dec. 2009.
[86] H. Shin, S.D. Kim, H.C. Choi, “Generalized elastic graph matching for face recognition”, Pattern Recognition Letters, vol. 28, no. 9, pp. 1077-1082, July 2007.
[87] X. Xie, K.M. Lam, “Elastic shape-texture matching for human face recognition”, Pattern Recognition, vol. 41, no. 1, pp. 396-405, Jan. 2008.
[88] S. Zhao, Y. Gao, B. Zhang, “Gabor feature constrained statistical model for efficient landmark localization and face recognition”, Pattern Recognition Letters, vol. 30, no. 10, pp. 922-930, July 2009.
[89] M.S. Bartlett, J.R. Movellan, T.J. Sejnowski, “Face recognition by independent component analysis”, IEEE Trans. on Neural Networks, vol. 13, no. 6, pp. 1450-1464, Nov. 2002.
[90] B. Li, Y. Liu, “When eigenfaces are combined with wavelets”, Knowledge-Based Systems, vol. 15, no. 5-6, pp. 343-347, July 2002.
[91] S.W. Lee, J. Park, S.W. Lee, “Low resolution face recognition based on support vector data description”, Pattern Recognition, vol. 39, no. 9, pp. 1809-1812, Dec. 2006.
[92] G. Dai, D.Y. Yeung, Y.T. Qian, “Face recognition using kernel fractional-step discriminant analysis algorithm”, Pattern Recognition, vol. 40, no. 1, pp. 229-243, Jan. 2007.
[93] W. Yu, “Two-dimension discriminant locality preserving projections for face recognition”, Pattern Recognition Letters, vol. 30, no. 15, pp. 1378-1383, Nov. 2009.
[94] S. Dabbaghchian, M.P. Ghaemmaghami, A. Aghagolzadeh, “Feature extraction using discrete cosine transform and discrimination power analysis with a face recognition technology”, Pattern Recognition, vol. 43, no. 4, pp. 1431-1440, Apr. 2010.
[95] Y.W. Wong, K.P. Seng, L.M. Ang, “Radial basis function neural network with incremental learning for face recognition”, IEEE Trans. on Systems, Man, and Cybernetics, vol. 41, no. 4, pp. 940-949, 2011.
[96] Y.N. Chen, C.C. Han, C.T. Wang, K.C. Fan, “Face recognition using nearest feature space embedding”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 6, pp. 1073-1086, June 2011.
[97] Y. Yan, Y.J. Zhang, “A novel class-dependence feature analysis method for face recognition”, Pattern Recognition Letters, vol. 29, no. 14, pp. 1907-1914, Oct. 2008.
[98] C. Tenllado, J.I. Gomez, J. Setoain, D. Mora, M. Prieto, “Improving face recognition by combination of natural and Gabor face”, Pattern Recognition Letters, vol. 31, no. 11, pp. 1453-1460, Aug. 2010.
[99] B.L. Zhang, H. Zhang, S.S. Ge, “Face recognition by applying wavelet subband representation and kernel associative memory”, IEEE Trans. on Neural Networks, vol. 15, no. 1, pp.166-177, Jan. 2004.
[100] A. Serrano, I.M. de Diego, C. Conde, E. Cabello, “Recent advances in face biometrics with Gabor wavelets: A review”, Pattern Recognition Letters, vol. 31, no. 5, pp. 372-381, Apr. 2010.