Author: Lin, Suhua (林淑華)
Title: Face Recognition Using a Single Image (運用單一張影像做人臉辨識)
Advisors: Tarng, Wernhuar (唐文華); Han, Chinchuan (韓欽銓)
Degree: Master's
Institution: National Hsinchu University of Education (國立新竹教育大學)
Department: Institute of Computer Science (資訊科學研究所)
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Document type: Academic thesis
Year of publication: 2011
Graduation academic year: 99 (ROC calendar)
Language: Chinese
Pages: 80
Keywords (Chinese): face recognition; small sample size; single-image subspace; eigenface; image enhancement algorithm
Keywords (English): Face Recognition; Small Sample Size; Single Image Subspace; eigenface; Retinex
Usage statistics:
  • Cited by: 0
  • Views: 239
  • Downloads: 51
  • Bookmarked: 2
Abstract:
Face recognition is a mature and widely studied topic, but achieving high recognition rates still requires overcoming many problems. The commonly recognized challenges include illumination direction and expression variation; in addition, an insufficient number of training samples lowers the recognition rate. Addressing the practical difficulty of collecting samples, this study proposes a method that trains with a single image per person to improve the face recognition rate.
This study proposes a face recognition method that uses a single image. Image-processing operations are first applied to generate multiple virtual images, so that a suitable number of training samples is obtained, and the Retinex algorithm is used to reduce the influence of lighting. An appropriate feature dimensionality is then computed from the characteristics of each face database, and a nearest-feature-space classifier is used to preserve the local structure of the training samples, yielding a better recognition rate. Experimental results show that the proposed method achieves recognition rates above 70%.
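The virtual-image step described above (detailed in the thesis as shift, rotation, zoom, and mirror) can be sketched as follows. This is a minimal illustration using NumPy and SciPy, not the thesis's actual implementation; the function name and the parameter ranges are hypothetical.

```python
import numpy as np
from scipy.ndimage import rotate, shift, zoom

def make_virtual_images(face):
    """Expand one grayscale face image into a small training set
    via shift, rotation, zoom, and mirroring (illustrative parameters)."""
    h, w = face.shape
    virtuals = [face]
    # Shift: translate a few pixels in each direction.
    for dy, dx in [(-2, 0), (2, 0), (0, -2), (0, 2)]:
        virtuals.append(shift(face, (dy, dx), mode='nearest'))
    # Rotation: small in-plane rotations, keeping the original frame size.
    for angle in (-5, 5):
        virtuals.append(rotate(face, angle, reshape=False, mode='nearest'))
    # Zoom: scale up slightly, then crop the center back to the original size.
    scaled = zoom(face, 1.1)
    y0 = (scaled.shape[0] - h) // 2
    x0 = (scaled.shape[1] - w) // 2
    virtuals.append(scaled[y0:y0 + h, x0:x0 + w])
    # Mirror: horizontal flip.
    virtuals.append(face[:, ::-1])
    return np.stack(virtuals)

faces = make_virtual_images(np.random.rand(64, 64))
print(faces.shape)  # (9, 64, 64): one original plus eight virtual images
```

Each transformed copy stays the same size as the input, so the whole set can be stacked directly into a training matrix for the later PCA step.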

Abstract (English):
Face recognition has attracted much attention in recent years. To achieve high recognition rates, many problems such as pose, illumination, facial expression, and limited training samples must be solved.
In this thesis, a face recognition algorithm using a single training sample per person is designed. The main idea is to enlarge the training set starting from a single face sample. Multiple simulated images are first composed to generate a suitable training sample set, and a Retinex algorithm is then used to reduce the impact of lighting. Next, a PCA subspace is found to represent the faces efficiently, and discriminant projection axes are obtained with a nearest feature space (NFS) embedding method, which embeds the point-to-space distance metric into the discriminant analysis. Experiments on several benchmark face databases demonstrate the validity of the proposed method, which achieves recognition rates above 70%.
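The PCA-subspace step in the pipeline above can be sketched as follows. This is a minimal NumPy illustration of eigenface projection under assumed data shapes, not the thesis's code; the NFS embedding stage that follows it in the pipeline is omitted here.

```python
import numpy as np

def pca_subspace(train, k):
    """Compute a k-dimensional eigenface subspace.

    train: (n_samples, n_pixels) matrix of flattened face images.
    Returns the mean face and the top-k principal axes (eigenfaces)."""
    mean = train.mean(axis=0)
    centered = train - mean
    # SVD of the centered data: rows of vt are the principal axes,
    # ordered by decreasing singular value (i.e., explained variance).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(faces, mean, axes):
    """Project flattened faces onto the eigenface subspace."""
    return (faces - mean) @ axes.T

rng = np.random.default_rng(0)
train = rng.random((20, 64 * 64))     # e.g. virtual images from one subject
mean, axes = pca_subspace(train, k=10)
coords = project(train, mean, axes)
print(coords.shape)  # (20, 10)
```

Classification then happens in this low-dimensional space; the thesis refines the projection further with NFS embedding before measuring point-to-space distances.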

Table of Contents:
Chapter 1  Introduction  1
  1.1  Research Motivation and Objectives  1
  1.2  Research Methods and Procedure  2
  1.3  Research Environment and Limitations  4
  1.4  Thesis Organization  7
Chapter 2  Literature Review  8
  2.1  Overview of the Development of Face Recognition  8
  2.2  Face Recognition Algorithms  10
    2.2.1  Algorithms based on the whole face image  11
    2.2.2  Algorithms based on facial features  12
    2.2.3  Template-based algorithms  12
    2.2.4  Neural-network-based algorithms  14
  2.3  Projection-based Feature-Space Classifiers  15
    2.3.1  Principal Component Analysis  16
    2.3.2  Linear Discriminant Analysis  17
    2.3.3  Nonparametric Discriminant Analysis and Nearest Feature Point Analysis  18
    2.3.4  Locality Preserving Projections  20
    2.3.5  Nearest Feature Line and Space Embedding  20
Chapter 3  System Design  22
  3.1  Image Size  22
  3.2  Virtual Images  23
  3.3  PCA Dimensionality Reduction and Denoising  25
  3.4  Retinex Automatic White Balance  29
  3.5  NFSE Classifier  39
Chapter 4  System Implementation  44
  4.1  Implementation Environment and Procedure  44
  4.2  Face Databases  46
    4.2.1  CMU face database  47
    4.2.2  AR face database  48
    4.2.3  CDS face database  49
    4.2.4  IIS face database  50
    4.2.5  ORL face database  51
  4.3  Virtual Image Generation  53
    4.3.1  Shift  54
    4.3.2  Rotation  55
    4.3.3  Zoom  56
    4.3.4  Mirror  57
  4.4  Retinex White Balance Processing  59
Chapter 5  Experimental Results and Analysis  67
  5.1  Experimental Results  67
  5.2  Analysis  75
Chapter 6  Conclusions and Suggestions  79
  6.1  Conclusions  79
  6.2  Suggestions  80
References  81


