Graduate student: 張家豪 (Chang, Chia-Hao)
Thesis title: 基於深度學習技術之行人年齡與性別辨識
Thesis title (English): Age and Gender Recognition of Full Body Pedestrian Images Based on Deep Learning
Advisor: 王才沛 (Wang, Tsai-Pei)
Committee members: 莊政宏 (Chuang, Cheng-Hung), 彭文孝 (Peng, Wen-Hsiao), 王才沛 (Wang, Tsai-Pei)
Defense date: 2018-10-08
Degree: Master's
Institution: National Chiao Tung University (國立交通大學)
Department: Institute of Computer Science and Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2018
Academic year of graduation: 107
Language: Chinese
Number of pages: 34
Keywords (Chinese): gender recognition of full-body person images; age recognition of full-body person images
Keywords (English): Gender Recognition of Full Body; Age Recognition of Full Body
Statistics:
  • Cited by: 2
  • Views: 264
  • Downloads: 38
  • Bookmarked: 1
The goal of this thesis is to recognize the age and gender of people in images using deep learning. The main subjects of study are full-body images of pedestrians on the street: each pedestrian's full-body image is cropped out and then classified to produce age and gender predictions.
Building on deep learning, we use a convolutional neural network (CNN) as the backbone of the network to extract features, and let the age and gender classifiers share those features to improve the feature extraction process. Finally, we evaluate the differences among the various experiments and summarize them into our conclusions.
The goal of this research is to implement a deep learning model for age and gender recognition. The main objects of study are full-body images of pedestrians. After cropping the full-body image of each pedestrian in a frame, the model infers the age and gender of each cropped image.
We use a convolutional neural network as the backbone of the network to extract features, and have the age and gender classifiers share these features to optimize the feature extraction process. Finally, we evaluate the differences among all experiments in this paper and integrate them into our conclusions.
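The core idea in the abstract — one shared feature extractor feeding separate age and gender classifiers trained jointly — can be illustrated with a minimal numpy sketch. The feature dimension, number of age groups, and weight-matrix names below are illustrative assumptions, not the thesis's actual configuration, and the random features stand in for the output of a CNN backbone such as ResNet or GoogLeNet:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true class.
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

rng = np.random.default_rng(0)

# Stand-in for the shared CNN features of 4 cropped pedestrian images
# (feat_dim = 512 is an assumed backbone output size).
feat_dim = 512
features = rng.standard_normal((4, feat_dim))

# Two task-specific linear heads operate on the SAME features.
W_gender = rng.standard_normal((feat_dim, 2)) * 0.01  # 2 gender classes
W_age    = rng.standard_normal((feat_dim, 4)) * 0.01  # 4 age groups (assumed)

gender_probs = softmax(features @ W_gender)
age_probs    = softmax(features @ W_age)

# Joint training would minimize the sum of the two cross-entropy losses,
# so gradients from both tasks update the shared backbone.
gender_labels = np.array([0, 1, 0, 1])
age_labels    = np.array([0, 1, 2, 3])
loss = cross_entropy(gender_probs, gender_labels) + \
       cross_entropy(age_probs, age_labels)
```

Because the summed loss backpropagates through the shared features, each task acts as a regularizer for the other, which is the motivation for sharing features rather than training two independent networks.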
Chapter 1 Introduction 1
1.1 Research background 1
1.2 Research motivation 2
1.3 Thesis organization 5
Chapter 2 Literature review 6
2.1 Gender recognition from person images 6
2.2 Age recognition from person images 7
2.3 Deep learning models 9
Chapter 3 Methodology 11
3.1 Research workflow 11
3.2 Image dataset collection 11
3.2.1 Pedestrian image collection 11
3.2.2 Attribute and silhouette annotation 14
3.3 Image preprocessing 15
3.4 Feature extraction 16
3.4.1 ResNet 16
3.4.2 GoogLeNet 17
3.5 Gender and age recognition 18
3.5.1 Gender/age network for person images and silhouettes 18
3.5.2 Softmax 19
3.5.3 Loss function 20
3.5.4 Training procedure 20
Chapter 4 Experimental results 22
4.1 Experimental environment and workflow 22
4.2 Evaluation method 23
4.3 Image recognition examples 24
4.4 Results of pretrained gender/age models 25
4.5 Improving class balance with data augmentation 26
4.6 Effect of image cropping 27
4.7 Best model configuration 29
4.8 PETA 29
Chapter 5 Conclusions and future work 31
References 33
[1] D. T. Lawrence, B. A. Golomb, and T. J. Sejnowski, "SexNet: A neural network identifies sex from human faces", in Proc. Neural Information Processing Systems, pp. 572–577, 1991.
[2] L. Lu and P. Shi, "A novel fusion-based method for expression-invariant gender classification", in Proc. IEEE ICASSP, 2009.
[3] B. Li, X.-C. Lian, and B.-L. Lu, "Gender classification by combining clothing, hair and facial component classifiers", Neurocomputing, 76(1):18–27, 2012.
[4] L. Cao, M. Dikmen, Y. Fu, and T. Huang, "Gender recognition from body", in Proc. ACM Multimedia, 2008.
[5] M. Collins, J. Zhang, P. Miller, and H. Wang, "Full body image feature representations for gender profiling", in Proc. ICCV Workshops, pp. 1234–1242, 2009.
[6] [Online] https://vision.soe.ucsc.edu/node/178
[7] X. Liu, S. Li, M. Kan, J. Zhang, S. Wu, W. Liu, H. Han, S. Shan, and X. Chen, "AgeNet: Deeply learned regressor and classifier for robust apparent age estimation", in Proc. IEEE ICCV Workshops, 2015.
[8] Y. Ge, J. Lu, X. Feng, and D. Yang, "Body-based human age estimation at a distance", in Proc. ICME Workshops, 2013.
[9] G. Levi and T. Hassner, "Age and gender classification using convolutional neural networks", in Proc. IEEE CVPR Workshops, pp. 34–42, 2015.
[10] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, "Backpropagation applied to handwritten zip code recognition", Neural Computation, 1(4):541–551, 1989.
[11] A. Krizhevsky, I. Sutskever, and G. Hinton, "ImageNet classification with deep convolutional neural networks", in Proc. NIPS, 2012.
[12] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition", in Proc. ICLR, 2015.
[13] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, et al., "Going deeper with convolutions", in Proc. IEEE CVPR, pp. 1–9, 2015.
[14] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift", in Proc. ICML, pp. 448–456, 2015.
[15] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the Inception architecture for computer vision", arXiv preprint arXiv:1512.00567, 2015.
[16] C. Szegedy, S. Ioffe, and V. Vanhoucke, "Inception-v4, Inception-ResNet and the impact of residual connections on learning", CoRR, 2016.
[17] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition", in Proc. IEEE CVPR, pp. 770–778, 2016.
[18] J. Zhu, S. Liao, Z. Lei, and S. Z. Li, "Multi-label convolutional neural network based pedestrian attribute classification", Image and Vision Computing, 2016.
P. Felzenszwalb and D. Huttenlocher, "Efficient graph-based image segmentation", IJCV, pp. 167–181, 2004.
[19] Y. Lin, L. Zheng, Z. Zheng, Y. Wu, and Y. Yang, "Improving person re-identification by attribute and identity learning", arXiv:1703.07220, 2017.
[20] Y. Deng, P. Luo, C. C. Loy, and X. Tang, "Pedestrian attribute recognition at far distance", in Proc. ACM Multimedia, pp. 789–792, 2014.
[21] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, "SLIC superpixels", Technical report, 2010.