National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author (Chinese): 趙美琁
Author (English): Mei-Hsuan Chao
Title (Chinese): 應用權重三視角運動歷史直方圖於人體動作辨識
Title (English): Human Action Recognition Using Weighted 3-Viewpoints Motion History Histogram
Advisor (Chinese): 楊竹星
Advisor (English): Chu-Sing Yang
Degree: Master's
Institution: National Cheng Kung University
Department: Institute of Computer and Communication Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2012
Graduation Academic Year: 100 (ROC era, 2011-2012)
Language: Chinese
Pages: 59
Keywords (Chinese): 深度影像, 運動歷史影像, 動作辨識
Keywords (English): Depth Image, Motion History Image, Action Recognition
Usage statistics:
  • Cited by: 0
  • Views: 331
  • Rating:
  • Downloads: 16
  • Bookmarked: 0
Abstract (Chinese, translated): This thesis proposes an action recognition system based on depth images. Because depth data is largely insensitive to environmental changes, it is used to extract the foreground human target. The depth and 2D information are then projected onto three orthogonal planes, so that, in addition to motion parallel to the image plane, motion along the depth direction (perpendicular to the camera) can also have its trajectory clearly described by the other planes. Based on changes in motion energy and in the angle between 3-dimensional motion orientations, the system automatically detects the start and end times of the multiple simple actions contained in a complex action, which resolves the trajectory self-occlusion and the varying target speeds that can arise in the conventional Motion History Image (MHI) method. Next, the 3D depth information is used to obtain three-viewpoint motion history trajectories: three MHIs from different viewpoints describe the target's action, and weights derived from the three-viewpoint motion gradients give higher importance to the viewpoint that best characterizes the motion. For feature extraction, a Multi-Resolution Motion History Histogram (MHH) is adopted, which effectively reduces computation while maintaining a good recognition rate. Experimental results show that the proposed pipeline not only resolves trajectory self-occlusion and varying motion speeds, but also effectively recognizes the target's actions in continuous video.
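To make the projection step more concrete, the following Python sketch (not taken from the thesis; the function names, the depth_bins quantization parameter, and the simple frame-difference motion test are illustrative assumptions) shows how a foreground depth frame could be projected onto three orthogonal planes and how a standard MHI update might then be applied to each view.

```python
import numpy as np

def project_three_views(depth, fg_mask, depth_bins=64):
    """Project a foreground depth frame onto three orthogonal planes.

    depth      : (H, W) array of depth values (e.g. from a Kinect-style sensor)
    fg_mask    : (H, W) boolean foreground (human silhouette) mask
    depth_bins : number of quantization levels along the depth (Z) axis

    Returns binary silhouettes for the front (X-Y), side (Y-Z) and top (X-Z) views.
    """
    h, w = depth.shape
    # Quantize depth into discrete Z slices so it can serve as an image axis.
    z = np.clip((depth / (depth.max() + 1e-6) * (depth_bins - 1)).astype(int),
                0, depth_bins - 1)

    front = fg_mask.astype(np.uint8)                  # X-Y plane (camera view)
    side = np.zeros((h, depth_bins), dtype=np.uint8)  # Y-Z plane
    top = np.zeros((depth_bins, w), dtype=np.uint8)   # X-Z plane

    ys, xs = np.nonzero(fg_mask)
    side[ys, z[ys, xs]] = 1
    top[z[ys, xs], xs] = 1
    return front, side, top

def update_mhi(mhi, silhouette, prev_silhouette, tau=30):
    """Standard MHI update: pixels that changed are set to tau, the rest decay by 1.
    mhi is expected to be a float array of the same shape as the silhouettes."""
    motion = silhouette != prev_silhouette
    return np.where(motion, float(tau), np.maximum(mhi - 1.0, 0.0))
```

Each of the three per-view MHIs maintained this way could then feed the weighting and feature-extraction step sketched after the English abstract below.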
Abstract (English): A human action recognition system based on depth images is proposed in this thesis. Extracting the foreground human object from depth data is robust to environmental influences. Besides motion whose main direction is parallel to the image plane, motion along the depth axis can also be clearly represented by projecting the depth data into a three-dimensional volume. First, the system resolves the self-occlusion and varying-speed problems of the Motion History Image (MHI) method by automatically detecting the start, duration, and end of each simple action. The three-dimensional volume is then projected onto three orthogonal planes, so that the 3D motion history trajectory is described by MHIs from different viewpoints. The three-viewpoint MHIs are weighted to emphasize the planes that capture the most motion detail. For efficiency, a Motion History Histogram (MHH) is extracted as the motion feature. With the proposed method, actions with different speeds and different main directions in 3D space can be recognized efficiently. The experimental results demonstrate the accuracy and effectiveness of the proposed weighted three-viewpoint Motion History Histogram in various situations.
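As a rough illustration of the weighting and multi-resolution feature idea, the hypothetical Python sketch below uses motion-energy-based view weights and a plain intensity histogram as stand-ins for the thesis's gradient-based weighting and the exact MHH bit-pattern definition of Meng et al. [22][23], which this record does not spell out.

```python
import numpy as np

def view_weights(mhis):
    """Weight each viewpoint by its share of total motion energy
    (a simple stand-in for the gradient-based weighting in the thesis)."""
    energy = np.array([float((m > 0).sum()) for m in mhis])
    return energy / (energy.sum() + 1e-6)

def multires_histogram(mhi, levels=3, bins=8):
    """Histogram the MHI intensities at several resolutions and concatenate them:
    a coarse approximation of a multi-resolution Motion History Histogram."""
    feats = []
    img = np.asarray(mhi, dtype=float)
    top = img.max() + 1e-6
    for _ in range(levels):
        hist, _ = np.histogram(img[img > 0], bins=bins, range=(0.0, top))
        feats.append(hist / (hist.sum() + 1e-6))
        # 2x2 average pooling to halve the resolution (crop to even size first).
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.concatenate(feats)

def weighted_three_view_feature(mhis):
    """Concatenate per-view multi-resolution histograms, scaled by the view weights."""
    w = view_weights(mhis)
    return np.concatenate([wi * multires_histogram(m) for wi, m in zip(w, mhis)])
```

The resulting fixed-length vector would typically be classified with a standard classifier such as an SVM, as in [23].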
Table of Contents:
Abstract (Chinese) I
Abstract (English) II
Table of Contents IV
List of Figures V
List of Tables VII
Chapter 1 Introduction 1
Chapter 2 Literature Review and Motivation 2
2.1 Depth Images 2
2.2 Overview of Action Recognition Methods 3
2.3 Motion History Image 4
2.4 Problem Description 6
Chapter 3 System Design and Implementation 8
3.1 System Architecture 8
3.2 Data Acquisition and Three-Viewpoint Foreground Human Silhouette Extraction 9
3.3 Temporal Segmentation Module 13
3.4 Feature Extraction and Recognition 22
Chapter 4 Experimental Results 33
4.1 Experimental Environment 33
4.2 Action Recognition Using Three-Viewpoint Motion History Images 36
4.3 MHH Recognition Results and Execution Speed 43
4.4 Action Recognition in Continuous Video Streams 46
Chapter 5 Conclusions and Future Work 54
Chapter 6 References 55

References:
[1] Ahad, M. A. R., Tan, J. K., Kim, H. S., & Ishikawa, S. (2010). "Analysis of motion self-occlusion problem due to motion overwriting for human activity recognition." Journal of Multimedia 5(1): 36-46.
[2] Ahad, M. A. R., Tan, J. K., Kim, H. S., & Ishikawa, S. (2009). "Human activity analysis: Concentrating on Motion History Image and its variants." International Joint Conference of ICCAS-SICE.
[3] Ahad, M. A. R., Ogata, T., Tan, J. K., Kim, H. S., & Ishikawa, S. (2008). "Motion recognition approach to solve overwriting in complex actions." IEEE International Conference on Automatic Face & Gesture Recognition.
[4] Bergh, M. V. D., Koller-Meier, E., & Gool, L. V. (2008). "Fast body posture estimation using volumetric features." Proceedings of the IEEE Workshop on Motion and Video Computing.
[5] Bobick, A. F., & Davis, J. W. (2001). "The recognition of human movement using temporal templates." IEEE Transactions on Pattern Analysis and Machine Intelligence 23(3): 257-267.
[6] Bradski, G., & Kaehler, A. (2008). Learning OpenCV: Computer Vision with the OpenCV Library (1st ed.). Sebastopol, CA: O'Reilly Media.
[7] Bradski, G. R., & Davis, J. W. (2000). "Motion segmentation and pose recognition with motion history gradients." IEEE Workshop on Applications of Computer Vision.
[8] Chen, Y., Wu, Q., He, X., Du, C., & Yang, J. (2008). "Extracting key postures in a human action video sequence." IEEE Workshop on Multimedia Signal Processing.
[9] Cutler, R., & Turk, M. (1998). "View-based interpretation of real-time optical flow for gesture recognition." IEEE International Conference on Automatic Face and Gesture Recognition.
[10] Cui, Y., & Lee, C. (2008). "An approach to event recognition for visual surveillance systems." International Conference on Future Generation Communication and Networking Symposia.
[11] Gonzalez, R. C., & Woods, R. E. (2007). Digital Image Processing (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
[12] Hadid, A., & Pietikainen, M. (2009). "Combining appearance and motion for face and gender recognition from videos." Pattern Recognition 42(11): 2818-2827.
[13] Holte, M. B., Moeslund, T. B., & Fihl, P. (2010). "View-invariant gesture recognition using 3D optical flow and harmonic motion context." Computer Vision and Image Understanding 114(12): 1353-1361.
[14] Isard, M., & Blake, A. (1998). "CONDENSATION – Conditional density propagation for visual tracking." International Journal of Computer Vision.
[15] Izadi, M., & Saeedi, P. (2008). "Robust region-based background subtraction and shadow removing using color and gradient information." International Conference on Pattern Recognition.
[16] Jia, K., & Yeung, D. Y. (2008). "Human action recognition using local spatio-temporal discriminant embedding." IEEE Conference on Computer Vision and Pattern Recognition.
[17] Juang, C. F., Chang, C. M., Wu, J. R., & Lee, D. (2009). "Computer vision-based human body segmentation and posture estimation." IEEE Transactions on Systems, Man, and Cybernetics 39(1): 119-133.
[18] Liao, S., Law, M. W. K., & Chung, A. C. S. (2009). "Dominant local binary patterns for texture classification." IEEE Transactions on Image Processing 18(5): 1107-1118.
[19] Liaw, Y. C., Chen, W. C., & Huang, T. J. (2010). "Video objects behavior recognition using fast MHI approach." International Conference on Computer Graphics, Imaging and Visualization.
[20] Marr, D., & Vaina, L. (1982). "Representation and recognition of the movements of shapes." Proceedings of the Royal Society of London.
[21] Martinez-Contreras, F., Orrite-Urunuela, C., Herrero-Jaraba, E., Ragheb, H., & Velastin, S. A. (2009). "Recognizing human actions using silhouette-based HMM." IEEE International Conference on Advanced Video and Signal Based Surveillance.
[22] Meng, H., Pears, N., & Bailey, C. (2007). "A human action recognition system for embedded computer vision application." IEEE Conference on Computer Vision and Pattern Recognition.
[23] Meng, H., Pears, N., & Bailey, C. (2006). "Recognizing human actions based on motion information and SVM." IET International Conference on Intelligent Environments.
[24] Muñoz-Salinas, R., Medina-Carnicer, R., Madrid-Cuevas, F. J., & Carmona-Poyato, A. (2008). "Depth silhouettes for gesture recognition." Pattern Recognition Letters 29(3): 319-329.
[25] Roh, M. C., Shin, H. K., & Lee, S. W. (2010). "View-independent human action recognition with Volume Motion Template on single stereo camera." Pattern Recognition Letters 31(7): 639-647.
[26] Shao, L., & Ji, L. (2010). "A descriptor combining MHI and PCOG for human motion classification." Proceedings of the ACM International Conference on Image and Video Retrieval.
[27] Shimada, N., Shirai, Y., & Kuno, Y. (2007). "Hand gesture recognition using computer vision based on model-matching method." International Conference on Human-Computer Interaction.
[28] Song, B. C., Kim, M. J., & Ra, J. B. (2001). "A fast multiresolution feature matching algorithm for exhaustive search in large image database." IEEE Transactions on Circuits and Systems for Video Technology 11(5): 673-678.
[29] Shen, Y., & Foroosh, H. (2009). "View-invariant action recognition from point triplets." IEEE Transactions on Pattern Analysis and Machine Intelligence 31(10): 1898-1905.
[30] Wang, L., & Suter, D. (2007). "Learning and matching of dynamic shape manifolds for human action recognition." IEEE Transactions on Image Processing 16(6): 1646-1661.
[31] Weinland, D., Ronfard, R., & Boyer, E. (2006). "Automatic discovery of action taxonomies from multiple views." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
[32] Weinland, D., Ronfard, R., & Boyer, E. (2006). "Free viewpoint action recognition using motion history volumes." Computer Vision and Image Understanding 104: 249-257.
[33] Yu, C. C., Cheng, H. Y., Cheng, C. H., & Fan, K. C. (2010). "Efficient human action and gait analysis using multiresolution motion energy histogram." EURASIP Journal on Advances in Signal Processing, Special Issue on Video Analysis for Human Behavior Understanding.
[34] Yuan, X., & Yang, X. (2009). "A robust human action recognition system using single camera." International Conference on Computational Intelligence and Software Engineering.
[35] Zhang, L., & Liang, Y. (2010). "Motion human detection based on background subtraction." International Workshop on Education Technology and Computer Science.
[36] Zhao, H., & Liu, Z. (2009). "Shape-based human activity recognition using edit distance." International Congress on Image and Signal Processing.
