National Digital Library of Theses and Dissertations in Taiwan

Author: 許永政
Author (English): Yung-Cheng Hsu
Title: 結合色彩及深度資訊之自適應性均值位移影像物件追蹤
Title (English): Adaptive Mean-shift Video Object Tracking with Color and Depth Information
Advisor: 歐陽振森
Advisor (English): Chen-Sen OuYang
Degree: Master's
Institution: I-Shou University
Department: Master's Program, Department of Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2011
Academic year of graduation: 99 (2010–2011)
Language: Chinese
Pages: 57
Keywords (Chinese): 追蹤物件、均值位移追蹤、自適應性學習
Keywords (English): Tracking Objects; Mean-shift Tracking; Adaptive Learning
Usage statistics:
  • Cited by: 0
  • Views: 710
  • Rating: (none)
  • Downloads: 0
  • Saved to bibliography lists: 1
Abstract:
Mean-shift is a well-known approach for tracking objects in video sequences. However, its discriminative ability during tracking is limited by its reliance on color features alone and by its use of a static target model throughout the tracking process. We therefore propose an adaptive mean-shift tracking approach that combines color and depth information to address these problems. A depth feature is combined with color features to model the target object and the target candidates, and adaptive learning is employed to update the target model during tracking. Experimental results show that our approach achieves better tracking performance than the traditional mean-shift method.
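The modeling and updating steps described in the abstract can be sketched in a few lines. The following Python sketch is illustrative only, not the thesis implementation: it builds a joint target model as a weighted sum of a normalized color (hue) histogram and a normalized depth histogram, with weights α1 and α2 mirroring the parameters varied in the experiments listed below; it compares models with the Bhattacharyya coefficient commonly used in mean-shift tracking; and it blends each new candidate into the target model as a simple adaptive update. All function names, the bin count, and the learning rate `lam` are assumptions made for illustration.

```python
import numpy as np

def joint_histogram(hue, depth, n_bins=16, alpha1=0.7, alpha2=0.3):
    """Weighted combination of a color (hue) and a depth histogram.

    alpha1/alpha2 weight the color and depth features, mirroring the
    alpha1/alpha2 parameters varied in the thesis experiments. Inputs
    are assumed normalized to [0, 1]; bin count is an arbitrary choice.
    """
    h_color, _ = np.histogram(hue, bins=n_bins, range=(0.0, 1.0))
    h_depth, _ = np.histogram(depth, bins=n_bins, range=(0.0, 1.0))
    # Normalize each histogram to sum to 1 (guard against empty input).
    h_color = h_color / max(h_color.sum(), 1)
    h_depth = h_depth / max(h_depth.sum(), 1)
    return alpha1 * h_color + alpha2 * h_depth

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms
    (1.0 means identical distributions)."""
    return float(np.sum(np.sqrt(p * q)))

def update_model(q, p, lam=0.1):
    """Adaptive update: blend the current candidate histogram p into
    the target model q with learning rate lam, then renormalize."""
    q_new = (1.0 - lam) * q + lam * p
    return q_new / q_new.sum()
```

A full tracker would evaluate candidate windows per frame inside the mean-shift iteration, using Bhattacharyya-derived weights to shift the window; only the feature modeling and the adaptive model update are shown here.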
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1.1 Research Background and Motivation
1.2 Research Objectives
Chapter 2: Literature Review
2.1 Digital Image Processing and Analysis
2.2 Feature Selection and Classification
2.2.1 Feature Selection
2.2.2 Feature Classification
2.3 Video Tracking
2.3.1 Region-based Tracking
2.3.2 Active Contour Tracking
2.3.3 Feature-based Tracking
2.3.4 Model-based Tracking
2.4 Mean-shift Tracking Algorithm
2.5 Introduction to the Kinect
2.6 Introduction to OpenNI
Chapter 3: Research Methods and Procedures
3.1 Incorporating Depth Information
3.2 Updating the Template Model
Chapter 4: Experimental Results and Analysis
4.1 Experimental Parameter Settings and Description
4.2 Experimental Results
Example 1
Example 2
Example 3
Example 4
Example 5
Example 6
Chapter 5: Conclusions and Future Work
References
List of Figures
Figure 1: Image processing flowchart
Figure 2: Profile of the Epanechnikov kernel function
Figure 3: Flowchart of the mean-shift algorithm
Figure 4: The Kinect device
Figure 5: Depth image captured by the Kinect
Figure 6: Color image captured by the Kinect
Figure 7: Basic architecture of OpenNI
Figure 8: Depth image captured via OpenNI
Figure 9: Color image captured via OpenNI
Figure 10: Flowchart of the proposed method
Figure 11: Experiment 1 detection results (α1 = 1.0, α2 = 0)
Figure 12: Experiment 1 (α1 = 1.0, α2 = 0), mean-shift iterations per frame
Figure 13: Experiment 1 detection results (α1 = 0.9, α2 = 0.1)
Figure 14: Experiment 1 (α1 = 0.9, α2 = 0.1), mean-shift iterations per frame
Figure 15: Experiment 1 detection results (α1 = 0.8, α2 = 0.2)
Figure 16: Experiment 1 (α1 = 0.8, α2 = 0.2), mean-shift iterations per frame
Figure 17: Experiment 1 detection results (α1 = 0.7, α2 = 0.3)
Figure 18: Experiment 1 (α1 = 0.7, α2 = 0.3), mean-shift iterations per frame
Figure 19: Experiment 1 detection results (α1 = 0.6, α2 = 0.4)
Figure 20: Experiment 1 (α1 = 0.6, α2 = 0.4), mean-shift iterations per frame
Figure 21: Experiment 1 detection results (α1 = 0.5, α2 = 0.5)
Figure 22: Experiment 1 (α1 = 0.5, α2 = 0.5), mean-shift iterations per frame
Figure 23: Experiment 1 detection results (α1 = 0.4, α2 = 0.6), tracking failure
Figure 24: Experiment 1 detection results (α1 = 0.4, α2 = 0.6)
Figure 25: Experiment 1 (α1 = 0.4, α2 = 0.6), mean-shift iterations per frame
Figure 26: Experiment 1 detection results (α1 = 0.3, α2 = 0.7)
Figure 27: Experiment 1 (α1 = 0.3, α2 = 0.7), mean-shift iterations per frame
Figure 28: Experiment 2 detection results (α1 = 1.0, α2 = 0)
Figure 29: Experiment 2 (α1 = 1.0, α2 = 0), mean-shift iterations per frame
Figure 30: Experiment 2 detection results (α1 = 0.7, α2 = 0.3)
Figure 31: Experiment 2 (α1 = 0.7, α2 = 0.3), mean-shift iterations per frame
Figure 32: Experiment 3 detection results (α1 = 1.0, α2 = 0)
Figure 33: Experiment 3 (α1 = 1.0, α2 = 0), mean-shift iterations per frame
Figure 34: Experiment 3 detection results (α1 = 0.5, α2 = 0.5)
Figure 35: Experiment 3 (α1 = 0.5, α2 = 0.5), mean-shift iterations per frame
Figure 36: Experiment 4 detection results (α1 = 1.0, α2 = 0)
Figure 37: Experiment 4 (α1 = 1.0, α2 = 0), mean-shift iterations per frame
Figure 38: Experiment 4 detection results (α1 = 0.3, α2 = 0.7)
Figure 39: Experiment 4 (α1 = 0.3, α2 = 0.7), mean-shift iterations per frame
Figure 40: Experiment 5 detection results (α1 = 1.0, α2 = 0)
Figure 41: Experiment 5 (α1 = 1.0, α2 = 0), mean-shift iterations per frame
Figure 42: Experiment 5 detection results (α1 = 0.9, α2 = 0.1)
Figure 43: Experiment 5 (α1 = 0.9, α2 = 0.1), mean-shift iterations per frame
Figure 44: Example where the Kinect cannot be used correctly
Figure 45: Example of Kinect use against a reflective surface
List of Tables
Table 1: Kinect specifications
Table 2: Parameter settings, Experiment 1 (α1 = 1.0, α2 = 0)
Table 3: Parameter settings, Experiment 1 (α1 = 0.9, α2 = 0.1)
Table 4: Parameter settings, Experiment 1 (α1 = 0.8, α2 = 0.2)
Table 5: Parameter settings, Experiment 1 (α1 = 0.7, α2 = 0.3)
Table 6: Parameter settings, Experiment 1 (α1 = 0.6, α2 = 0.4)
Table 7: Parameter settings, Experiment 1 (α1 = 0.5, α2 = 0.5)
Table 8: Parameter settings, Experiment 1 (α1 = 0.4, α2 = 0.6)
Table 9: Parameter settings, Experiment 1 (α1 = 0.3, α2 = 0.7)
Table 10: Parameter settings, Experiment 2 (α1 = 1.0, α2 = 0)
Table 11: Parameter settings, Experiment 2 (α1 = 0.7, α2 = 0.3)
Table 12: Parameter settings, Experiment 3 (α1 = 1.0, α2 = 0)
Table 13: Parameter settings, Experiment 3 (α1 = 0.5, α2 = 0.5)
Table 14: Parameter settings, Experiment 4 (α1 = 1.0, α2 = 0)
Table 15: Parameter settings, Experiment 4 (α1 = 0.3, α2 = 0.7)
Table 16: Parameter settings, Experiment 5 (α1 = 1.0, α2 = 0)
Table 17: Parameter settings, Experiment 5 (α1 = 0.9, α2 = 0.1)
Electronic full text: access restricted to the campus network and IP range of the author's university.