Author: Dung-Han Yang (楊東翰)
Title: Superpixel tracking with color and texture features (基於色彩與紋理特徵的超像素追蹤法)
Advisor: David Lin (林信鋒)
Degree: Master's
Institution: National Dong Hwa University (國立東華大學)
Department: Department of Computer Science and Information Engineering (資訊工程學系)
Discipline: Engineering
Field of study: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2016
Graduation academic year: 104 (ROC calendar)
Number of pages: 96
Keywords (Chinese): 物件追蹤, 超像素, 觀測模型, 線上學習
Keywords (English): object tracking, superpixel, appearance model, online-learning model
In recent years, computer vision research has flourished and been applied in many domains, such as traffic monitoring, intelligent surveillance systems, and human-computer interaction. Many efficient and robust object tracking algorithms have been proposed to address challenges such as large object deformation, scale variation, heavy occlusion, and tracking drift.

In this thesis, we propose a tracking algorithm based on Bayesian theory and superpixels. The method consists of three parts: (1) an appearance model, (2) a motion model, and (3) an online learning model. Building on the superpixel tracking (SPT) method, we add a texture feature to reduce interference from similarly colored regions and to improve the tracking success rate on grayscale images. The appearance model uses three confidence maps (color, texture, and their combination) to separate the foreground from the background: we compute the probability of the target at each candidate position in the three maps separately, and then sum the three results to obtain the best estimate of the target position. The motion model then weights the observations, suppressing interference and improving the reliability of the estimate. To reduce tracking drift, the online learning model classifies features with the K-means clustering algorithm and updates the training samples once every ten frames.
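
As a point of reference for the Bayesian formulation mentioned above (and developed in Section 3.1), the standard Bayesian filtering recursion used in particle-filter-style trackers can be written as follows; the symbols are illustrative rather than taken from the thesis, with x_t the target state and y_t the observation at frame t:

    % A sketch of the standard Bayesian tracking recursion (illustrative notation).
    \begin{align}
      p(x_t \mid y_{1:t}) &\propto p(y_t \mid x_t)
          \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1})\, \mathrm{d}x_{t-1}, \\
      \hat{x}_t &= \arg\max_{x_t} \; p(x_t \mid y_{1:t}).
    \end{align}

Here the likelihood p(y_t | x_t) would be supplied by the appearance model (the superpixel confidence maps) and the transition p(x_t | x_{t-1}) by the motion model.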

We selected 27 challenging test videos from the literature and obtained more reliable tracking under complex backgrounds, occlusion, grayscale imagery, and scale and pose variation. The experimental results show that our method, SPT-LBP, outperforms related work and tracks robustly on both color and grayscale sequences.

In recent years, visual tracking has seen significant growth in applications such as traffic monitoring, intelligent surveillance, and human-computer interaction. Many effective and efficient tracking algorithms have been proposed to handle challenges including large scale variation, heavy occlusion, and tracking drift.

In this thesis, we apply Bayesian theory and propose a visual tracking method based on superpixels. The tracking algorithm contains three models: (1) an appearance model, (2) a motion model, and (3) an online learning model. Our method improves on the previous superpixel tracking (SPT) method by adding a texture feature to reduce interference from neighboring regions of similar color and to increase the success rate on grayscale sequences. Three kinds of confidence maps (HSI color, LBP texture, and their combination) are generated in the appearance model to separate the target from the background. We compute the probabilities over the three confidence maps separately and then sum the results to obtain the most probable object position in the next frame. The motion model is then used to weight the appearance model. Finally, in the online learning model, we use the k-means clustering algorithm to cluster the features and update the training set every ten frames to avoid tracking drift.
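
The fusion step described above (sum the three confidence maps, then weight by the motion model) can be illustrated with a minimal sketch. Assumptions: the confidence maps have already been rendered as dense per-pixel arrays, the motion model is approximated by a Gaussian prior around the previous position, and all names (estimate_position, motion_prior, sigma) are illustrative rather than taken from the thesis code:

    # Minimal sketch, not the thesis implementation: fuse three per-pixel
    # confidence maps and weight them by a Gaussian motion prior.
    import numpy as np

    def motion_prior(shape, prev_pos, sigma=20.0):
        """Gaussian prior centred on the previous target position (row, col)."""
        ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
        d2 = (ys - prev_pos[0]) ** 2 + (xs - prev_pos[1]) ** 2
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def estimate_position(conf_color, conf_texture, conf_combined, prev_pos):
        """Sum the color, texture, and combined confidence maps, weight the
        result by the motion prior, and return the most probable (row, col)."""
        appearance = conf_color + conf_texture + conf_combined
        weighted = appearance * motion_prior(appearance.shape, prev_pos)
        return np.unravel_index(np.argmax(weighted), weighted.shape)

    if __name__ == "__main__":
        # Toy usage with random maps; the previous position is the image centre.
        rng = np.random.default_rng(0)
        maps = [rng.random((120, 160)) for _ in range(3)]
        print(estimate_position(*maps, prev_pos=(60, 80)))

In the thesis the confidence values are computed per SLIC superpixel from HSI color and LBP texture features and then mapped back to the image; the dense arrays here simply stand in for that step, and the online learning model (k-means re-clustering every ten frames) is omitted.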

We conducted experiments on 27 video sequences from the literature, and most sequences achieved better results than previous work. The system reduces tracking drift caused by background clutter, occlusion, and scale and pose variation, and improves grayscale tracking. These results demonstrate that SPT-LBP is a robust tracking method for both color and grayscale images.

1 Introduction
1.1 Motivation
1.2 Thesis organization
2 Background and Related Work
2.1 Background
2.1.1 SLIC superpixels
2.1.2 Particle filter
2.1.3 Local binary patterns
2.2 Related Work
2.2.1 Robust superpixel tracking
2.2.2 Online human tracking via superpixel-based collaborative appearance model
2.2.3 Superpixel-driven level set tracking
2.2.4 Crossroad traffic surveillance using superpixel tracking and vehicle trajectory analysis
3 The Proposed Method
3.1 The Bayesian Theory
3.2 Appearance Model and Confidence Map
3.2.1 Extraction of the surrounding region of the target
3.2.2 Performing a superpixel segmentation
3.2.3 Building a confidence map of superpixels
3.2.4 Creating an appearance model based on the confidence map
3.3 Motion Model
3.4 Online Learning Model
4 Experimental Results
4.1 Experimental Parameters
4.2 Experimental Datasets
4.3 Experimental Results
4.3.1 Grayscale tracking
4.3.2 Background clutter
4.3.3 Occlusion
4.3.4 Scale and pose variations
4.3.5 Shape deformation
5 Conclusions
[1] D. A. Ross, J. Lim, R.-S. Lin, and M.-H. Yang, “Incremental learning for robust visual
tracking,” Int. J. Comput. Vision, vol. 77, no. 1-3, pp. 125-141, May 2008.

[2] A. Adam, E. Rivlin, and I. Shimshoni, “Robust fragments-based tracking using the
integral histogram,” in Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, vol. 1, June 2006, pp. 798-805.

[3] B. Babenko, M.-H. Yang, and S. Belongie, “Visual tracking with online multiple
instance learning,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, June 2009, pp. 983-990.

[4] X. Mei and H. Ling, “Robust visual tracking using l1 minimization,” in Computer
Vision, 2009 IEEE 12th International Conference on, Sept 2009, pp. 1436-1443.

[5] J. Kwon and K. M. Lee, “Visual tracking decomposition,” in Computer Vision and
Pattern Recognition (CVPR), 2010 IEEE Conference on, June 2010, pp. 1269-1276.

[6] Z. Kalal, J. Matas, and K. Mikolajczyk, “P-n learning: Bootstrapping binary classifiers
by structural constraints,” in Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, June 2010, pp. 49-56.
[7] S. Hare, S. Golodetz, A. Saffari, V. Vineet, M.-M. Cheng, S. Hicks, and P. Torr,
“Struck: Structured output tracking with kernels,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, 2015.

[8] J. Santner, C. Leistner, A. Saffari, T. Pock, and H. Bischof, “Prost: Parallel robust
online simple tracking,” in Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, June 2010, pp. 723-730.

[9] M. Godec, P. Roth, and H. Bischof, “Hough-based tracking of non-rigid objects,” in
Computer Vision (ICCV), 2011 IEEE International Conference on, Nov 2011, pp.
81-88.

[10] R. Collins, “Mean-shift blob tracking through scale space,” in Computer Vision and
Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on, vol. 2, June 2003, pp. II-234-40 vol.2.

[11] K. Nummiaro, E. Koller-Meier, and L. V. Gool, “An adaptive color-based particle
filter,” Image and Vision Computing, vol. 21, no. 1, pp. 99 - 110, 2003.

[12] Z. Kalal, K. Mikolajczyk, and J. Matas, “Tracking-learning-detection,” Pattern
Analysis and Machine Intelligence, IEEE Transactions on, vol. 34, no. 7, pp. 1409-1422, July 2012.

[13] B. Fulkerson, A. Vedaldi, and S. Soatto, “Class segmentation and object localization
with superpixel neighborhoods,” in Computer Vision, 2009 IEEE 12th International Conference on, Sept 2009, pp. 670-677.

[14] X. Li and H. Sahbi, “Superpixel-based object class segmentation using conditional
random fields,” in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on, May 2011, pp. 1101-1104.

[15] R. Achanta, “Finding Objects of Interest in Images using Saliency and Superpixels,”
Ph.D. dissertation, IC, 2011.

[16] Z. Liu, O. Le Meur, and S. Luo, “Superpixel-based saliency detection,” in Image Analysis
for Multimedia Interactive Services (WIAMIS), 2013 14th International Workshop on, July 2013, pp. 1-4.

[17] Z. Liu, X. Zhang, S. Luo, and O. Le Meur, “Superpixel-based spatiotemporal saliency
detection,” Circuits and Systems for Video Technology, IEEE Transactions on, vol. 24, no. 9, pp. 1522-1540, Sept 2014.

[18] H.-M. Zhu and C.-M. Pun, “An adaptive superpixel based hand gesture tracking and
recognition system,” The Scientific World Journal, 2014.

[19] F. Liu, Y. Yin, G. Yang, L. Dong, and X. Xi, “Finger vein recognition with superpixel-
based features,” in Biometrics (IJCB), 2014 IEEE International Joint Conference on, Sept 2014, pp. 1-8.

[20] F. Yang, H. Lu, and M.-H. Yang, “Robust superpixel tracking,” Image Processing,
IEEE Transactions on, vol. 23, no. 4, pp. 1639-1651, April 2014.

[21] H. Zhang, J. Zhan, Z. Su, Q. Chen, and X. Luo, “Online human tracking via
superpixel-based collaborative appearance model,” in Multimedia and Expo Workshops (ICMEW), 2014 IEEE International Conference on, July 2014, pp. 1-6.

[22] D.-T. Lin and C.-H. Hsu, “Crossroad traffic surveillance using superpixel tracking and
vehicle trajectory analysis,” in Pattern Recognition (ICPR), 2014 22nd International Conference on, Aug 2014, pp. 2251-2256.

[23] X. Zhou, X. Li, T.-J. Chin, and D. Suter, “Superpixel-driven level set tracking,” in
Image Processing (ICIP), 2012 19th IEEE International Conference on, Sept 2012, pp. 409-412.

[24] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk, “SLIC superpixels
compared to state-of-the-art superpixel methods,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 11, pp. 2274-2282, Nov. 2012.

[25] M. Isard and A. Blake, “Condensation - conditional density propagation for visual
tracking,” International Journal of Computer Vision, vol. 29, pp. 5-28, 1998.

[26] M. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle
filters for online nonlinear/non-gaussian bayesian tracking,” Signal Processing, IEEE Transactions on, vol. 50, no. 2, pp. 174-188, Feb 2002.

[27] T. Ojala, M. Pietikainen, and D. Harwood, “Performance evaluation of texture
measures with classification based on kullback discrimination of distributions,” in Pattern Recognition, 1994. Vol. 1 - Conference A: Computer Vision & Image Processing, Proceedings of the 12th IAPR International Conference on, vol. 1, Oct 1994, pp. 582-585.

[28] Y. Guo, Y. Chen, F. Tang, A. Li, W. Luo, and M. Liu, “Object tracking using
learned feature manifolds,” Computer Vision and Image Understanding, vol. 118, pp. 128-139, 2014. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1077314213001835

[29] J. B. MacQueen, “Some methods for classification and analysis of multivariate
observations,” in Proc. of the fifth Berkeley Symposium on Mathematical Statistics and Probability, L. M. L. Cam and J. Neyman, Eds., vol. 1. University of California Press, 1967, pp. 281-297.

[30] C. Y. Ren and I. Reid, “gSLIC: a real-time implementation of SLIC superpixel
segmentation,” University of Oxford, Department of Engineering Science, Tech. Rep., 2011.
