
臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 林思妤
Author (English): Szu-Yu Lin
Title (Chinese): 基於前景與背景關係之超像素追蹤演算法
Title (English): Foreground and Background Correlation based Superpixel Tracking
Advisor: 黃春融
Committee Members: 林彥宇, 李建誠
Oral Defense Date: 2016-06-03
Degree: Master's
Institution: 國立中興大學 (National Chung Hsing University)
Department: 資訊科學與工程學系 (Department of Computer Science and Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2016
Graduation Academic Year: 104
Language: English
Pages: 35
Keywords (Chinese): 追蹤演算法; 超像素
Keywords (English): Object tracking; Superpixel
Statistics:
  • Cited by: 0
  • Views: 215
  • Rating:
  • Downloads: 11
  • Bookmarked: 0
In recent years, trajectory information has been widely used in many research fields, such as abnormal event detection and crowd video analysis. However, when tracking objects, previous algorithms are easily affected by changes in object appearance during tracking, so the resulting trajectories are often inaccurate. In this thesis, we propose a superpixel tracking algorithm that considers the relationship between the foreground and the background. To separate the foreground of the object from the background, we use the color and edge information provided by superpixels and construct foreground- and background-based appearance models. Besides appearance, we also consider the structural relations among the foreground superpixels. During tracking, our method periodically updates the appearance and structure models to adapt to changes in object appearance over time. Experimental results show that our algorithm tracks moving objects more accurately than other algorithms.
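The abstract only outlines the approach, so here is a minimal illustrative sketch of the general idea in Python, assuming SLIC superpixels from scikit-image, a coarse quantised RGB histogram as the appearance model, and an axis-aligned target box (x0, y0, x1, y1). The function name build_fg_bg_models, the bin count, and the centroid-in-box labelling rule are assumptions for illustration only; the thesis's actual models also use edge information and a structure model.

import numpy as np
from skimage.segmentation import slic

def build_fg_bg_models(frame, box, n_segments=300, bins=8):
    """frame: H x W x 3 uint8 RGB image; box: (x0, y0, x1, y1) target bounding box."""
    # Over-segment the frame into superpixels.
    labels = slic(frame, n_segments=n_segments, compactness=10, start_label=0)
    x0, y0, x1, y1 = box
    fg_hist = np.zeros((bins, bins, bins))
    bg_hist = np.zeros((bins, bins, bins))
    for sp in np.unique(labels):
        mask = labels == sp
        ys, xs = np.nonzero(mask)
        # Coarse RGB histogram of this superpixel's pixels.
        q = (frame[mask] // (256 // bins)).astype(int)
        hist = np.zeros((bins, bins, bins))
        np.add.at(hist, (q[:, 0], q[:, 1], q[:, 2]), 1)
        # Superpixels whose centroid falls inside the target box feed the
        # foreground model; all others feed the background model.
        if x0 <= xs.mean() <= x1 and y0 <= ys.mean() <= y1:
            fg_hist += hist
        else:
            bg_hist += hist
    # Normalise both histograms into probability distributions.
    fg_hist /= fg_hist.sum() + 1e-9
    bg_hist /= bg_hist.sum() + 1e-9
    return labels, fg_hist, bg_hist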

Recently, many object tracking methods have been proposed to obtain trajectories of foreground objects for a variety of applications. However, the tracking results are easily affected by appearance variations. Thus, we propose a superpixel-based tracking algorithm that considers both foreground and background correlations. We also use structure and appearance information to construct the foreground and background models. To handle appearance variations, we present a mechanism that detects occlusions of objects and periodically updates the models. The experimental results show that our method outperforms state-of-the-art tracking algorithms.
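The abstract also mentions a confidence map (Section 3.5 in the contents below) and periodic model updates. The sketch below continues from the histograms built in the previous snippet: per-superpixel confidence is taken as a normalised foreground-versus-background likelihood, and update_models is a simple linear blend with an illustrative rate; neither is claimed to reproduce the thesis's actual occlusion detection or update scheme.

import numpy as np

def confidence_map(frame, labels, fg_hist, bg_hist, bins=8):
    """Per-pixel foreground confidence in [0, 1], constant within each superpixel."""
    conf = np.zeros(frame.shape[:2], dtype=float)
    for sp in np.unique(labels):
        mask = labels == sp
        q = (frame[mask] // (256 // bins)).astype(int)
        p_fg = fg_hist[q[:, 0], q[:, 1], q[:, 2]].mean()
        p_bg = bg_hist[q[:, 0], q[:, 1], q[:, 2]].mean()
        conf[mask] = p_fg / (p_fg + p_bg + 1e-9)
    return conf

def update_models(old_fg, old_bg, new_fg, new_bg, rate=0.1):
    """Blend old and new histograms so the models adapt gradually over time."""
    fg = (1 - rate) * old_fg + rate * new_fg
    bg = (1 - rate) * old_bg + rate * new_bg
    return fg / fg.sum(), bg / bg.sum()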

Abstract (Chinese) i
Abstract ii
Index iii
Figure and Table Index iv
1. Introduction 1
2. Related Work 4
2.1. Online Appearance Model 4
2.2. Tracking-by-Detection 5
3. Method 6
3.1. Superpixel Construction 6
3.2. Tracking Formulation 12
3.3. Appearance Models 15
3.4. Structure Model 17
3.5. Confidence Map 18
3.6. Model Update 21
4. Experimental Results 23
4.1. Datasets and Evaluation Metrics 23
4.2. Comparisons 24
5. Conclusion 29
6. References 30


