
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 洪銘鴻
Author (English): Ming-Hong Hung
Thesis Title: 基於光流分群與粒子濾波器之突然移動物影像追蹤
Thesis Title (English): Abrupt Motion Visual Tracking Based on Optical Flow Clustering and Particle Filtering
Advisors: 張文中、黃正民
Advisors (English): Wen-Chung Chang, Cheng-Ming Huang
Committee Members: 簡忠漢、練光祐
Oral Defense Date: 2012-07-27
Degree: Master's
Institution: 國立臺北科技大學
Department: 電機工程系研究所
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2012
Graduation Academic Year: 100 (ROC calendar)
Language: Chinese
Pages: 57
Keywords (Chinese): 粒子濾波器、視覺追蹤、光流、運動補償
Keywords (English): particle filter, visual tracking, optical flow, motion compensation
This thesis proposes a tracking system that combines a particle filter with optical-flow-based motion compensation, in order to overcome tracking problems in which camera motion causes large target displacement in the image, or severe camera shaking blurs the image. Through optical flow clustering and analysis, the optical flow on the target can be estimated and used to compensate the motion in the dynamic model of the particle filter, so that the system can effectively estimate the probability distribution of the target state. In addition, a tracking strategy is proposed for blurred images caused by camera shaking: during periods of image blur, Mean Shift guides where the particle filter generates its estimated particles, yielding a better estimate of the target state. For abrupt changes in target position caused by camera switching, importance sampling provides the particle filter with an additional way of generating particles, effectively placing particles near the target so that tracking can continue. Experimental results confirm that the proposed method allows the particle filter, with only a small number of particles, to overcome the tracking problems of fast target motion and camera view switching.
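The motion-compensation idea can be illustrated with a short sketch. The following Python/OpenCV fragment is a minimal illustration and not the thesis implementation: it tracks sparse features with pyramidal Lucas-Kanade optical flow, clusters the flow vectors with k-means, takes the dominant cluster's mean flow as an assumed estimate of the target motion, and uses it to shift the particles in the dynamic model. The cluster-selection and weighting rules here are simplified stand-ins for the weighted-average and weight-filtering methods described in Chapter 2.

```python
# Minimal sketch (assumed details, not the author's code) of optical-flow-clustering
# motion compensation for a particle filter's dynamic model.
import cv2
import numpy as np

def estimate_target_motion(prev_gray, cur_gray, prev_pts, n_clusters=3):
    """Track sparse features, cluster their flow vectors, and return the
    mean flow of the largest cluster as the motion-compensation term."""
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, prev_pts, None)
    good = status.ravel() == 1
    flow = (cur_pts - prev_pts).reshape(-1, 2)[good]        # per-feature flow vectors
    if len(flow) < n_clusters:
        return np.zeros(2)                                   # too little flow to compensate
    # Group the flow vectors; here the largest cluster is assumed to move with the target.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(flow.astype(np.float32), n_clusters, None,
                                    criteria, 3, cv2.KMEANS_PP_CENTERS)
    sizes = np.bincount(labels.ravel(), minlength=n_clusters)
    return centers[np.argmax(sizes)]                         # (dx, dy) compensation

def propagate_particles(particles, motion, pos_noise=5.0):
    """Dynamic model: shift every particle by the compensated motion plus Gaussian noise."""
    return particles + motion + np.random.normal(0.0, pos_noise, particles.shape)
```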

In this paper, we propose a tracking system based on optical flow clustering and particle filtering with target motion compensation, in order to overcome the large image-plane displacement caused by camera and target motion, as well as the blurred-image tracking problem that results from camera shaking. The target motion is estimated by clustering and analyzing the optical flow. The dynamic model of the particle filter predicts the target state in the image using the motion compensation obtained from the optical flow clustering, so that the particle filter estimates the probability distribution of the target state more effectively. In addition, we propose a tracking strategy for the blurred images caused by camera shaking: in blurred frames, Mean Shift is adopted as a guide and the target state in the image is corrected based on the observation. For camera switching, which causes abrupt changes of the target position in the image, the resulting large motion uncertainty is handled by an importance-sampling particle filter; the importance sampling efficiently provides the particle filter with more informative posterior samples, so that the target can still be tracked. Experimental results show that the proposed method, with a small number of particles, efficiently overcomes the abrupt-motion and camera-switching tracking problems.
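To make the importance-sampling strategy for camera switching concrete, the following Python sketch, under assumed details, draws a fraction of the particles from an importance density centred on an external cue (a hypothetical detector or template-matching result, `detect_pos`) while the rest follow the dynamic prior, and corrects the weights of the proposed particles by the ratio of a crude Gaussian approximation of the dynamic prior to the proposal density. The mixing ratio, the Gaussian densities, and the detection cue are illustrative assumptions, not the thesis's exact formulation.

```python
# Minimal sketch (assumed details) of an importance-sampling step for abrupt motion.
import numpy as np

def importance_sampling_step(particles, weights, detect_pos,
                             mix=0.5, sigma_q=10.0, sigma_dyn=20.0):
    """Propose a mixture of particles and return importance-corrected weights
    (to be multiplied by the observation likelihood afterwards)."""
    n = len(particles)
    n_imp = int(mix * n)
    # Particles proposed from the importance density q(x) = N(detect_pos, sigma_q^2 I).
    proposed = detect_pos + np.random.normal(0.0, sigma_q, (n_imp, 2))
    # Remaining particles: resample the previous posterior and diffuse (dynamic prior).
    idx = np.random.choice(n, size=n - n_imp, p=weights / weights.sum())
    diffused = particles[idx] + np.random.normal(0.0, sigma_dyn, (n - n_imp, 2))
    new_particles = np.vstack([proposed, diffused])
    # Weight correction for the proposed particles: p_dyn(x) / q(x), with the dynamic
    # prior crudely approximated as a Gaussian around the previous posterior mean;
    # particles drawn from the prior keep a correction factor of 1.
    mean_prev = np.average(particles, axis=0, weights=weights)
    p_dyn = np.exp(-np.sum((proposed - mean_prev) ** 2, axis=1) / (2 * sigma_dyn ** 2))
    q = np.exp(-np.sum((proposed - detect_pos) ** 2, axis=1) / (2 * sigma_q ** 2)) + 1e-12
    corrections = np.concatenate([p_dyn / q, np.ones(n - n_imp)])
    return new_particles, corrections
```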

Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Preface
1.2 Research Motivation
1.3 Related Literature
1.4 Research Results and Contributions
1.5 Thesis Organization
Chapter 2 Motion Compensation for the Tracking System
2.1 Tracker
2.2 Motion Compensation
2.3 Motion Compensation by Optical Flow Clustering
2.3.1 Clustering Method and Weights of Optical Flow Clusters
2.3.2 Weighted Average Method
2.3.3 Weight Filtering Method
2.4 Tracking System Architecture with Optical Flow Clustering Motion Compensation
2.4.1 Dynamic Model
2.4.2 Observation Model
2.4.3 Dynamic Weight Adjustment of the Observation Model
Chapter 3 Tracking under Blur from Camera Shaking and Camera View Switching
3.1 Strategy for Erroneous Optical Flow in Blurred Images
3.1.1 Detection Strategy for Erroneous Optical Flow
3.1.2 Tracking Strategy under Erroneous Optical Flow
3.2 Tracking Strategy for Camera Switching
3.3 Tracking with Importance Sampling
Chapter 4 Experimental Results
4.1 Tracking of Fast-Moving Targets
4.3.1 Target Tracking under Relative Motion between Target and Camera
4.3.2 Target Tracking with Blurred Images
4.3.3 Target Tracking against Backgrounds without Distinct Features
4.3.4 Target Tracking against Backgrounds with Repetitive Features
4.4 Target Tracking under Camera Switching
Chapter 5 Conclusions and Future Work
5.1 Conclusions
5.2 Future Work
References




