Author: Zhi-hao Wang (王志豪)
Title: The design of moving object tracking algorithm (移動物體追蹤演算法設計)
Advisor: Yu-hong Xiao (蕭宇宏)
Degree: Master's
Institution: National Yunlin University of Science and Technology
Department: Master's Program, Department of Electrical Engineering
Discipline: Engineering
Field: Electrical and Information Engineering
Document type: Academic thesis
Year of publication: 2013
Graduating academic year: 101 (2012–2013)
Language: Chinese
Number of pages: 56
Keywords (Chinese): 動態偵測 (motion detection); 物體追蹤 (object tracking)
Keywords (English): moving object tracking; moving object detecting
Abstract (Chinese): Automated visual surveillance has long been a highly active topic in computer vision. To track targets precisely, past surveillance research employed computationally intensive, highly complex algorithms; applied to typical embedded systems, such methods achieve accurate tracking and recognition but add considerable cost and may not run in real time. This thesis develops a low-complexity system for tracking moving objects with a moving camera, divided into two parts: moving object detection and moving object tracking. Detection uses motion estimation to find the motion vectors of the image background, cancels the background motion by motion compensation, applies consecutive-frame differencing to detect the moving object, and then locates the object with a projection method. Tracking continually matches the object's color features against a color histogram. In addition, to achieve real-time operation, the motion estimation step is designed as a hardware circuit, implemented in the Verilog hardware description language and synthesized with Synopsys Design Vision in a TSMC 0.13 μm process; with a 20 ns clock period the circuit reaches a 50 MHz operating frequency and contains 46K logic gates.
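The detection stage described in the abstract (background motion compensation, consecutive-frame differencing, then projection) can be sketched in a few lines. The following is a minimal NumPy illustration; the function name, the difference threshold, and the border handling via `np.roll` are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def detect_moving_object(prev_gray, curr_gray, bg_motion, diff_thresh=25):
    """Motion-compensated temporal differencing followed by projection.
    All names and parameter values here are illustrative."""
    dy, dx = bg_motion
    # Compensate the global (background) motion by shifting the previous frame.
    compensated = np.roll(np.roll(prev_gray, dy, axis=0), dx, axis=1)
    # Temporal differencing: pixels that still differ after compensation
    # are assumed to belong to the moving object.
    diff = np.abs(curr_gray.astype(np.int16) - compensated.astype(np.int16))
    mask = diff > diff_thresh
    # Vertical (per-column) and horizontal (per-row) projections
    # bound the moving region.
    cols = np.flatnonzero(mask.sum(axis=0))
    rows = np.flatnonzero(mask.sum(axis=1))
    if cols.size == 0 or rows.size == 0:
        return None  # no motion detected
    # Geometric center of the projected extents.
    return (rows[0] + rows[-1]) // 2, (cols[0] + cols[-1]) // 2
```

A synthetic pair of frames with a single bright square yields the square's center; a frame compared against itself yields no detection.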
Abstract (English): Automatic visual surveillance systems have long been a popular subject in computer vision. To build a comprehensive surveillance system, much past research aimed at high-precision object tracking and high-accuracy object recognition. These techniques involve a heavy computational load and often highly complex algorithms. When implemented in embedded systems they can achieve excellent results, but they increase product cost and may not meet real-time requirements. In this thesis, we develop a low-complexity moving object tracking system for mobile cameras. The system has two main parts: moving object detection and moving object tracking. Detection uses motion estimation to find the motion vector of the image background; the moving background is then compensated by this vector. Once the backgrounds of two consecutive frames have been aligned, the moving object is extracted with temporal differencing, and its position is obtained with a projection method. Tracking applies color histogram comparison to follow the moving object. In addition, to meet real-time requirements, the motion estimation stage is implemented as a hardware core, designed in the Verilog hardware description language and synthesized with Synopsys Design Vision using the TSMC 0.13 μm cell library. The core operates with a 20 ns clock period (a 50 MHz clock frequency) and requires 46K logic gates.
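The background motion vectors come from block matching; Section 3.2.7 and the hardware core adopt the diamond search of Zhu and Ma [55] with a sum-of-absolute-differences (SAD) cost. Below is a rough software sketch of diamond search; the 16×16 block size and ±7 search range are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

def sad(block_a, block_b):
    # Sum of absolute differences between two equally sized blocks.
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def diamond_search(ref, cur, top, left, block=16, max_disp=7):
    """Find the displacement (dy, dx) into the reference (previous) frame
    that best matches the block of `cur` at (top, left), by diamond search."""
    h, w = ref.shape
    target = cur[top:top + block, left:left + block]
    # Large diamond search pattern (9 points) and small one (5 points).
    LDSP = [(0, 0), (-2, 0), (2, 0), (0, -2), (0, 2),
            (-1, -1), (-1, 1), (1, -1), (1, 1)]
    SDSP = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

    def cost(dy, dx):
        y, x = top + dy, left + dx
        if (y < 0 or x < 0 or y + block > h or x + block > w
                or abs(dy) > max_disp or abs(dx) > max_disp):
            return float('inf')  # outside frame or search window
        return sad(ref[y:y + block, x:x + block], target)

    # Repeat the large diamond until the best point stays at the center.
    # Ties favor (0, 0) because it is listed first, so the cost strictly
    # decreases on every move and the loop terminates.
    cy = cx = 0
    while True:
        best = min(LDSP, key=lambda p: cost(cy + p[0], cx + p[1]))
        if best == (0, 0):
            break
        cy, cx = cy + best[0], cx + best[1]
    # One final small-diamond refinement step.
    best = min(SDSP, key=lambda p: cost(cy + p[0], cx + p[1]))
    return cy + best[0], cx + best[1]
```

The two-stage structure (repeated large diamond, then one small diamond) mirrors the original algorithm; per-block SAD evaluation is also what the thesis's SAD hardware module (Section 4.3) computes.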
Chinese Abstract i
ABSTRACT ii
Acknowledgments iii
Table of Contents iv
List of Figures vi
Chapter 1 Introduction 1
1.1 Research Background and Motivation 1
1.2 Research Direction 1
Chapter 2 Related Research on Moving Object Tracking Algorithms 2
2.1 Motion Detection 2
2.1.1 Environment Modeling 2
2.1.2 Motion Segmentation 3
2.1.3 Object Classification 4
2.2 Object Tracking 6
2.2.1 Region-Based Tracking 6
2.2.2 Active Contour-Based Tracking 7
2.2.3 Feature-Based Tracking 8
2.2.4 Model-Based Tracking 9
Chapter 3 Moving Object Tracking System for Indoor Dynamic Backgrounds 12
3.1 Image Preprocessing 13
3.1.1 Color-to-Grayscale Conversion 13
3.1.2 Global Histogram Equalization 14
3.1.3 Smoothing Filter 15
3.2 Motion Vector Estimation 16
3.2.1 Full Search 17
3.2.2 Two-Dimensional Logarithmic Search 17
3.2.3 Three-Step Search 18
3.2.4 Four-Step Search 19
3.2.5 Block-Based Gradient Descent Search 20
3.2.6 Grey Prediction Search 21
3.2.7 Diamond Search 23
3.3 Previous-Frame Compensated Differencing 24
3.4 Block Averaging 26
3.5 Moving Object Center Localization 26
3.5.1 Vertical Projection 26
3.5.2 Geometric Center Determination 27
3.6 Color Histogram Matching 27
Chapter 4 Hardware Design for Motion Vector Detection 29
4.1 Previous-Frame Block Module 30
4.1.1 Previous-Frame Block Memory 30
4.1.2 Previous-Frame Control Module 30
4.2 Current-Frame Block Module 32
4.2.1 Current-Frame Block Memory 32
4.2.2 Current-Frame Block Control 32
4.3 Sum of Absolute Differences Module 34
4.4 Diamond Search Coordinate Control Module 34
4.5 Motion Vector Detection Block Coordinate Control Module 36
4.6 Motion Vector Statistics 37
Chapter 5 Experimental Results 38
5.1 Indoor Static-Background Experiments 38
5.2 Indoor Moving-Background Experiments 38
5.3 Corridor Moving-Background Experiments 38
Chapter 6 Conclusions and Future Work 43
References 44
[1]R. T. Collins, A. J. Lipton, T. Kanade, H. Fujiyoshi, D. Duggins, Y. Tsin, D. Tolliver, N. Enomoto, O. Hasegawa, P. Burt, and L. Wixson, “A system for video surveillance and monitoring,” Carnegie Mellon Univ., Pittsburgh, PA, Tech. Rep. CMU-RI-TR-00-12, 2000.
[2]I. Haritaoglu, D. Harwood, and L. S. Davis, “W4: Real-time surveillance of people and their activities,” IEEE Trans. Pattern Anal. Machine Intell., vol. 22, pp. 809–830, Aug. 2000.
[3]A. Baumberg and D. C. Hogg, “Learning deformable models for tracking the human body,” in Motion-Based Recognition, M. Shah and R. Jain, Eds. Norwell, MA: Kluwer, 1996, pp. 39–60.
[4]C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, “Pfinder: real-time tracking of the human body,” IEEE Trans. Pattern Anal. Machine Intell., vol. 19, pp. 780–785, July 1997.
[5]A. J. Lipton, H. Fujiyoshi, and R. S. Patil, “Moving target classification and tracking from real-time video,” in Proc. IEEE Workshop Applications of Computer Vision, 1998, pp. 8–14.
[6]S. McKenna, S. Jabri, Z. Duric, A. Rosenfeld, and H. Wechsler, “Tracking groups of people,” Comput. Vis. Image Understanding, vol. 80, no. 1, pp. 42–56, 2000.
[7]C. Stauffer and W. Grimson, “Adaptive background mixture models for real-time tracking,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, 1999, pp. 246–252.
[8]D. Meyer, J. Denzler, and H. Niemann, “Model based extraction of articulated objects in image sequences for gait analysis,” in Proc. IEEE Int. Conf. Image Processing, 1998, pp. 78–81.
[9]N. Friedman and S. Russell, “Image segmentation in video sequences: a probabilistic approach,” in Proc. 13th Conf. Uncertainty in Artificial Intelligence, 1997, pp. 1–3.
[10]E. Stringa, “Morphological change detection algorithms for surveillance applications,” in Proc. British Machine Vision Conf., 2000, pp. 402–412.
[11]A. J. Lipton, “Local application of optic flow to analyze rigid versus nonrigid motion,” in Proc. Int. Conf. Computer Vision Workshop Frame-Rate Vision, Corfu, Greece, 1999.
[12]C. Stauffer, “Automatic hierarchical classification using time-base co-occurrences,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, 1999, pp. 335–339.
[13]D. Meyer, J. Psl, and H. Niemann, “Gait classification with HMM’s for trajectories of body parts extracted by mixture densities,” in Proc. British Machine Vision Conf., 1998, pp. 459–468.
[14]J. K. Aggarwal, Q. Cai, W. Liao, and B. Sabata, “Non-rigid motion analysis: articulated & elastic motion,” Comput. Vis. Image Understanding, vol. 70, no. 2, pp. 142–156, 1998.
[15]I. A. Karaulova, P. M. Hall, and A. D. Marshall, “A hierarchical model of dynamics for tracking people with a single video camera,” in Proc. British Machine Vision Conf., 2000, pp. 262–352.
[16]S. Ju, M. Black, and Y. Yaccob, “Cardboard people: a parameterized model of articulated image motion,” in Proc. IEEE Int. Conf. Automatic Face and Gesture Recognition, 1996, pp. 38–44.
[17]K. Rohr, “Toward model-based recognition of human movements in image sequences,” CVGIP: Image Understanding, vol. 59, no. 1, pp. 94–115, 1994.
[18]S. Wachter and H.-H. Nagel, “Tracking persons in monocular image sequences,” Comput. Vis. Image Understanding, vol. 74, no. 3, pp. 174–192, 1999.
[19]N. Paragios and R. Deriche, “Geodesic active contours and level sets for the detection and tracking of moving objects,” IEEE Trans. Pattern Anal. Machine Intell., vol. 22, pp. 266–280, Mar. 2000.
[20]D.-S. Jang and H.-I. Choi, “Active models for tracking moving objects,” Pattern Recognit., vol. 33, no. 7, pp. 1135–1146, 2000.
[21]M. Kohle, D. Merkl, and J. Kastner, “Clinical gait analysis by neural networks: Issues and experiences,” in Proc. IEEE Symp. Computer-Based Medical Systems, 1997, pp. 138–143.
[22]A. Mohan, C. Papageorgiou, and T. Poggio, “Example-based object detection in images by components,” IEEE Trans. Pattern Recognit. Machine Intell., vol. 23, pp. 349–361, Apr. 2001.
[23]A. Galata, N. Johnson, and D. Hogg, “Learning variable-length Markov models of behavior,” Comput. Vis. Image Understanding, vol. 81, no. 3, pp. 398–413, 2001.
[24]Y. Wu and T. S. Huang, “A co-inference approach to robust visual tracking,” in Proc. Int. Conf. Computer Vision, vol. II, 2001, pp. 26–33.
[25]H. Z. Sun, T. Feng, and T. N. Tan, “Robust extraction of moving objects from image sequences,” in Proc. Asian Conf. Computer Vision, Taiwan, R.O.C., 2000, pp. 961–964.
[26]W. E. L. Grimson, C. Stauffer, R. Romano, and L. Lee, “Using adaptive tracking to classify and monitor activities in a site,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Santa Barbara, CA, 1998, pp. 22–31.
[27]C. Ridder, O. Munkelt, and H. Kirchner, “Adaptive background estimation and foreground detection using Kalman-filtering,” in Proc. Int. Conf. Recent Advances in Mechatronics, 1995, pp. 193–199.
[28]D. Koller, J. Weber, T. Huang, J. Malik, G. Ogasawara, B. Rao, and S. Russell, “Toward robust automatic traffic scene analysis in real-time,” in Proc. Int. Conf. Pattern Recognition, Israel, 1994, pp. 126–131.
[29]K. Toyama, J. Krumm, B. Brumitt, and B. Meyers, “Wallflower: principles and practice of background maintenance,” in Proc. Int. Conf. Computer Vision, 1999, pp. 255–261.
[30]T. Tian and C. Tomasi, “Comparison of approaches to egomotion computation,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1996, pp. 315–320.
[31]Z. Y. Zhang, “Modeling geometric structure and illumination variation of a scene from real images,” in Proc. Int. Conf. Computer Vision, Bombay, India, 1998, pp. 4–7.
[32]K. Karmann and A. Brandt, “Moving object recognition using an adaptive background memory,” in Time-Varying Image Processing and Moving Object Recognition, V. Cappellini, Ed. Amsterdam, The Netherlands: Elsevier, 1990, vol. 2.
[33]M. Kilger, “A shadow handler in a video-based real-time traffic monitoring system,” in Proc. IEEE Workshop Applications of Computer Vision, Palm Springs, CA, 1992, pp. 11–18.
[34]J. Malik, S. Russell, J. Weber, T. Huang, and D. Koller, “A machine vision based surveillance system for California roads,” Univ. of California, PATH project MOU-83 Final Rep., Nov. 1994.
[35]T. J. Fan, G. Medioni, and G. Nevatia, “Recognizing 3-D objects using surface descriptions,” IEEE Trans. Pattern Recognit. Machine Intell., vol. 11, pp. 1140–1157, Nov. 1989.
[36]B. Coifman, D. Beymer, P. McLauchlan, and J. Malik, “A real-time computer vision system for vehicle tracking and traffic surveillance,” Transportation Res.: Part C, vol. 6, no. 4, pp. 271–288, 1998.
[37]J. Malik and S. Russell, “Traffic surveillance and detection technology development (new traffic sensor technology),” Univ. of California, Berkeley, 1996.
[38]C. A. Pau and A. Barber, “Traffic sensor using a color vision method,” in Proc. SPIE—Transportation Sensors and Controls: Collision Avoidance, Traffic Management, and ITS, vol. 2902, 1996, pp. 156–165.
[39]B. Schiele, “Model-free tracking of cars and people based on color regions,” in Proc. IEEE Int. Workshop Performance Evaluation of Tracking and Surveillance, Grenoble, France, 2000, pp. 61–71.
[40]Q. Delamarre and O. Faugeras, “3D articulated models and multi-view tracking with physical forces,” Comput. Vis. Image Understanding, vol. 81, no. 3, pp. 328–357, 2001.
[41]Q. Delamarre and O. Faugeras, “3D articulated models and multi-view tracking with silhouettes,” in Proc. Int. Conf. Computer Vision, Kerkyra, Greece, 1999, pp. 716–721.
[42]C. Sminchisescu and B. Triggs, “Covariance scaled sampling for monocular 3D body tracking,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Kauai, HI, 2001, pp. I:447–I:454.
[43]R. Plankers and P. Fua, “Articulated soft objects for video-based body modeling,” in Proc. Int. Conf. Computer Vision, Vancouver, BC, Canada, 2001, pp. 394–401.
[44]T. Zhao, T. S. Wang, and H. Y. Shum, “Learning a highly structured motion model for 3D human tracking,” in Proc. Asian Conf. Computer Vision, Melbourne, Australia, 2002, pp. 144–149.
[45]J. C. Cheng and J. M. F. Moura, “Capture and representation of human walking in live video sequence,” IEEE Trans. Multimedia, vol. 1, pp. 144–156, June 1999.
[46]C. Bregler, “Learning and recognizing human dynamics in video sequences,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 1997, pp. 568–574.
[47]H. Sidenbladh and M. Black, “Stochastic tracking of 3D human figures using 2D image motion,” in Proc. European Conf. Computer Vision, Dublin, Ireland, 2000, pp. 702–718.
[48]E. Ong and S. Gong, “A dynamic human model using hybrid 2D-3D representation in hierarchical PCA space,” in Proc. British Machine Vision Conf., U.K., 1999, pp. 33–42.
[49]D. G. Lowe, “Fitting parameterized 3-D models to images,” IEEE Trans. Pattern Anal. Machine Intell., vol. 13, pp. 441–450, May 1991.
[50]J. Hoshino, H. Saito, and M. Yamamoto, “A match moving technique for merging CG cloth and human video,” J. Visualiz. Comput. Animation, vol. 12, no. 1, pp. 23–29, 2001.
[51]J. E. Bennett, A. Racine-Poon, and J. C. Wakefield, “MCMC for nonlinear hierarchical models,” in Markov Chain Monte Carlo in Practice, W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, Eds. London, U.K.: Chapman and Hall, 1996, pp. 339–357.
[52]O. Javed and M. Shah, “Tracking and object classification for automated surveillance,” in Proc. European Conf. Computer Vision, vol. 4, 2002, pp. 343–357.
[53]Y. Kuno, T. Watanabe, Y. Shimosakoda, and S. Nakagawa, “Automated detection of human for visual surveillance system,” in Proc. Int. Conf. Pattern Recognition, 1996, pp. 865–869.
[54]R. Polana and R. Nelson, “Low level recognition of human motion,” in Proc. IEEE Workshop Motion of Non-Rigid and Articulated Objects, Austin, TX, 1994, pp. 77–82.
[55]S. Zhu and K.-K. Ma, “A new diamond search algorithm for fast block-matching motion estimation,” IEEE Trans. Image Processing, vol. 9, no. 2, pp. 287–290, Feb. 2000.
[56]J. Barron, D. Fleet, and S. Beauchemin, “Performance of optical flow techniques,” Int. J. Comput. Vis., vol. 12, no. 1, pp. 42–77, 1994.
[57]R. Cutler and L. S. Davis, “Robust real-time periodic motion detection, analysis, and applications,” IEEE Trans. Pattern Anal. Machine Intell., vol. 22, pp. 781–796, Aug. 2000.
[58]JPL, “Traffic surveillance and detection technology development,” Sensor Development Final Rep., Jet Propulsion Laboratory Publication no. 97-10, 1997.
[59]N. Peterfreund, “Robust tracking of position and velocity with Kalman snakes,” IEEE Trans. Pattern Anal. Machine Intell., vol. 22, pp. 564–569, June 2000.
[60]M. Isard and A. Blake, “Contour tracking by stochastic propagation of conditional density,” in Proc. European Conf. Computer Vision, 1996, pp. 343–356.
[61]J. Malik and S. Russell, “Traffic Surveillance and Detection Technology Development: New Traffic Sensor Technology,” Univ. of California, Berkeley, California PATH Research Final Rep., UCB-ITS-PRR-97-6, 1997.
[62]S. A. Niyogi and E. H. Adelson, “Analyzing and recognizing walking figures in XYT,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1994, pp. 469–474.
[63]L. M. Po and W. C. Ma, “A novel four-step search algorithm for fast block motion estimation,” IEEE Trans. Circuits Syst. Video Technol., vol. 6, pp. 313–317, June 1996.
[64]L. K. Liu and E. Feig, “A block-based gradient descent search algorithm for block motion estimation in video coding,” IEEE Trans. Circuits Syst. Video Technol., vol. 6, no. 4, pp. 419–422, Aug. 1996.
[65]J. M. Jou, P.-Y. Chen, and J.-M. Sun, “The grey prediction search algorithm for block motion estimation,” IEEE Trans. Circuits Syst. Video Technol., vol. 9, pp. 843–848, June 1999.
[66]鐘宜岑, “Detection and real-time tracking of moving objects in dynamic backgrounds” (應用於動態背景中的移動物體影像之偵測與即時追蹤系統), Master's thesis, National Chiao Tung University, 2007.
[67]蕭宇宏, “Moving object tracking IP module design for entertainment robots” (適用於娛樂機器人之移動物體追蹤IP模組設計), National Science Council year-101 project proposal, 2012.