Author: 韓仁智
Author (English): Jen-Chih Han
Title: PTZ攝影機對物體動態偵測與追蹤之研究
Title (English): The Study of Dynamic Object Detection and Tracking with PTZ Camera
Advisor: 楊竹星
Advisor (English): Chu-Sing Yang
Degree: Master's
Institution: National Cheng Kung University (國立成功大學)
Department: Institute of Computer and Communication Engineering (電腦與通信工程研究所)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2012
Graduation academic year: 100 (ROC calendar)
Language: Chinese
Number of pages: 60
Keywords (Chinese): PTZ攝影機, 動態偵測, 物體追蹤, 區域交集影像群組
Keywords (English): PTZ Camera, Dynamic Detection, Object Tracking, Local Joint Image Group
Object tracking has long been one of the major research topics in computer vision. Most earlier work was built on static cameras, but a static camera's limited field of view cannot meet the needs of every scene, so PTZ (Pan-Tilt-Zoom) cameras, with their flexible field of view, have gradually become a focus of research. Because a PTZ camera can move, however, the background models developed for static cameras cannot be applied to it directly; in practice, a PTZ camera usually has to detect a moving object while stationary before dynamic tracking can begin.
To give a PTZ camera the ability to detect and autonomously track objects while in motion, this thesis proposes the Local Joint Image Group concept for building a background model suited to PTZ cameras. Image stitching, composed of the Scale Invariant Feature Transform (SIFT) and Random Sample Consensus (RANSAC), is used to model the discrete background views as a continuous space. A background built with a Gaussian Mixture Model (GMM), combined with background subtraction, then extracts the moving foreground objects, and tracking is carried out by a particle filter that integrates the tracked object's HSV (Hue-Saturation-Value) color histogram with its Local Binary Patterns (LBP) texture features.
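As a rough illustration of the stitching step just described (a minimal sketch assuming OpenCV; the function stitch_pair and its thresholds are illustrative, not taken from the thesis), two overlapping frames can be aligned with SIFT correspondences and a RANSAC-fitted homography:

# Minimal sketch (not the thesis code): stitch two overlapping frames with
# SIFT keypoints and a RANSAC-estimated homography, using OpenCV.
import cv2
import numpy as np

def stitch_pair(base, frame):
    """Warp `frame` into the coordinate system of `base` and composite them."""
    sift = cv2.SIFT_create()
    g1 = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp1, des1 = sift.detectAndCompute(g1, None)
    kp2, des2 = sift.detectAndCompute(g2, None)

    # Match descriptors and keep the unambiguous matches (Lowe's ratio test).
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects outlier correspondences while fitting the homography
    # (at least 4 good matches are required).
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = base.shape[:2]
    canvas = cv2.warpPerspective(frame, H, (w * 2, h))
    canvas[0:h, 0:w] = base          # naive overwrite; blending is omitted
    return canvas

Stitching each newly captured view against its neighbours in this way is what allows the discrete camera positions to be treated as one continuous background space.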
The thesis examines different background-construction methods to compare their effectiveness with a PTZ camera, and studies different tracking methods and strategies to compare their speed and tracking performance. The experimental results show that the proposed approach is feasible and meets the demands of real environments.
Object tracking is one of the most challenging research areas in computer vision. In the past, most approaches were based on static cameras. However, static cameras are unsuitable for some scenes because of their limited field of view (FOV). For this reason, studies of PTZ cameras have been increasing recently. The dynamic FOV, however, brings problems of its own, such as background construction. In practice, a PTZ camera usually has to detect a moving object statically before it can start the dynamic tracking procedure.
To give PTZ cameras the ability to detect and track objects dynamically, we propose the Local Joint Image Group idea to build a background model suitable for PTZ cameras. The approach uses an image-stitching technique, combining SIFT (Scale Invariant Feature Transform) and RANSAC (Random Sample Consensus), to simulate a continuous background space. In addition, we adopt a GMM (Gaussian Mixture Model) with background subtraction to extract moving objects from the images. Finally, we use a tracking strategy that integrates a particle filter with HSV (Hue-Saturation-Value) and LBP (Local Binary Patterns) features to reach our goal.
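As a rough sketch of that tracking strategy (illustrative only, not the thesis implementation: the class ParticleTracker and all parameter values are our own, and the LBP texture term is omitted for brevity), a color-based particle filter can weight each candidate window by the Bhattacharyya similarity between its HSV histogram and that of the target:

# Minimal sketch: color-based particle filter. Each particle is a candidate
# window; its weight comes from the similarity between the window's
# hue-saturation histogram and the reference histogram of the target.
import cv2
import numpy as np

def hs_histogram(bgr, box):
    x, y, w, h = box
    patch = cv2.cvtColor(bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([patch], [0, 1], None, [16, 16], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

class ParticleTracker:
    def __init__(self, frame, box, n=200, noise=8.0):
        self.box = box                       # (x, y, w, h) of the target
        self.ref = hs_histogram(frame, box)  # reference HSV histogram
        self.n, self.noise = n, noise
        x, y, w, h = box
        self.particles = np.tile([x, y], (n, 1)).astype(np.float64)

    def update(self, frame):
        w, h = self.box[2], self.box[3]
        H, W = frame.shape[:2]
        # Predict: diffuse particle positions with Gaussian noise.
        self.particles += np.random.randn(self.n, 2) * self.noise
        self.particles[:, 0] = np.clip(self.particles[:, 0], 0, W - w - 1)
        self.particles[:, 1] = np.clip(self.particles[:, 1], 0, H - h - 1)
        # Weight: histogram similarity of each candidate window.
        weights = np.empty(self.n)
        for i, (px, py) in enumerate(self.particles.astype(int)):
            hist = hs_histogram(frame, (px, py, w, h))
            d = cv2.compareHist(self.ref, hist, cv2.HISTCMP_BHATTACHARYYA)
            weights[i] = np.exp(-20.0 * d * d)
        weights /= weights.sum()
        # Estimate: weighted mean state, then resample to avoid degeneracy.
        mean = (self.particles * weights[:, None]).sum(axis=0).astype(int)
        idx = np.random.choice(self.n, self.n, p=weights)
        self.particles = self.particles[idx]
        self.box = (int(mean[0]), int(mean[1]), w, h)
        return self.box

For the GMM stage, OpenCV's cv2.createBackgroundSubtractorMOG2() provides a ready-made Gaussian-mixture background subtractor, and an LBP histogram (e.g. from skimage.feature.local_binary_pattern) could be added as a second term in the particle weights.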
In the experiments, we test different background-construction methods for PTZ cameras and compare the validity of the resulting background subtraction. We also survey different tracking methods and compare them against one another. The results show the effectiveness of the proposed method.
Abstract (Chinese) I
Abstract III
Acknowledgements V
Table of Contents VI
List of Figures VIII
List of Tables XI
Chapter 1 Introduction 1
1.1 Research Background 1
1.2 Research Objectives and Methods 2
1.3 Chapter Overview 2
Chapter 2 Related Work 3
2.1 Moving Object Detection 3
2.1.1 Consecutive Frame Differencing 3
2.1.2 Background Subtraction 4
2.1.3 Block Matching 6
2.1.4 Optical Flow 7
2.2 Image Stitching 8
2.3 Object Tracking 10
Chapter 3 Dynamic Detection and Tracking with a PTZ Camera 14
3.1 Architecture and Workflow 14
3.2 Building the Background Model 16
3.2.1 Background Image Capture 16
3.2.2 Gaussian Mixture Model 17
3.2.3 Local Joint Image Group 19
3.3 Object Tracking 32
3.3.1 HSV Color Histogram 32
3.3.2 Local Binary Pattern Histogram 34
3.3.3 Particle Filter 36
3.3.4 Similarity Function 40
Chapter 4 Experimental Results 42
4.1 Background Model Experiments 42
4.1.1 Experiment 1 42
4.1.2 Experiment 2 43
4.1.3 Experiment 3 44
4.2 Object Tracking 45
4.2.1 Experiment 1 45
4.2.2 Experiment 2 49
Chapter 5 Conclusions and Future Research Directions 55
5.1 Conclusions 55
5.2 Future Research Directions 55
References 57
[1]Avidan, S. (2007), Ensemble Tracking, IEEE Trans. on Pattern Analysis and Machine Intelligence, 29 pp.261~271.
[2]Bhat, K. S., M. Saptharishi and P. K. Khosla (2000), Motion Detection and Segmentation Using Image Mosaics, IEEE Intl. Conf. on Multimedia and Expo., pp.1577~1580.
[3]Brown, M. and D. G. Lowe (2003), Recognising Panoramas, In Proc. of the 9th International Conference on Computer Vision, 2 pp.1218~1225.
[4]Bevilacqua, A. and P. Azzari (2006), High-Quality Real-Time Motion Detection Using PTZ Cameras, IEEE Intl. Conf. on Video and Signal Based Surveillance, p.23.
[5]Brown, M. and D. G. Lowe (2007), Automatic Panoramic Image Stitching Using Invariant Features, International Journal of Computer Vision, pp.59~73.
[6]Comaniciu, D., V. Ramesh and P. Meer (2000), Real-Time Tracking of Non-Rigid Objects Using Mean Shift, in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2 pp.142~149.
[7]Dagless, E. L., A. T. Ali and J. B. Cruz (1993), Visual Road Traffic Monitoring and Data Collection, in Proc. of the IEEE Vehicle Navigation and Information Systems Conf., pp.146~149.
[8]Devroye, L., L. Gyorfi and G. Lugosi (1996), A Probabilistic Theory of Pattern Recognition, Applications of Mathematics, Springer-Verlag Inc., 31 pp. 25~32.
[9]Elgammal, A., R. Duraiswami and L. S. Davis (2003), Efficient Kernel Density Estimation Using the Fast Gauss Transform with Applications to Color Modeling and Tracking, IEEE Trans. on Pattern Analysis and Machine Intelligence, 25 pp.1499~1504.
[10]Fukunaga, K. and L. D. Hostetler (1975), The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition, IEEE Trans. on Information Theory, 21 (1) pp.32~87.
[11]Fischler, M. A. and R. C. Bolles (1981), Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography, Communications of the ACM, 24 (6) pp.381~395.
[12]Goolkasian, P. (1991), Processing Visual-Stimuli Inside and Outside the Focus Attention, Bulletin of the Psychonomic Society, 29 (6) pp.510~515.
[13]Gupte, S., O. Masoud, R. F. K. Martin and N. P. Papanikolopoulos (2002), Detection and Classification of Vehicles, IEEE Trans. on Intelligent Transportation Systems, 3 (1) pp.37~47.
[14]Heckbert, P. S. (1986), Survey of Texture Mapping, IEEE, Computer Graphics and Applications, 6 pp.56~67.
[15]Harris, C. and M. Stephens (1988), A Combined Corner and Edge Detector, Proc. Alvey Vision Conference, pp.147~151.
[16]Heckbert, P. S. (1989), Fundamentals of Texture Mapping and Image Warping, Master's Thesis, Dept. of Electrical Engineering and Computer Science, University of California, Berkeley.
[17]Hirakawa, M., K. Uchida and A. Yoshitaka (2002), Content-Based Video Retrieval Using Mosaic Images, Proc. IEEE First International Symposium on Cyber Worlds, pp.161~167.
[18]Isard, M. and A. Blake (1998), Condensation - Conditional Density Propagation for Visual Tracking, International Journal of Computer Vision, 29 (1) pp.5~28.
[19]Koenderink, J. J. (1984), The Structure of Images, Biological Cybernetics, 50 (5) pp.363~396.
[20]Kass, M., A. Witkin and D. Terzopoulos (1988), Snakes: Active Contour Models, International Journal of Computer Vision, pp.321~331.
[21]Kim, K., T. H. Chalidabhongse, D. Harwood and L. Davis (2005), Real-Time Foreground-Background Segmentation Using Codebook Model, Real-Time Imaging, 11 pp.172~185.
[22]Lindeberg, T. (1994), Scale-Space Theory: A Basic Tool for Analysing Structures at Different Scales, Journal of Applied Statistics, 21 (2) pp.224~270.
[23]Liang, Z. P., H. Pan, R. L. Magin, N. Ahuja and T. S. Huang (1997), Automated Image Registration by Maximization of a Region Similarity Metric, Intl. Conf. on Image Processing, 3 pp.272~275.
[24]Lipton, A. J., H. Fujiyoshi and R. S. Patil (1998), Moving Target Classification and Tracking from Real-Time Video, in Proc. of the IEEE Workshop on Applications of Computer Vision, pp.8~14.
[25]Lin, W., C. M. Wang, Y. J. Chang and Y. C. Chen (2002), Real-Time Object Extraction and Tracking with an Active Camera Using Image Mosaics, IEEE Workshop on Multimedia Signal Processing, pp.149~152.
[26]Lowe, D. G. (2004), Distinctive Image Features from Scale-Invariant Keypoints, International Journal of Computer Vision, 60 pp.99~110.
[27]Meyer, D., J. Denzler and H. Niemann (1998), Model Based Extraction of Articulated Objects in Image Sequences for Gait Analysis, in Proc. of the IEEE International Conference on Image Processing, pp.78~81.
[28]Ojala, T., M. Pietikainen and T. Maenpaa (2002), Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns, IEEE Trans. on Pattern Analysis and Machine Intelligence, 24 pp.971~987.
[29]Peterfreund, N. (1999), Robust Tracking of Position and Velocity with Kalman Snakes, IEEE Trans. on Pattern Analysis and Machine Intelligence, 21 (6) pp.564~569.
[30]Paragios, N. and R. Deriche (2005), Active Regions and Level Set Methods for Motion Estimation and Tracking, Computer Vision and Image Understanding, 97 (3) pp. 259~282.
[31]Stauffer, C. and W. Grimson (1999), Adaptive Background Mixture Models for Real-Time Tracking, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2 pp.246~252.
[32]Wren, C. R., A. Azarbayejani, T. Darrell and A. P. Pentland (1997), Pfinder: Real-Time Tracking of the Human Body, IEEE Trans. on Pattern Analysis and Machine Intelligence, 19 pp.780~785.