National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: Yuan, Pei-Hsuan (袁佩瑄)
Title: A Study on Monitoring of Nearby Objects around a Video Surveillance Car with a Pair of Two-camera Omni-directional Imaging Devices (Chinese: 使用一對雙環場攝影機成像系統對視訊監控車周圍的物體做監測之研究)
Advisor: Tsai, Wen-Hsiang (蔡文祥)
Degree: Master
University: National Chiao Tung University
Department: Institute of Multimedia Engineering
Discipline: Computer Science
Field: Software Development
Document Type: Academic thesis
Publication Year: 2010
Graduation Academic Year: 98 (2009-2010)
Language: English
Pages: 100
Keywords: video surveillance; omni-directional; multi-camera; 3D data; passer-by; passing-by car
Abstract (in Chinese, translated)
This thesis proposes a vision-based video surveillance method that uses a pair of two-camera omni-directional imaging devices mounted on the roof of a surveillance car. First, a space-mapping method is used to build a calibration table for each omni-directional imaging device, called a pano-mapping table. Based on these tables and corresponding points in two omni-images, a method is proposed for converting point coordinates between the image coordinate system and the world coordinate system. To observe the environment around the surveillance car, we further propose a method for constructing top-view images and a technique for merging two top-view images into a single wide-area view. In addition, a local area network is used as the communication channel between two laptop computers for transmitting control commands, so that a user can construct perspective-view images in arbitrary view directions simply by moving the mouse.
Furthermore, we propose a method for automatically detecting a suspicious passer-by and marking him/her on the top-view image, using image processing techniques such as moment-preserving thresholding and dynamic grayscale compensation. Moreover, from image pairs taken with a two-camera omni-directional imaging device, the distance and height of the passer-by in 3D space can be computed. If the user in the surveillance car wants to observe a suspicious passer-by more directly, he/she can inspect a correspondingly constructed perspective-view image without leaving the car. We also propose a method for automatically detecting a passing-by car, in which image processing techniques such as region growing, component labeling, image transformation, and template matching are integrated effectively to remove the ground region, extract the car body in the omni-image, and obtain the car's position in the omni-image. Finally, from image pairs taken with a two-camera omni-directional imaging device, the car's position in real-world coordinates can be computed.
Good experimental results show the feasibility and flexibility of the proposed video surveillance system.
Abstract (in English)
Vision-based methods for video surveillance via the use of a pair of two-camera omni-directional imaging devices affixed on the roof of a video surveillance car are proposed. First, a space mapping method is used to construct the so-called pano-mapping tables of the pair of two-camera omni-directional imaging devices. Using these tables and corresponding points in two omni-images, a method is proposed for converting point coordinates between the image coordinate system and the world coordinate system. To see the environment around the video surveillance car, techniques for constructing top-view images and merging them into wider-area integrated ones are proposed. Also proposed are a local network architecture for data communication between two laptop PCs, and a technique for constructing perspective-view images in any view direction selected by mouse clicks.
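The pano-mapping idea can be pictured as a pure table lookup: each table entry maps an (azimuth, elevation) viewing direction to a pixel of the omni-image, so unwarping needs no per-pixel trigonometry at run time. The following is a minimal sketch under an idealized assumption of a radial stretching function that is linear in elevation (the thesis estimates the real function from learned landmark points; all function names here are hypothetical):

```python
import math

def make_pano_table(omni_w, omni_h, az_steps=360, el_steps=90):
    # Toy pano-mapping table: table[el][az] = (u, v) pixel in the omni-image.
    # Assumes a linear radial stretching function r(elevation); the real
    # device's function is estimated from landmark points (Section 3.2).
    cx, cy = omni_w / 2.0, omni_h / 2.0
    r_max = min(cx, cy) - 1.0
    table = []
    for ei in range(el_steps):
        r = r_max * (1.0 - ei / float(el_steps))  # low elevation -> image rim
        row = []
        for ai in range(az_steps):
            theta = 2.0 * math.pi * ai / az_steps
            row.append((int(cx + r * math.cos(theta)),
                        int(cy + r * math.sin(theta))))
        table.append(row)
    return table

def unwarp_panorama(omni, table):
    # Unwarp an omni-image (2-D list indexed omni[v][u]) into a panoramic
    # image by direct table lookup.
    return [[omni[v][u] for (u, v) in row] for row in table]
```

Because the table is filled once per calibrated device, it can then be reused for both panoramic and perspective-view generation.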
Furthermore, a method for detecting a suspicious passer-by automatically and marking his/her position on a top-view image is proposed, which is based on the image processing schemes of moment-preserving thresholding and dynamic grayscale offsetting. Moreover, the distance and height of a passer-by in 3D space are computed from image pairs taken with a two-camera omni-directional imaging device. If a user in the surveillance car wants to see a detected suspicious passer-by directly, he/she may use the system to generate a corresponding perspective-view image to inspect the passer-by without leaving the car. Additionally, a method for detecting a passing-by car automatically is proposed. To eliminate the ground region and capture the passing-by car shape in the omni-image, image processing techniques such as region growing, component labeling, image transformation, and template matching are integrated effectively to obtain an accurate position of the passing-by car in an omni-image. Finally, the position of the passing-by car in the real world is estimated as well, using image pairs taken with a two-camera omni-directional imaging device.
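Of the image-processing steps named above, moment-preserving thresholding [17] is fully specified by its moments: pick the threshold so that a two-level image preserves the first three gray-level moments of the input. A self-contained sketch, illustrative rather than the thesis's exact implementation:

```python
import math

def moment_preserving_threshold(pixels):
    # Tsai's moment-preserving thresholding: find two representative gray
    # levels z0 < z1 and a fraction p0 such that a binary image with p0 of
    # the pixels at z0 preserves the moments m1, m2, m3 of the input;
    # the threshold is then the p0-tile of the gray-level histogram.
    n = float(len(pixels))
    m1 = sum(pixels) / n
    m2 = sum(p * p for p in pixels) / n
    m3 = sum(p ** 3 for p in pixels) / n
    cd = m2 - m1 * m1                     # m0*m2 - m1^2, with m0 = 1
    c0 = (m1 * m3 - m2 * m2) / cd
    c1 = (m1 * m2 - m3) / cd
    disc = math.sqrt(c1 * c1 - 4.0 * c0)  # z0, z1 solve z^2 + c1 z + c0 = 0
    z0, z1 = 0.5 * (-c1 - disc), 0.5 * (-c1 + disc)
    p0 = (z1 - m1) / (z1 - z0)            # fraction of pixels mapped to z0
    # Threshold = smallest gray level whose cumulative histogram >= p0.
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    count = 0
    for g in range(256):
        count += hist[g]
        if count >= p0 * n:
            return g
    return 255
```

On a cleanly bimodal image the recovered z0 and z1 land on the two modes and the threshold separates them; the thesis pairs this with dynamic grayscale offsetting to cope with changing outdoor lighting.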
Good experimental results show the feasibility and flexibility of the proposed methods for video surveillance applications.
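As a rough illustration of how a two-camera omni-directional device yields 3D data: once each camera's pano-mapping table converts an image point into an elevation angle, two vertically aligned cameras can triangulate the point's distance and height from the angle pair. The geometry below is an idealized assumption for the sketch, not the thesis's calibrated setup:

```python
import math

def locate_point(alpha_upper, alpha_lower, cam_h_upper, cam_h_lower):
    # Two omni-cameras on a common vertical axis see the same point at
    # elevation angles alpha_upper and alpha_lower (radians, positive = up).
    # Solving tan(alpha_i) = (h - H_i) / d for both cameras yields the
    # point's horizontal distance d and height h.
    t_up, t_lo = math.tan(alpha_upper), math.tan(alpha_lower)
    d = (cam_h_upper - cam_h_lower) / (t_lo - t_up)  # baseline / angle spread
    h = cam_h_upper + d * t_up
    return d, h
```

A quick round trip checks the algebra: a point at distance 3 m and height 1.7 m, viewed from cameras at 2.0 m and 1.5 m, is recovered exactly from its two elevation angles.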
ABSTRACT (in Chinese) i
ABSTRACT (in English) ii
ACKNOWLEDGEMENTS iv
CONTENTS v
LIST OF FIGURES viii
LIST OF TABLES xiii

Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Survey on Related Studies 3
1.3 Overview of Proposed Methods 6
1.3.1 Terminologies 6
1.3.2 Brief Descriptions of Proposed Approach 6
1.4 Contributions 8
1.5 Thesis Organization 9
Chapter 2 System Configuration, Camera Design, and Idea of Proposed Method 10
2.1 Idea of Proposed Monitoring of Nearby Objects around a Mobile Surveillance Car 10
2.2 System Configuration 13
2.2.1 Hardware configuration 13
2.2.2 Software configuration 15
2.2.3 Network configuration 16
2.3 Design of a Pair of Two-camera Omni-directional Imaging devices 17
2.3.1 System configuration 17
2.3.2 Camera Design Principle 17
2.3.3 3D data acquisition 21
2.4 System Process 24
Chapter 3 Using Pano-mapping Tables for Unwarping Omni-images into Multi-perspective-view Images 27
3.1 Idea of Pano-mapping for Omni-image Unwarping 27
3.2 Construction of Pano-mapping Table 28
3.2.1 Landmark learning 29
3.2.2 Estimation of coefficients of radial stretching function 30
3.2.3 Filling of pano-mapping table entries 31
3.3 Image Unwarping and Generation of Perspective-view Images 34
3.3.1 Generation of a perspective view 34
3.3.2 Generation of specified perspective-view images with mouse clicks 38
Chapter 4 Automatic Detection of a Suspicious Passer-by with a Two-camera Omni-directional Imaging Device 40
4.1 Introduction 40
4.2 Review of Related Concepts in Proposed System 40
4.2.1 Moment-preserving thresholding for object segmentation 41
4.2.2 Dynamic offsetting 43
4.3 Estimation of a Passer-by’s Distance and Height Information 44
4.3.1 Detection of moving objects in an omni-image 44
4.3.2 Detection of a passer-by’s head by component labeling 46
4.3.3 Calculation of a passer-by’s distance and height in 3D space 50
Chapter 5 Integration of Two Omni-images into a Top-view Image with a Pair of Two-camera Omni-directional Imaging Devices 53
5.1 Introduction 53
5.2 Construction of a Top-view Image 53
5.2.1 Construction of a top-view image with an omni-camera 53
5.2.2 Calculation of relative position of two omni-cameras 57
5.2.3 Merging of two top-view images into a single one 58
5.3 Video Surveillance Car Shape Superimposition and Ground Texture Filling in Top-view Image 60
5.3.1 Construction of car shape 61
5.3.2 Video surveillance car shape superimposition and ground texture filling 61
Chapter 6 Automatic Detection of a Passing-by Car with a Two-camera Omni-directional Imaging Device 63
6.1 Proposed Idea of Automatic Detection of a Passing-by Car 63
6.2 Detection of Car Region in an Omni-image 64
6.2.1 Detection of non-ground region 64
6.2.2 Detection of car region by region growing and component labeling 66
6.3 Detection of Car Position in Real World 73
6.3.1 Transformation of a car model in real world into an omni-image 73
6.3.2 Detection of car position by template matching 75
6.4 Passing-by Car Shape Superimposition and Ground Texture Filling in Top-view Image 78
6.4.1 Ground Texture Filling 78
6.4.2 Passing-by car shape superimposition 80
Chapter 7 Experimental Results and Discussions 81
7.1 Experimental Results of Pano-mapping Process 82
7.2 Experimental Results of Perspective-view Image Generation 86
7.3 Experimental Results of Top-view Image Generation and Passer-by Detection 87
7.4 Experimental Results of Passing-by Car Detection 89
7.5 Experimental Results of Integrated System 91
7.6 Discussion 93
Chapter 8 Conclusions and Suggestions for Future Works 94
8.1 Conclusions 94
8.2 Suggestions for Future Works 96
References 98
[1] Y. C. Liu, K. Y. Lin, and Y. S. Chen, “Bird’s-eye View Vision System for Vehicle Surrounding Monitoring,” Proceedings of Conference on Robot Vision, pp.207-218, Berlin, Germany, Feb. 20, 2008.
[2] H. C. Chen and W. H. Tsai, "Optimal Security Patrolling by Multiple Vision-based Autonomous Vehicles with Omni-monitoring from The Ceiling," Proceedings of 2008 International Computer Symposium, vol. 2, pp.196-201, Nov. 2008.
[3] S. W. Jeng and W. H. Tsai, A Study on Camera Calibration and Image Transformation Techniques and Their Application, Ph.D. Dissertation, Institute of Information Science and Engineering, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, June 2007.
[4] H. Koyasu, J. Miura and Y. Shirai, “Real-time Omnidirectional Stereo for Obstacle Detection and Tracking in Dynamic Environments,” Proceedings of 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 1, pp.31-36, Maui, Hawaii, U. S. A., Oct. 29-Nov. 03, 2001.
[5] H. Ukida, N. Yamato, Y. Tanimoto, T. Sano, and H. Yamamoto, “Omni-directional 3D Measurement by Hyperbolic Mirror Cameras and Pattern Projection,” Proceedings of 2008 IEEE Conference on Instrumentation & Measurement Technology, pp.365-370, Victoria, BC, Canada, May 12-15, 2008.
[6] J. Gluckman, S. K. Nayar, and K. J. Thoresz, “Real-time Omnidirectional and Panoramic Stereo,” Proceedings of DARPA98, pp.299–303, 1998.
[7] J. Salvi, X. Armangué, and J. Batle, “A Comparative Review of Camera Calibrating Methods with Accuracy Evaluation,” Pattern Recognition, Vol. 35, No. 7, pp.1617-1635, July 2002.
[8] S. W. Jeng and W. H. Tsai, “Using Pano-mapping Tables for Unwarping of Omni-images into Panoramic and Perspective-view Images,” Journal of IET Image Processing, Vol. 1, No. 2, pp.149-155, June 2007.
[9] C. J. Wu and W. H. Tsai, “Unwarping of Images Taken by Misaligned Omni-cameras without Camera Calibration by Curved Quadrilateral Morphing Using Quadratic Pattern Classifiers,” Optical Engineering, Vol. 48, No. 8, Aug. 2009.
[10] T. Mashita, Y. Iwai, and M. Yachida, “Calibration Method for Misaligned Catadioptric Camera,” IEICE Transactions on Information & Systems, Vol. E89-D, No. 7, pp.1984-1993, July 2006.
[11] Y. Onoe, N. Yokoya, K. Yamazawa, and H. Takemura, “Visual Surveillance and Monitoring System Using an Omnidirectional Video Camera,” Proceedings of ICPR98, Vol. 1, pp.588-592, Sep. 1998.
[12] T. Mituyosi, Y. Yagi, and M. Yachida, “Real-time Human Feature Acquisition and Human Tracking by Omnidirectional Image Sensor,” Proceedings of IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, pp.258-263, Sep. 2003.
[13] S. Morita, K. Yamazawa, and N. Yokoya, “Networked Video Surveillance Using Multiple Omnidirectional Cameras,” Proceedings of 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation, Vol. 3, pp.1245-1250, July 16-20, 2003.
[14] L. Matuszyk and A. Zelinsky, “Stereo Panoramic Vision for Monitoring Vehicle Blind-spots,” Proceedings of 2004 IEEE Intelligent Vehicles Symposium, pp.31-36, June 14-17, 2004.
[15] J. I. Meguro, J. I. Takiguchi, Y. Amano, and T. Hashizume, “3D Reconstruction Using Multi-baseline Omni-directional Motion Stereo Based on GPS/DR Compound Navigation System,” International Journal of Robotics Research, Vol. 26, No. 6, pp.625-636, June 2007.
[16] R. L. Burden and J. D. Faires, Numerical Analysis, 7th edition, Brooks/Cole, Belmont, CA, 2000. ISBN 0534382169.
[17] W. H. Tsai, “Moment-preserving Thresholding: A New Approach,” Computer Vision, Graphics, and Image Processing, Vol. 29, No. 3, pp.377-393, 1985.
[18] J. Y. Wang, “A Study on Indoor Security Surveillance by Vision-based Autonomous Vehicles with Omni-cameras on House Ceilings,” M.S. Thesis, Institute of Multimedia Engineering, National Chiao Tung University, Hsinchu, Taiwan, June 2009.