
臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 石神恩
Author (English): Shih, Shen-En
Title: 精準且有自適能力之環場視覺技術及應用之研究
Title (English): A study on accurate and adaptive omni-vision techniques and applications
Advisor: 蔡文祥
Advisor (English): Tsai, Wen-Hsiang
Degree: Doctoral
Institution: National Chiao Tung University
Department: Institute of Computer Science and Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Publication year: 2013
Graduation academic year: 102 (ROC academic year)
Language: English
Pages: 123
Keywords (Chinese): 環場視覺、最佳化系統組態、自動調適
Keywords (English): Omni-vision; Optimal system configuration; Automatic adaptation
Statistics:
  • Cited by: 0
  • Views: 252
  • Downloads: 68
  • Bookmarked: 1
Abstract (Chinese, translated):
Omni-vision is a highly effective and important technique for enabling computers to interact with their surrounding environment. Compared with traditional computer vision techniques, omni-vision emphasizes the ability to capture a wide area of the environment at a single moment, without mounting the camera on a motorized platform to rotate it periodically, and without using multiple cameras to cover the environment. Consequently, omni-vision techniques avoid complicated problems such as image stitching, camera hand-off, and continuous feature tracking across multiple cameras. To achieve wide-area imaging, two specially designed types of omni-cameras are commonly used: catadioptric omni-cameras and fisheye cameras. The former places a specially shaped reflective mirror in front of a conventional camera to extend its field of view; the latter extends the field of view with a special fisheye lens. However, because information from a very wide area is compressed into an image of conventional size, the images captured by omni-cameras are inevitably severely distorted, which makes subsequent image analysis far more difficult and complicated. Although unwarping the distortion is one simple solution, the distortion makes the image resolution uneven, so the unwarped image is very blurred in certain regions and yields unstable results in later image analysis. Moreover, the unwarping process itself requires some computing power, making it less suitable for real-time applications and embedded systems.
To overcome the severe distortion in images captured by omni-cameras, we propose a method that detects space lines accurately and robustly directly on the distorted image. In addition, for the various applications that detect space lines with omni-cameras, we propose an improved camera model together with a convenient procedure for calibrating an omni-camera. This procedure requires only a single straight-line feature in space, with no need to measure its position or direction, which makes the whole calibration very simple and lets ordinary users carry it out easily, moving omni-vision technology a step closer to consumer electronics.
Furthermore, from a consumer's point of view, an ordinary user should also be able to set up an omni-vision system conveniently. To this end, we propose a new binocular vision system that allows the user to place two omni-cameras arbitrarily. Once they are placed, the system automatically uses straight-line features in the environment to infer the cameras' positions and orientations, and can then compute 3D information correctly for various human-machine interaction applications. For applications that require highly precise 3D information, we also propose an optimization framework and three optimization algorithms that tell the user where, and at what orientations, to place the two omni-cameras so as to obtain the best 3D information. With these optimization algorithms, the user can construct a binocular omni-vision system capable of precise 3D measurement.
Finally, we extend the proposed omni-vision techniques to develop an indoor parking lot management system. Using omni-cameras mounted on the ceiling, the system automatically analyzes the locations of the parking spaces in the lot and finds the vacant ones to help drivers locate parking. Compared with existing systems, the proposed system needs fewer cameras because of the cameras' wider fields of view, and its installation is much more convenient because it analyzes the parking-space locations automatically.
The feasibility and efficiency of all the above methods and techniques have been analyzed both theoretically and experimentally, with good results.

Abstract (English):
Omni-vision is an important and effective technique for making computers aware of their surrounding environment. Unlike traditional computer vision techniques, omni-vision techniques emphasize capturing information about a very wide area of the environment at one time, without adding motorized control to the camera, moving the camera periodically, or using multiple cameras. Such techniques thus avoid the difficulties of image stitching, camera hand-off, feature tracking across different cameras, and so on. To capture information over a wide area, two special kinds of cameras are commonly used: catadioptric omni-directional cameras and fisheye-lens cameras. The former use a specially designed reflective mirror to extend the field of view, while the latter use a fisheye lens to achieve the same goal. However, since the environment information captured from a wide area is compressed into a relatively small image, the captured image is inevitably heavily distorted, which makes image analysis much more difficult and complicated. An easy and feasible way to deal with the heavy distortion is to unwarp the captured image into one that looks as if it were captured by a conventional perspective camera. However, because the resolution distributions of omni-directional cameras and conventional perspective cameras differ considerably, the unwarped image becomes much more blurred in some regions, making image analysis tasks unstable and unreliable. Furthermore, the unwarping process requires some computing power, making it unsuitable for real-time applications and for embedded systems with restricted computational resources.
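To make the resolution issue concrete, the following is a minimal sketch (in Python with NumPy) of the basic polar-to-panoramic unwarping step described above; the image center, annulus radii, output size, and nearest-neighbor sampling are illustrative assumptions, not values or choices from the dissertation.

```python
# A minimal sketch of panoramic unwarping for a catadioptric omni-image.
# The center (cx, cy), radii, and output size are illustrative assumptions.
import numpy as np

def unwarp_to_panorama(omni, cx, cy, r_min, r_max, out_w=720, out_h=180):
    """Map the annular region of an omni-image to a panoramic strip
    by a polar-to-Cartesian lookup with nearest-neighbor sampling."""
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.linspace(r_min, r_max, out_h)
    # Source coordinates for every (row, col) of the panorama.
    xs = (cx + radii[:, None] * np.cos(thetas[None, :])).round().astype(int)
    ys = (cy + radii[:, None] * np.sin(thetas[None, :])).round().astype(int)
    xs = np.clip(xs, 0, omni.shape[1] - 1)
    ys = np.clip(ys, 0, omni.shape[0] - 1)
    return omni[ys, xs]
```

Even this toy version exhibits the problem noted above: near the inner radius the source circle contains far fewer pixels than the panorama has columns, so many output pixels duplicate the same few source pixels, which appears as blur after interpolation.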
To deal with the heavily distorted images captured by omni-directional cameras, an accurate and reliable space-line detection method that works without unwarping the distorted image is proposed. Also, to model the imaging process of an omni-directional camera, a new camera model is proposed, together with a convenient process for calibrating an omni-camera easily. This new calibration technique requires only one straight line in the environment, without knowledge of the line's position or direction, making it possible for non-technical users to carry out the calibration work; this is a big step toward consumer electronics.
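For intuition about why lines can be detected directly on the distorted image, note that under the unifying sphere model for central catadioptric cameras (reviewed in Chapter 3), a space line projects to a conic arc in the omni-image. The sketch below illustrates that projection; the mirror parameter xi, focal length f, and principal point are illustrative assumptions, not the dissertation's calibrated values.

```python
# A minimal sketch of the unifying (sphere) projection model for central
# catadioptric cameras; parameter values here are assumptions for
# illustration only.
import numpy as np

def project_unifying(P, xi=0.9, f=300.0, u0=320.0, v0=240.0):
    """Project 3D points P (N x 3) via the unifying model:
    1) project each point onto the unit viewing sphere;
    2) perspective-project from a point at distance xi above the center."""
    S = P / np.linalg.norm(P, axis=1, keepdims=True)  # points on the sphere
    u = f * S[:, 0] / (S[:, 2] + xi) + u0
    v = f * S[:, 1] / (S[:, 2] + xi) + v0
    return np.stack([u, v], axis=1)

# Sample a space line X(t) = A + t*D; its image traces a conic arc.
A = np.array([1.0, 0.5, 1.0])
D = np.array([0.0, 1.0, 0.2])
t = np.linspace(-5.0, 5.0, 200)
conic_arc = project_unifying(A[None, :] + t[:, None] * D[None, :])
```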
In addition, from the viewpoint of a consumer, the setup procedure of an omni-vision system should be convenient enough for a typical user with no technical background. In this sense, a new binocular omni-vision system is proposed that allows the user to place the two omni-directional cameras freely, at any positions and with any orientations. After the two cameras are placed, the system automatically derives their positions and orientations by analyzing the space lines in the environment. As a result, the binocular omni-vision system can compute 3D information correctly for use in many advanced human-machine interaction applications. Furthermore, for applications requiring precise 3D information, an optimization framework together with three optimization algorithms is proposed to tell the user where to place the two omni-cameras and which orientations are best. With these optimization algorithms, the user can set up a binocular omni-vision system that acquires the most precise 3D data.
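As a concrete illustration of the 3D computation step, the sketch below triangulates a point from two viewing rays by taking the midpoint of the shortest segment between them. This midpoint method is a standard textbook technique offered here as an assumption, not necessarily the dissertation's exact formulation; the camera centers and ray directions are assumed to have been recovered already (e.g., by the adaptation process described above).

```python
# A minimal sketch of midpoint triangulation from two viewing rays,
# x = c_i + t_i * d_i; a standard technique, assumed here for illustration.
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Return the midpoint of the shortest segment between two rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1
    a = d1 @ d2
    denom = 1.0 - a * a  # approaches zero as the rays become parallel
    t1 = (d1 @ b - a * (d2 @ b)) / denom
    t2 = (a * (d1 @ b) - d2 @ b) / denom
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))
```

Note that when the two rays are nearly parallel the denominator approaches zero and the estimate becomes very sensitive to noise; avoiding such ill-conditioned geometry is precisely what an optimized camera placement is meant to achieve.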
Finally, the proposed omni-vision techniques are extended to the application of indoor parking lot management. The proposed system uses omni-directional cameras mounted on the ceiling and automatically analyzes the acquired images to locate the parking spaces and detect vacant ones. Unlike existing systems for this application, the proposed one requires fewer cameras, thanks to the cameras' wider fields of view, and is much more convenient to set up because of its automatic parking-space analysis capability.
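For illustration, here is a minimal sketch of one possible vacancy-detection step; the background-differencing scheme and threshold are assumptions for the sketch, not the dissertation's actual classifier, and space_masks (per-space boolean masks produced by the automatic parking-space analysis) is a hypothetical input.

```python
# An illustrative sketch of vacancy detection over known parking-space
# regions in a grayscale omni-image; the scheme and threshold are assumed.
import numpy as np

def detect_vacancies(frame, background, space_masks, thresh=18.0):
    """Compare each parking-space region of the current frame against an
    empty-lot background image; spaces whose mean absolute intensity
    difference stays below the threshold are reported as vacant."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    vacant = []
    for space_id, mask in space_masks.items():  # mask: boolean array
        if diff[mask].mean() < thresh:
            vacant.append(space_id)
    return vacant
```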
The feasibility and effectiveness of all the above proposed methods and systems are demonstrated by theoretical analyses and good experimental results.

Chinese Abstract iii
English Abstract v
Acknowledgements vii
Table of Contents viii
List of Tables xi
List of Figures xii
Chapter 1. Introduction 1
1.1. Research Motivation 1
1.2. Survey of Related Works 2
1.3. Contribution of This Study 7
1.4. Dissertation Organization 8
Chapter 2. Overview of Proposed Techniques and Ideas 9
2.1 A Modified Unifying Model for Omni-cameras 9
2.2 Space Line Detection Techniques for Omni-cameras by Equal-width Curve Extractions 10
2.3 Automatic Adaptation Techniques of Binocular Omni-vision Systems to Any System Setup 10
2.4 Optimal Design and Placement of Omni-cameras in Binocular Vision Systems for Accurate 3D Data Measurement 12
2.5 An Omni-vision-based Indoor Parking Lot System with the Capability of Automatic Parking Space Detection 13
Chapter 3. Omni-camera Structure and Models 15
3.1 Catadioptric Omni-camera Structure 15
3.2 Review of the Unifying Model for Omni-cameras 16
Chapter 4. Space Line Detection for Omni-cameras by Equal-width Curve Extractions 18
4.1. Problems of Existing Methods 18
4.2. Proposed Method 19
4.3. Experimental Results 22
Chapter 5. Binocular Omni-vision Systems with an Automatic Adaptation Capability to Any System Setup for 3D Vision Applications 26
5.1. Overview of the Adaptation Process 26
5.2. Space Line Detection in Omni-images 28
5.3. Calculation of Included Angle between Two Cameras’ Optical Axes Using Detected Lines 34
5.4. Proposed Technique for Deriving Camera Poses 38
5.5. Experimental Results 40
Chapter 6. Optimal Design and Placement of Omni-cameras in Binocular Vision Systems for Accurate 3D Data Measurement 46
6.1. Overview of the Optimization Framework 46
6.2. Related Formulas for Omni-cameras 48
6.3. Formula to Derive the Degree of Accuracy 50
6.4. Fast Configuration Optimization for Regular Cases 53
6.5. Optimization for General Cases 67
6.6. Experimental Results 70
6.7. Comparisons with Existing Methods 74
6.8. Conclusion 80
Chapter 7. A Convenient Vision-based System for Automatic Detection of Parking Spaces in Indoor Parking Lots Using Wide-angle Cameras 81
7.1. Overview of Proposed Method 81
7.2. Proposed Calibration Method using Only One Space Line 83
7.3. Review of the Proposed Space Line Detection Method 89
7.4. Parking Space Segmentation and Vacancy Detection 96
7.5. Experimental Results of Proposed Calibration Method 103
7.6. Experimental Results of Parking Space Segmentation 108
7.7. An Example of Setting up an Indoor Parking Lot System 109
7.8. Conclusions 111
Chapter 8. Conclusions and Suggestions for Future Works 113
8.1. Conclusions 113
8.2. Suggestions for Future Works 116
References 118
List of Publications 122
Vitae 123

