Student: Tsung-Che Huang (黃聰哲)
Title: Indoor Positioning and Navigation Based on Control Spherical Panoramic Images (基於全景控制影像進行室內定位及導航之可行性分析)
Advisor: Yi-Hsing Tseng (曾義星)
Degree: Master
Institution: National Cheng Kung University
Department: Department of Geomatics
Discipline: Engineering (Surveying Engineering)
Document type: Academic thesis
Year of publication: 2016
Academic year of graduation: 104
Language: English
Pages: 83
Keywords (Chinese): 球形全景影像、室內定位及導航、影像匹配
Keywords (English): spherical panoramic image, indoor positioning and navigation, image feature matching
Continuous indoor and outdoor positioning and navigation is an important part of mobile mapping technology. However, the positioning accuracy of Global Navigation Satellite Systems (GNSS) degrades severely in indoor environments because of signal occlusion, so developing high-accuracy indoor positioning and navigation theory is a primary task. This study analyzes indoor positioning and navigation based on spherical panoramic images (SPIs). A database of control images with known image orientations is first established in the target space; image feature algorithms then automatically extract conjugate points in the overlap between an unknown image and the control images, from which the orientation of the unknown image is computed. The complete field of view (FOV) of an SPI provides rich image information, effectively reducing the number of images needed and improving computational efficiency; it not only overcomes the FOV limitation of conventional frame images but also avoids the confusion caused by handling a large number of images.

The study proceeds in two stages. The first stage establishes the control image database; a control image is one whose exterior orientation is known, which can be obtained through bundle block adjustment. In the second stage, the control image database is searched automatically for control images overlapping an SPI of unknown orientation, and the conjugate points obtained by feature extraction and matching are used to solve the orientation of the unknown image. For SPI matching and blunder detection, three camera configurations were tested: camera translation, camera rotation, and camera tilt. A blunder-detection model suited to the erroneous conjugate-point pairs produced by SPI matching is proposed, and the experimental results show that the proposed model is feasible and effective for detecting erroneous conjugate points in SPIs.

To validate the proposed orientation-solution theory, indoor positioning experiments were conducted with two types of conjugate points: manually measured conjugate points and conjugate points measured by automatic image matching. The test field was the building of the Department of Geomatics, National Cheng Kung University. The results show a positioning accuracy of a few centimeters with manually measured conjugate points and about twenty centimeters with automatically matched points; the main factors affecting positioning accuracy are the quality and number of conjugate points and the distribution of the control images. For attitude estimation, simulated data were first used to test the feasibility of the proposed theory; the results show that when the conjugate-point observations contain no error, the attitude angles of the unknown image are solved correctly. In the real experiments with manually and automatically measured conjugate points, however, the attitude solutions were unstable. At this stage we cannot draw a firm conclusion, but the simulation results indicate that conjugate-point quality also affects attitude estimation.

Continuous indoor and outdoor positioning and navigation is a key goal in the field of mobile mapping technology. However, positioning and navigation accuracy is largely degraded in indoor or occluded areas because GNSS signals there are weak or blocked. Targeting the need for high-accuracy indoor and outdoor positioning and navigation in mobile mapping applications, the objective of this study is to develop a novel method of indoor positioning and navigation using spherical panoramic images (SPIs). An SPI provides a much wider field of view (FOV) than a frame image; it not only breaks the FOV limitation of frame imagery but also resolves the confusion of handling a large number of images.
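The complete FOV of an SPI can be made concrete with a small sketch. Assuming the SPI is stored as an equirectangular panorama (a common storage format, not specified in the abstract) with an assumed longitude/latitude convention, every pixel maps to a unit direction vector on the sphere, so a single image observes rays in all directions:

```python
import math

def pixel_to_ray(col, row, width, height):
    """Map an equirectangular SPI pixel to a unit direction vector.

    Assumes the panorama covers 360 deg horizontally and 180 deg
    vertically, with longitude increasing along columns and latitude
    decreasing along rows (an illustrative convention).
    """
    lon = (col / width) * 2.0 * math.pi - math.pi     # -pi .. +pi
    lat = math.pi / 2.0 - (row / height) * math.pi    # +pi/2 .. -pi/2
    # Spherical-to-Cartesian conversion; the result has unit length.
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))
```

For example, the centre pixel of an 8192 x 4096 panorama maps to the +x axis, and the top row maps to the zenith (+z); no frame-camera model can cover this full sphere of directions.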

Two steps are planned in the technology roadmap. The first step establishes a control SPI database containing a sufficient number of well-distributed control SPIs pre-acquired in the target space. A control SPI is an SPI with known exterior orientation parameters (EOPs), which can be solved by bundle network adjustment of SPIs. Once the control SPI database exists, the target space is ready to provide positioning and navigation services. The second step solves the position and orientation parameters (POPs) of a newly taken SPI using overlapping SPIs retrieved from the control SPI database. Methods for matching SPIs and finding conjugate image features are developed and tested under three camera configurations: translation, rotation, and tilt. Moreover, this study proposes a model for eliminating incorrect matches between two overlapping SPIs; the results show that applying the model improves both the efficiency and the reliability of SPI matching.
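The geometric test behind eliminating incorrect matches can be sketched in a simplified form. For a conjugate ray pair (v1, v2) between two SPIs related by an essential matrix E, the coplanarity condition requires v2^T E v1 = 0; pairs with a large residual are flagged as wrong matches. This is only a stand-in for the transformation model actually proposed in the thesis (Section 4.5), and it assumes E is already known rather than estimated:

```python
def coplanarity_residual(e_mat, ray1, ray2):
    """Residual of the coplanarity condition ray2^T * E * ray1.

    Zero for a geometrically consistent conjugate ray pair; large
    values indicate a wrong match.
    """
    er = [sum(e_mat[i][j] * ray1[j] for j in range(3)) for i in range(3)]
    return sum(ray2[i] * er[i] for i in range(3))

def filter_matches(e_mat, pairs, tol=1e-3):
    """Keep only conjugate ray pairs whose residual is below tol."""
    return [p for p in pairs if abs(coplanarity_residual(e_mat, *p)) < tol]
```

In practice E itself must be estimated from the matches, so schemes such as RANSAC (Section 4.5.1) alternate between hypothesizing E from a minimal sample and scoring the remaining pairs with a residual of this kind.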

For validation, two kinds of corresponding points were applied in the experiment: manually measured points and automatically matched points, so that the effect of matching could be assessed. The test field is an indoor space of the Department of Geomatics. The results show positioning errors of less than a few centimeters for manually measured points, while much larger errors (about twenty centimeters) resulted from improper corresponding-point pairs generated by the automatic matching process. This reveals the importance of corresponding-point quality; the number of corresponding points and the distribution of control SPIs are also confirmed to affect the positioning result. To validate the feasibility of the proposed orientation computation, we first simulated control and query SPIs with known EOPs, so that the relative orientation and scale factor could also be calculated; the corresponding points were likewise generated by simulation. The simulation results show that the proposed theory works when the observations are error-free. However, the orientation results of the real experiments are sometimes unstable, which deviates from our expectation. At this stage we do not yet have a clear conclusion about the orientation computation; what the simulated-data tests confirm so far is that measurement errors in the corresponding points affect the orientation results.
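The sensitivity of the position solution to corresponding-point errors can be illustrated with the space-intersection step implied above. In a minimal sketch (function names are illustrative, and only two stations are used, whereas the thesis adjusts many rays), a point is located by intersecting two rays from control SPIs with known positions; with error-free rays the intersection is exact, while noisy rays only admit a compromise point:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def intersect_rays(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays c1+s*d1 and c2+t*d2.

    For error-free conjugate rays the two rays meet and the midpoint is
    their exact intersection; measurement errors pull the rays apart and
    the midpoint becomes an approximation, which is how corresponding-
    point quality propagates into positioning error.
    """
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = [p - q for p, q in zip(c1, c2)]
    d, e = dot(d1, w), dot(d2, w)
    den = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / den
    t = (a * e - b * d) / den
    p1 = [p + s * u for p, u in zip(c1, d1)]
    p2 = [p + t * u for p, u in zip(c2, d2)]
    return [(x + y) / 2.0 for x, y in zip(p1, p2)]
```

Two stations at (0,0,0) and (2,0,0) observing a point at (1,1,0), for instance, recover it exactly; perturbing either ray direction shifts the recovered point, mirroring the centimeter-versus-twenty-centimeter gap between the manual and automatic measurements.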

摘要 I
ABSTRACT III
ACKNOWLEDGEMENT V
LIST OF TABLES VIII
LIST OF FIGURES X
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Objective 2
1.3 Research Approach 3
1.4 Thesis Structure 3
Chapter 2 Navigation Environment Setting 4
2.1 The Framework 4
2.2 The Geometry of SPI 6
2.3 Exterior Orientation of SPI 9
2.4 Bundle Adjustment of SPIs 11
Chapter 3 Relative Orientation of an SPI Pair 13
3.1 Relative Orientation 13
3.2 Coplanar Condition 14
3.3 Essential Matrix 16
3.4 Estimation of Relative Orientation 18
3.4.1 Ambiguity of Intersection 19
3.4.2 Estimation of Scale Factor 22
3.5 Validation of Computation 24
Chapter 4 Image Matching 29
4.1 Image Matching Method 29
4.2 Image Feature Detection 29
4.3 Feature Description and Matching 33
4.4 Parameters Testing in SURF Algorithm 35
4.5 Error Matching Detection and Elimination 38
4.5.1 Random Sample Consensus 38
4.5.2 Transformation Model 40
4.6 Preliminary Tests 42
Chapter 5 Experiments 51
5.1 Test Field 51
5.1.1 Establishing Control Field 52
5.1.2 Establishing Control SPIs 56
5.2 Searching Corresponding SPIs 59
5.3 Test Results 63
5.3.1 Test Case I (with Manual Measurements) 63
5.3.2 Test Case II (with Image Matching) 68
5.4 Analysis and Discussion 72
Chapter 6 Conclusions and Suggestions 77
6.1 Conclusions 77
6.2 Suggestions 79
REFERENCES 81
Bay, H., Tuytelaars, T., and Van Gool, L. (2008), “SURF: Speeded up robust features,” Computer Vision and Image Understanding, 110(3), pp. 346~359.

Fischler, M.A. and Bolles, R.C. (1981), “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, 24, pp. 381~395.

Hartley, R. and Zisserman, A. (2004), Multiple View Geometry in Computer Vision, Cambridge University Press, pp. 257~260.

Hayet, J.B., Lerasle, F. and Devy, M. (2007), “A visual landmark framework for mobile robot navigation,” Image and Vision Computing, pp. 1341~1351.

Horn, B.K.P. (1990), “Recovering baseline and orientation from essential matrix,” http://people.csail.mit.edu/bkph/articles/Essential.pdf

Joglekar, J. and Gedam, S.S. (2012), “Area based image matching methods – A survey,” International Journal of Emerging Technology and Advanced Engineering, 2(1), pp. 130~136.

Lee, Y., Lee, S., Kim, D. and Oh, J.K. (2013), “Improved industrial part pose determination based on 3D closed-loop boundaries,” Proceedings of the IEEE International Symposium on Robotics (ISR), pp. 1~3.

Lienhart, R. and Maydt, J. (2002), “An extended set of Haar-like features for rapid object detection,” Proceedings of the IEEE International Conference on Image Processing, Vol. 1, pp. 900~903.

Lin, K.Y. (2014), Bundle Adjustment of Multi-station Spherical Panorama Images with GPS Positioning, Master’s Thesis, Department of Geomatics, National Cheng Kung University.

Longuet-Higgins, H.C. (1981), “A computer algorithm for reconstructing a scene from two projections,” Nature, 293, pp. 133~135.

Nguyen, V.V., Kim, J.G. and Lee, J.W. (2011), “Panoramic image-based navigation for smart-phone in indoor environment,” Springer-Verlag, Berlin Heidelberg, pp. 370~376.

Nister, D. (2004), “An efficient solution to the five-point relative pose problem,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(6), pp. 758~759.

Ressl, C. (2000), “An introduction to the relative orientation using the trifocal tensor,” International Archives of Photogrammetry and Remote Sensing, Vol. XXXIII, Part B3, pp. 769~776.

Scaramuzza, D. and Fraundorfer, F. (2012), “Visual odometry: Part I – The first 30 years and fundamentals,” IEEE Robotics and Automation Magazine, 18(4), pp. 85~87.

Se, S., Lowe, D. and Little, J. (2002), “Global localization using distinctive visual features,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 226~231.

Sih, Y.R. (2014), Study on Vision-Based Navigation – Integration of Coplanarity and Collinearity Condition for Ego-Motion Estimation, Master’s Thesis, Department of Geomatics, National Cheng Kung University.

Stewenius, H., Engels, C. and Nister, D. (2006), “Recent developments on direct relative orientation,” ISPRS Journal of Photogrammetry and Remote Sensing, 60(4), pp. 284~294.

Wang, E. and Yan, W. (2013), “iNavigation: an image based indoor navigation system,” Springer Science+Business Media, New York, pp. 1597~1615.

Zhang, C., Xu, J., Xi, N., Jia, Y. and Li, W. (2012), “Development of an omni-directional 3D camera for robot navigation,” Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 262~267.
