臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Student: 尹克清 (Christian Ivancsits)
Title: 應用視覺導航系統於小型無人飛行載具 (Visual Navigation System for Small Unmanned Aerial Vehicles)
Advisor: 李敏凡 (Min-Fan Ricky Lee)
Degree: Master's
Institution: National Taiwan University of Science and Technology
Department: Graduate Institute of Automation and Control
Discipline: Engineering
Field of study: Mechanical Engineering
Document type: Academic thesis
Year of publication: 2010
Academic year of graduation: 98 (2009–2010)
Language: English
Number of pages: 136
Keywords: machine vision, visual odometry, robust feature tracking, absolute orientation, SIFT, RANSAC, autonomous navigation, unmanned aerial vehicle, networked control system
Statistics:
  • Times cited: 0
  • Views: 341
  • Downloads: 65
  • Times bookmarked: 0
In recent years, small Unmanned Aerial Vehicles (UAVs) have experienced a strong boost in performance, opening the prospect of several military and civil applications such as surveillance, monitoring, and inspection. However, the lack of effective autonomous navigation capabilities has severely limited their deployment. Visual navigation methods are attractive candidates because of the low weight of video cameras. The major issues in the development of a visual navigation system for small UAVs are: 1) technical constraints, 2) robust image feature matching, and 3) an efficient and precise method for visual navigation. This thesis addresses these three issues, provides methods for their solution, and evaluates their feasibility and effectiveness.
The technical constraints of small UAVs prohibit on-board computation of visual navigation. This limitation is overcome with the proposed wireless networked control system, which offloads the data processing from the UAV to a ground-based process computer. Feature matching, the front-end of all feature-based visual navigation methods, is addressed with a robust method based on SIFT feature descriptors that achieves real-time performance by forgoing the explicit scale invariance of image features. The presented navigation concept implements a visual odometry system with a single calibrated camera. The proposed method incrementally reconstructs the camera path and the structure of the environment from two-view epipolar geometry, followed by sparse bundle adjustment.
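The core of SIFT-style descriptor matching is the nearest-neighbour search with Lowe's distance-ratio test, which discards ambiguous correspondences. The following is a minimal illustrative sketch of that test, not the thesis's actual feature-matching implementation; the descriptors and the 0.8 threshold are assumed for the example:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with a distance-ratio test.

    A match (i, j) is kept only when the best distance is clearly smaller
    than the second-best, which rejects ambiguous correspondences.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # L2 distance to every candidate
        j, k = np.argsort(dists)[:2]                # best and second-best candidate
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

# Tiny synthetic descriptor sets: rows 0 and 1 correspond across the two images.
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])
print(match_descriptors(a, b))  # [(0, 0), (1, 1)]
```

Real descriptors are 128-dimensional, but the ratio test is dimension-agnostic, which is why a 2-D toy example suffices to show the mechanism.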
The concept for a wireless networked control system was evaluated with latency and throughput measurements in different environments. The experimental setup, conforming to the IEEE 802.11n standard, achieves an average latency of 1.3 ms and a data throughput of 3,000 kB/s up to a distance of 70 m. The results demonstrate the feasibility of real-time closed-loop navigation control with the proposed concept.
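The measurement procedure behind such latency figures can be sketched as a UDP echo round trip. The loopback sketch below is only an illustration of the method; the thesis experiments ran over actual IEEE 802.11n hardware, and the packet size, sample count, and addresses here are assumptions:

```python
import socket
import threading
import time

def udp_echo_server(sock):
    # Echo each datagram back to its sender until a "stop" marker arrives.
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"stop":
            break
        sock.sendto(data, addr)

# Bind an echo server on the loopback interface (stand-in for the UAV link).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
samples = []
for _ in range(20):
    t0 = time.perf_counter()
    client.sendto(b"ping", ("127.0.0.1", port))
    client.recvfrom(1024)
    samples.append((time.perf_counter() - t0) * 1000.0)  # round trip in ms

client.sendto(b"stop", ("127.0.0.1", port))
avg_latency_ms = sum(samples) / len(samples)
print(f"average round-trip latency: {avg_latency_ms:.3f} ms")
```

Averaging over many round trips smooths out scheduling jitter; over a real wireless link one would also log per-sample values to capture worst-case latency, which matters for closed-loop control.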

The presented feature matching method was tested on ten frames of a benchmark image sequence. The evaluation shows results comparable to SIFT in the number of feature correspondences, and superior performance with respect to the number of false matches when applied to visual navigation. The proposed method for robust feature matching computes up to 8.4 times faster than SIFT on images of 640×480 pixels.
The visual odometry was evaluated with real-life image sequences. The proposed method achieved an error of 1.65% with respect to the total path length of 9.43 m on a circular trajectory. The reconstruction from 840 images includes 42 camera positions and 2113 3D world points.
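An error figure "relative to total path length" on a closed trajectory is typically the start-to-end gap of the reconstructed loop divided by the distance traversed. The sketch below illustrates that computation on a synthetic circle with an injected drift; the radius, sample count, and drift vector are assumptions for the example, not the thesis data:

```python
import numpy as np

def path_length(points):
    """Total length of a polyline given as an (N, 2) array of positions."""
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

def closure_error_percent(points):
    """Drift measured as the start-to-end gap of a nominally closed loop,
    expressed as a percentage of the traversed path length."""
    gap = float(np.linalg.norm(points[-1] - points[0]))
    return 100.0 * gap / path_length(points)

# Synthetic circular trajectory of radius 1.5 m, sampled at 43 poses.
theta = np.linspace(0.0, 2.0 * np.pi, 43)
circle = np.stack([1.5 * np.cos(theta), 1.5 * np.sin(theta)], axis=1)
circle[-1] += np.array([0.1, 0.1])  # simulated odometry drift at loop closure
print(f"path length: {path_length(circle):.2f} m, "
      f"error: {closure_error_percent(circle):.2f}%")
```

Normalizing the closure gap by path length makes drift comparable across trajectories of different scale, which is why visual odometry results are usually reported this way.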
Table of Contents
ABSTRACT I
ACKNOWLEDGMENTS III
TABLE OF CONTENTS IV
LIST OF FIGURES VII
LIST OF TABLES X
CHAPTER 1 INTRODUCTION 1
1.1 BACKGROUND AND MOTIVATION 1
1.2 AUTONOMOUS NAVIGATION 2
1.2.1 Simultaneous Localization and Mapping 2
1.2.2 Visual Odometry 3
1.2.3 Differences between Visual SLAM and Visual Odometry 3
1.3 TECHNICAL CONSTRAINTS OF SMALL UAVS 4
1.4 CONTRIBUTION OF THE PRESENTED WORK 5
1.5 OUTLAY AND STRUCTURE OF THIS THESIS 6
CHAPTER 2 ANALYSIS 9
2.1 COMMERCIALLY AVAILABLE SMALL MULTI-ROTOR UAVS 9
2.2 SKYBOTIX COAX UAV 11
2.2.1 Sensory Equipment 11
2.2.2 Onboard Camera 12
2.2.3 Embedded Computer 13
2.3 AUTONOMOUS VISUAL NAVIGATION 13
2.3.1 Visual Simultaneous Localization and Mapping 14
2.3.2 Visual Odometry 16
2.3.3 Motivation for Implementing Visual Odometry 17
2.4 EXPERIMENTAL ENVIRONMENT 19
CHAPTER 3 CONCEPT FOR A WIRELESS NETWORKED CONTROL SYSTEM 20
3.1 MOTIVATION FOR APPLYING WIRELESS NETWORKED CONTROL 20
3.2 DESIGN OF THE WIRELESS NETWORKED CONTROL SYSTEM 21
3.3 ANTICIPATED FIELD OF APPLICATION AND LIMITATIONS 23
3.4 WIRELESS COMPUTER NETWORKS 23
3.4.1 IEEE Standards for Wireless Local Area Networks 24
3.4.2 Network Topology 25
3.4.3 Network Parameters 26
3.4.4 Communications Protocols 28
3.5 TRANSMISSION OF IMAGE DATA 29
CHAPTER 4 FUNDAMENTAL METHODS OF MACHINE VISION 31
4.1 CORNER DETECTION 31
4.1.1 Moravec Corner Detector 31
4.1.2 Harris Corner Detector 32
4.1.3 Shi and Tomasi's Method 34
4.2 FEATURE MATCHING 34
4.2.1 Correlation based Methods 35
4.2.2 Scale Invariant Feature Transform 36
4.3 PINHOLE CAMERA MODEL 38
4.4 EPIPOLAR GEOMETRY 41
4.4.1 Fundamental Matrix 42
4.4.2 Essential Matrix 44
4.5 RANDOM SAMPLE CONSENSUS METHOD 45
4.5.1 Method Overview 45
4.5.2 Line Fitting Example 46
4.5.3 Probabilistic Derivation of the Number of Iterations 47
CHAPTER 5 IMAGE FEATURE MATCHING WITH HIGHLY DISTINCTIVE FEATURE DESCRIPTORS 49
5.1 BACKGROUND AND MOTIVATION 49
5.2 METHOD OVERVIEW 50
5.3 FEATURE POINT DETECTION 52
5.4 FEATURE DESCRIPTOR 52
5.4.1 Image Smoothing 53
5.4.2 Image Gradients 54
5.4.3 Dominant Feature Orientations 56
5.4.4 Image Sample Rotation 59
5.4.5 Feature Descriptor Generation 61
5.5 FEATURE MATCHING 65
CHAPTER 6 VISUAL ODOMETRY 67
6.1 METHOD OVERVIEW 67
6.2 CAMERA CALIBRATION 70
6.3 RELATIVE CAMERA POSE AND 3D WORLD POINTS 71
6.3.1 Estimation of the Essential Matrix 72
6.3.2 Extraction of the Camera Matrix 74
6.3.3 Triangulation of 3D Points 76
6.4 INCREMENTAL RECONSTRUCTION OF CAMERA PATH AND ENVIRONMENT 78
6.4.1 Absolute Orientation Problem 80
6.4.2 Transformation of Local Coordinates to the Global Coordinate Frame 86
6.5 NON-LINEAR OPTIMIZATION / BUNDLE ADJUSTMENT 88
CHAPTER 7 RESULTS 90
7.1 PERFORMANCE OF THE WIRELESS NETWORKED CONTROL SYSTEM 90
7.1.1 Latency and Throughput 91
7.1.2 Summary Wireless Networked Control System 94
7.2 PERFORMANCE OF FEATURE MATCHING WITH HIGHLY DISTINCTIVE FEATURE DESCRIPTORS 96
7.2.1 Comparison with SIFT 96
7.2.2 Computation Time of HDF Compared to SIFT 105
7.2.3 Computation Time of HDF on Multi-Core Computers 106
7.2.4 Summary Feature Matching with Highly Distinctive Feature Descriptors 107
7.3 PERFORMANCE OF THE VISUAL ODOMETRY SYSTEM 108
7.3.1 Experimental Evaluation 108
7.3.2 Summary Visual Odometry System 124
CHAPTER 8 CONCLUSION 125
8.1 CONCLUSION 125
8.2 FUTURE WORK 127
APPENDIX 129
REFERENCES 131
GLOSSARY OF NOTATION 134
BIOGRAPHY 136