National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Detailed Record

Author: 柯孜澄
Author (English): Zih-Cheng Ke
Title: 影像辨識應用於直昇機自動降落系統
Title (English): Image Recognition Applied to the Helicopter Landing System
Advisor: 王冠智
Advisor (English): Luke K. Wang
Committee members: 蕭飛賓、黃國源
Committee members (English): Fei-Bin Hsiao, Kou-Yuan Huang
Oral defense date: 2013-07-08
Degree: Master's
Institution: 國立高雄應用科技大學 (National Kaohsiung University of Applied Sciences)
Department: 電機工程系博碩士班 (Graduate Program, Department of Electrical Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2014
Graduation academic year: 101 (ROC calendar)
Language: English
Pages: 57
Keywords (Chinese): 影像辨識、停機坪、霍夫轉換、加速穩健特徵、一位元轉換、影像矩、哈理斯角檢測
Keywords (English): Image Recognition; Helipad; Hough Transform; SURF; One-Bit Transform; Image Moment; Harris Corner Detector
Usage statistics:
  • Cited by: 2
  • Views: 776
  • Rating:
  • Downloads: 169
  • Bookmarked: 1
This study proposes a helipad image recognition method for a helicopter landing system. To make the method applicable to most landing environments, a standard helipad pattern is used, and several image recognition techniques are combined to raise the recognition rate. The goal is not only to locate the helipad in the image but also to mark its center reliably, so that the helicopter can land squarely on the helipad. The framework combines the Hough transform and feature recognition with the one-bit transform and Harris corner detection. For each frame of the image sequence, the Hough transform first finds circular objects, speeded-up robust features (SURF) and image moments then discard circles that do not match the helipad, and finally the Harris corner detector refines the position of the circle center. The experiments also compare the recognition success rates of these methods. For the final results, a six-degree-of-freedom Stewart platform built in SolidWorks is used to simulate the helicopter landing.
This research proposes an image recognition method for a helicopter landing system. To be usable in most environments, our study adopts the standard helipad pattern and combines multiple image processing methods to increase the recognition rate. In addition to finding the helipad in the image, the method accurately locates the center of the helipad so that the helicopter can land correctly on it. The framework mainly comprises the Hough transform, SURF, the one-bit transform, and the Harris corner detector. First, images are captured from a webcam and circular objects are detected with the Hough transform. Then, circles that do not match the helipad are removed using SURF and moment features. Finally, the Harris corner detector corrects the location of the circle center. The advantages and disadvantages of these methods and their success rates are also presented in this thesis. In the final experiment, a six-DOF Stewart platform modeled in SolidWorks is used to simulate the helicopter landing scenario.
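The following is a minimal sketch of the pipeline described above, assuming Python with OpenCV (cv2) and NumPy: the Hough transform proposes candidate circles, Hu moments of each candidate region are compared against a reference helipad image to discard false circles, and the Harris corner response inside the accepted circle refines the center estimate. The SURF matching and one-bit transform stages of the thesis are omitted here (SURF lives in the non-free opencv-contrib module), and the parameter values, the helipad_template.png path, and the detect_helipad_center function name are illustrative assumptions rather than the thesis's implementation.

```python
# Minimal sketch, not the thesis's code: Hough circles -> Hu-moment filtering
# -> Harris-corner refinement of the helipad center. Parameters are illustrative.
import cv2
import numpy as np

def detect_helipad_center(frame, template_gray, hu_threshold=0.5):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)

    # Step 1: Hough transform proposes candidate circles (the helipad ring).
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=40, minRadius=20, maxRadius=200)
    if circles is None:
        return None

    # Hu moments of the binarized reference helipad, used to reject false circles.
    _, tmpl_bin = cv2.threshold(template_gray, 0, 255,
                                cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    tmpl_hu = cv2.HuMoments(cv2.moments(tmpl_bin)).flatten()

    def log_hu(hu):
        # Log-scaled Hu moments keep the comparison insensitive to magnitude.
        return np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

    best = None  # (x, y, r, distance)
    for x, y, r in np.round(circles[0]).astype(int):
        # Step 2: compare Hu moments of each circle's interior with the template.
        roi = gray[max(y - r, 0):y + r, max(x - r, 0):x + r]
        if roi.size == 0:
            continue
        _, roi_bin = cv2.threshold(roi, 0, 255,
                                   cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        hu = cv2.HuMoments(cv2.moments(roi_bin)).flatten()
        dist = np.sum(np.abs(log_hu(hu) - log_hu(tmpl_hu)))
        if dist < hu_threshold and (best is None or dist < best[3]):
            best = (x, y, r, dist)

    if best is None:
        return None
    x, y, r, _ = best

    # Step 3: Harris corner response inside the accepted circle; the centroid of
    # the strongest corners (the "H" mark) refines the landing-point estimate.
    x0, y0 = max(x - r, 0), max(y - r, 0)
    roi = np.float32(gray[y0:y + r, x0:x + r])
    harris = cv2.cornerHarris(roi, blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(harris > 0.01 * harris.max())
    if len(xs) > 0:
        x, y = int(x0 + xs.mean()), int(y0 + ys.mean())
    return (x, y)

if __name__ == "__main__":
    # Illustrative usage with a webcam frame and a hypothetical template file.
    template = cv2.imread("helipad_template.png", cv2.IMREAD_GRAYSCALE)
    ok, frame = cv2.VideoCapture(0).read()
    if ok and template is not None:
        print(detect_helipad_center(frame, template))
```

Hu moments are invariant to translation, scale, and rotation, which is what makes them a reasonable filter for the helipad mark regardless of the camera's altitude and heading above the pad.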
ABSTRACT
摘要 (ABSTRACT IN CHINESE)
致謝 (ACKNOWLEDGEMENTS)
CONTENTS
LIST OF FIGURES
CHAPTER 1. INTRODUCTION
CHAPTER 2. CORNER AND INTEREST POINT DETECTION
2.1 Speeded-up Robust Features
2.1.1 Integral Images
2.1.2 Fast Hessian Detector
2.1.3 Approximation of Hessian Matrix
2.1.4 Scale-Space Representation
2.1.5 Interest Point Localization
2.1.6 Interest Point Descriptor
2.1.6-1 Orientation Assignment
2.1.6-2 Descriptor Based on Sum of Haar Wavelet Responses
2.2 Harris Corner Detector
2.2.1 Corner Feature
2.2.2 Harris Corner Detector
CHAPTER 3. GEOMETRIC CHARACTERISTICS AND ONE-BIT TRANSFORM
3.1 Hough Transform
3.1.1 Line Detection
3.1.2 Circle Detection
3.1.3 Two-Stage Hough Transform
3.2 One-bit Transform
3.3 Invariant Moments
3.3.1 Raw Moments
3.3.2 Central Moments
3.3.3 Hu's Seven Moments
CHAPTER 4. EXPERIMENTAL AND SIMULATION RESULTS
4.1 Feature Detector
4.2 Circle Detector
4.3 Geometric Feature Detector
4.4 Corrected Center of Helipad
4.5 Example
4.6 Simulation
CHAPTER 5. CONCLUSION
REFERENCES