National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Student: 林邦昱
Student (English): Lin, Pang-Yu
Title (Chinese): 利用SIFT及三角化重建3D點雲圖的一硬體實現
Title (English): 3D Point Cloud Reconstruction via SIFT and Triangulation and Its FPGA Implementation
Advisor: 陳進興
Advisor (English): Chen, Chin-Hsing
Committee members: 吳誠文, 陳進興, 孫永年, 張志文
Oral defense date: 2023-07-20
Degree: Master's
University: National Cheng Kung University (國立成功大學)
Department: Institute of Computer and Communication Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2023
Graduating academic year: 111 (ROC calendar)
Language: English
Pages: 53
Keywords (Chinese): 現場可程式化邏輯閘陣列, 尺度不變特徵轉換, 特徵匹配, 運動恢復結構, 3D點雲重建, 相機標定, 三角測量, 影像處理, 點雲
Keywords (English): FPGA, SIFT, feature matching, structure from motion, 3D point cloud reconstruction, camera calibration, triangulation, image processing, point cloud
Abstract (Chinese, translated): This thesis presents an FPGA-based system that reconstructs the 3D point cloud of an object from two views. The system consists of two modules. The first module transmits two images to the FPGA via RS232 and computes the coordinates and descriptors of the feature points in each image using the scale-invariant feature transform (SIFT) algorithm. The feature points of the two images are then matched, and the coordinates of the matched points are transmitted back to the PC via RS232.
Owing to the FPGA's limited memory, the system can perform feature matching only on image pairs of size 256*256. To obtain more matched points, and thus more accurate triangulation results and a better point-cloud visualization, an N*N image is divided into (N/256)*(N/256) subimages of size 256*256, and feature matching is performed on all subimages.
The second module is the triangulation module. First, a camera calibration algorithm computes the camera's intrinsic parameters. The feature-point coordinates produced by the first module are then sent to the FPGA via RS232, and the point-cloud data are computed by triangulation using the intrinsic parameters. Finally, the point cloud is transmitted back to the PC via RS232 and displayed in software.
The experimental results show that the 3D point cloud was successfully reconstructed from the images; the visualization confirms that the reconstruction is accurate and demonstrates that 3D point cloud reconstruction can indeed be carried out on an FPGA.
This thesis presents an FPGA implementation of structure from motion (SfM) from two views using the scale-invariant feature transform (SIFT) and a triangulation algorithm. The system consists of two modules. The first module transmits two images to the FPGA via RS232, finds the feature points and their descriptors via SIFT, and then performs feature matching to find the coordinates of the matched points; finally, it transmits the matched coordinates back to the PC via RS232.
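The descriptor-matching step of the first module can be sketched in software as a nearest-neighbor search with Lowe's ratio test. This is a minimal illustrative sketch, not the thesis's hardware matcher: the function name `match_descriptors` and the 0.8 ratio threshold are assumptions, and the FPGA design may use a different matching criterion.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match SIFT descriptors between two images using Lowe's ratio test.

    desc_a: (Na, 128) array of descriptors from image A.
    desc_b: (Nb, 128) array of descriptors from image B.
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in image B
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest = np.argsort(dists)[:2]
        # Accept only if the best match is clearly better than the second best
        if dists[nearest[0]] < ratio * dists[nearest[1]]:
            matches.append((i, int(nearest[0])))
    return matches
```

The ratio test rejects ambiguous matches whose two nearest candidates are nearly equidistant, which is important when many subimages contribute descriptors.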
Owing to the memory limit of the FPGA, our system can perform feature matching only on two 256*256 images. To obtain more matching points, and thus a better reconstruction after triangulation, we divide an N*N image into (N/256)*(N/256) subimages of size 256*256 and perform feature matching on all subimages.
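The subimage decomposition above can be sketched as follows. This is an illustrative software sketch assuming, as in the thesis, that N is a multiple of 256 and the tiles are non-overlapping; the function and variable names are hypothetical.

```python
import numpy as np

TILE = 256  # matching-window size imposed by the FPGA's on-chip memory

def split_into_tiles(img, tile=TILE):
    """Split an N*N image into (N/tile)*(N/tile) non-overlapping subimages.

    Returns a list of ((row_offset, col_offset), subimage) pairs; the offsets
    allow matched feature coordinates found inside a subimage to be mapped
    back into full-image coordinates.
    """
    n = img.shape[0]
    tiles = []
    for r in range(0, n, tile):
        for c in range(0, n, tile):
            tiles.append(((r, c), img[r:r + tile, c:c + tile]))
    return tiles
```

Each corresponding pair of subimages would then be streamed to the FPGA for matching, and the stored offsets added back to the returned feature coordinates before triangulation.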
The second module is the triangulation module. We first obtain the intrinsic matrix of our camera via camera calibration, then transmit the coordinates from the first module to the FPGA via RS232, and compute the point cloud with the triangulation algorithm using the intrinsic matrix. Finally, the point cloud is transmitted back to the PC via RS232 and displayed on the screen. The experimental results show that we successfully reconstructed the 3D point cloud from two images. The accuracy of the reconstructed 3D point cloud was demonstrated through visualization, confirming that an FPGA can indeed achieve accurate 3D reconstruction.
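As a reference for the triangulation step, the standard linear (DLT) two-view triangulation with a known intrinsic matrix can be sketched as below. This is a generic software sketch, not the thesis's fixed-point FPGA datapath; it additionally assumes the relative pose (R, t) between the two views is known, which in a full SfM pipeline would come from the matched points (e.g. via the essential matrix).

```python
import numpy as np

def triangulate_point(K, R, t, x1, x2):
    """Linearly triangulate one 3D point from a matched pixel pair.

    K: 3x3 intrinsic matrix (from camera calibration).
    R, t: rotation and translation of camera 2 relative to camera 1.
    x1, x2: (u, v) pixel coordinates of the match in each view.
    Solves A X = 0 (direct linear transform) via SVD.
    """
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # camera 1 at origin
    P2 = K @ np.hstack([R, t.reshape(3, 1)])
    # Each view contributes two linear constraints on the homogeneous point X
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize to a 3D point
```

Running this over every matched coordinate pair yields the point cloud that is sent back to the PC for display.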
Abstract (Chinese)
Abstract
Acknowledgment (Chinese)
Acknowledgment
Contents
List of Figures
Chapter 1 Introduction
1.1 3D Point Cloud Reconstruction
1.2 SIFT and Feature Matching
1.3 Structure from Motion
1.4 Camera Calibration
1.5 Triangulation
1.6 Field Programmable Gate Array (FPGA)
1.7 Thesis Outline
Chapter 2 Background Related to the Proposed System
2.1 SIFT and Feature Matching
2.1.1 SIFT
2.1.2 Feature Matching
2.2 Epipolar Geometry and Triangulation
2.2.1 Epipolar Geometry
2.2.2 Triangulation
2.3 Hardware-based Approaches for 3D Reconstruction
Chapter 3 Our FPGA Implementation of 3D Point Cloud Reconstruction
3.1 The Feature Matching Module
3.1.1 DoG Module
3.1.2 SIFT Detection Module
3.1.3 SIFT Descriptor Module
3.1.4 Feature Matching Module
3.1.5 UART Receiver Module
3.1.6 SRAM Controller Module
3.1.7 RAM Module
3.1.8 FIFO Controller Module
3.1.9 UART Transmitter
3.2 The Triangulation Module
3.2.1 Triangulation
3.2.2 UART Receiver Module
3.2.3 FIFO Controller
3.2.4 The Second FIFO Controller
3.2.5 UART Receiver
3.2.6 Finite State Machine of Triangulation Module
Chapter 4 Experimental Results
4.1 Overview
4.2 Experimental Setup
4.3 SIFT Feature Point Detection
4.4 SIFT Feature Matching
4.5 Camera Calibration
4.6 The Result of Triangulation
Chapter 5 Conclusion and Future Work
References
Electronic full text: available online from 2028-08-24.