National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Detailed Record

Student: 林國朝
Student (English): Guo-Chao Lin
Title: 一個使用點雲優化與帕松重建之基於 SLAM的建構高完整度室內場景方法
Title (English): A SLAM Based High-Integrity Indoors Scene Reconstruction Method by Using Point Cloud Optimization and Poisson Reconstruction
Advisor: 張厥煒
Oral examination committee: 奚正寧, 楊士萱, 張厥煒
Oral defense date: 2017-07-14
Degree: Master's
Institution: National Taipei University of Technology (國立臺北科技大學)
Department: Graduate Institute of Computer Science and Information Engineering (資訊工程系研究所)
Discipline: Engineering
Field of study: Electrical and Computer Engineering
Thesis type: Academic thesis
Graduation academic year: 105 (2016-2017)
Language: Chinese
Keywords (Chinese): 深度攝影機, 表面重建, 特徵比對, 運動恢復結構, 同步定位與地圖建構
Keywords (English): Depth Camera, Surface Reconstruction, ORB, SFM, SLAM
Statistics:
  • Cited by: 3
  • Views: 668
  • Downloads: 198
  • Bookmarked: 0
Abstract: With the development of innovative fields such as Mixed Reality (MR), many devices require the support of Simultaneous Localization and Mapping (SLAM) technology to localize themselves while in motion and, at the same time, record scene information as a map. Only when the structure of the scene surface is also recovered can a device perceive the surrounding space and combine the real with the virtual. This thesis designs a method that implements SLAM from color and depth images and builds the map in the point cloud data format. Outlier removal and smoothing of the point cloud, combined with the characteristics of Poisson reconstruction, resolve noise in the map and uneven point cloud density, and allow a scene model with a high-integrity surface to be reconstructed even where map information is missing. The resulting complete scene structure model facilitates subsequent related development, is compatible with a variety of depth cameras, and balances speed and precision.
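The outlier-removal step described above is, in spirit, a statistical filter over nearest-neighbor distances (the thesis works with the Point Cloud Library in C++; the NumPy sketch below is only illustrative, and the function name and parameter values are ours, not from the thesis):

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest neighbors
    exceeds the global mean by more than std_ratio standard deviations
    (a PCL-style statistical outlier filter)."""
    # Full pairwise distance matrix; fine for small clouds,
    # a KD-tree would be used at realistic scales.
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    # Mean distance to the k nearest neighbors; column 0 of the
    # sorted matrix is the point itself (distance 0), so skip it.
    knn = np.sort(dists, axis=1)[:, 1:k + 1]
    mean_d = knn.mean(axis=1)
    threshold = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d <= threshold]

# A dense cluster of 50 points plus one far-away outlier.
rng = np.random.default_rng(0)
cloud = rng.normal(0.0, 0.01, size=(50, 3))
cloud = np.vstack([cloud, [5.0, 5.0, 5.0]])
filtered = remove_statistical_outliers(cloud, k=8, std_ratio=1.0)
print(len(filtered))  # 50: the isolated point is removed
```

The same idea, with the threshold controlled by a neighbor count and a standard-deviation multiplier, is what keeps isolated sensor noise out of the accumulated map.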

This thesis uses the first-generation Kinect (Kinect for Windows V1) to capture the color and depth images in the stream and rectifies them to the same size. ORB (Oriented FAST and Rotated BRIEF) feature detection combined with RANSAC (RANdom SAmple Consensus) yields higher-quality feature points, which are then used in Perspective-n-Point (PnP) to obtain the camera pose. The color and depth images are converted into point clouds, transformed according to the camera pose, and joined, incrementally producing a complete point cloud map of the scene. The point cloud map is denoised and smoothed, and finally Poisson surface reconstruction is run to output a scene model with a high-integrity surface structure.
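The conversion from a depth image to a point cloud uses the standard pinhole camera model: a pixel (u, v) with depth Z back-projects to X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy. A minimal NumPy sketch of that step and of the subsequent pose transform, using made-up intrinsics rather than an actual Kinect V1 calibration:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into camera-frame
    3-D points using the pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

def transform(points, R, t):
    """Move camera-frame points into the world frame with pose (R, t)."""
    return points @ R.T + t

# Toy 2x2 depth image with one invalid pixel; illustrative intrinsics.
depth = np.array([[1.0, 1.0],
                  [0.0, 2.0]])
pts = depth_to_point_cloud(depth, fx=100.0, fy=100.0, cx=1.0, cy=1.0)
world = transform(pts, np.eye(3), np.array([0.0, 0.0, 0.5]))
print(world.shape)  # (3, 3): the zero-depth pixel was dropped
```

Applying `transform` with each frame's estimated pose before joining is what aligns successive per-frame clouds into one consistent map.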
Abstract (English): With the development of innovative areas such as Mixed Reality (MR), many devices need the support of SLAM (Simultaneous Localization and Mapping) technology in order to localize their position in motion and record the scene information as a map. From this map the structure of the scene surface can be obtained; with the spatial awareness that the surface provides, the purpose of combining reality and the virtual can be achieved. This paper designs a set of methods that implement SLAM based on color and depth images and generate a point cloud map. Removing outliers from the point cloud and smoothing it, combined with the characteristics of Poisson reconstruction, solves the noise in the map as well as the problem of uneven point cloud density. Our system can rebuild a high-integrity indoor scene model even when map information is lacking. The complete scene structure model can support follow-up related development; the system is also compatible with a variety of depth cameras and takes care of speed and precision at the same time.
This paper uses Kinect for Windows V1 to obtain the color and depth images in the stream and adjusts them to the same size. Using ORB (Oriented FAST and Rotated BRIEF) feature detection with RANSAC (RANdom SAmple Consensus), higher-quality feature points are obtained. We then use PnP (Perspective-n-Point) with these feature points to obtain the camera pose. The system converts the color and depth images into a point cloud, which is adjusted by the camera pose; by joining the point clouds, a complete point cloud map is built up incrementally. After removing outliers from and smoothing the point cloud map, Poisson surface reconstruction is run, and the output is a model of the scene with a high-integrity surface structure.
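RANSAC's role in the pipeline is to reject bad feature matches before pose estimation: repeatedly fit a model to a minimal random sample and keep the hypothesis with the most inliers. The self-contained sketch below applies the same loop to 3-D correspondences with a rigid-transform model, as a stand-in for the ORB/PnP stage (which would require OpenCV); all names and thresholds here are illustrative, not from the thesis:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation and translation mapping src onto dst
    (Kabsch algorithm via SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def ransac_rigid(src, dst, iters=200, thresh=0.05, seed=0):
    """Classic RANSAC loop: fit to random minimal samples, keep the
    hypothesis with the most inliers, then refit on those inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = rigid_transform(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    R, t = rigid_transform(src[best], dst[best])
    return R, t, best

# 20 true correspondences under a known pose, plus 5 corrupted matches.
rng = np.random.default_rng(1)
src = rng.uniform(-1, 1, size=(25, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.3])
dst = src @ R_true.T + t_true
dst[20:] += rng.uniform(0.5, 1.0, size=(5, 3))  # simulate bad matches
R, t, inliers = ransac_rigid(src, dst)
print(inliers[:20].all(), inliers[20:].sum())  # True 0
```

The recovered pose comes from the final refit over all inliers, so the five corrupted correspondences have no influence on it, which is exactly why RANSAC is applied before the pose solve.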
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Research Motivation
1.2 Research Objectives
1.3 Thesis Organization
Chapter 2 Related Techniques and Literature Review
2.1 SLAM-Related Literature
2.2 Depth Information Acquisition
2.3 Image Feature Point Matching
2.4 Camera Pose Estimation
2.5 Loop Closure Detection
2.6 Point Cloud Data Format
2.7 Surface Reconstruction
2.7.1 Greedy Surface Triangulation
2.7.2 Marching Cubes
2.7.3 Poisson Surface Reconstruction
Chapter 3 System Architecture and Workflow
3.1 System Overview
3.2 System Architecture
3.3 System Workflow
Chapter 4 Feature Point Matching and Camera Pose Estimation
4.1 Color and Depth Image Adjustment
4.2 Feature Point Matching
4.3 Camera Pose
4.4 Keyframe Extraction
Chapter 5 Trajectory Optimization and Loop Closure Detection
5.1 Loop Closure Detection
5.2 Trajectory Optimization
Chapter 6 Point Cloud Map and Surface Reconstruction
6.1 Point Cloud Stitching
6.2 Point Cloud Optimization
6.3 Surface Reconstruction
Chapter 7 Experimental Results and Analysis
7.1 Experimental and System Environment
7.2 Camera Localization Experiments
7.2.1 Camera Adjustment
7.2.2 Feature Extraction and Matching
7.2.3 Camera Pose Estimation
7.2.4 Keyframe Extraction
7.3 Map Construction and Surface Reconstruction Experiments
7.3.1 Loop Closure Detection
7.3.2 Point Cloud Denoising
7.3.3 Poisson Reconstruction
Chapter 8 Conclusion and Future Work
8.1 Conclusion
8.2 Future Work
References