National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 吳姿璇 (Tzy-shyuan Wu)
Title: Integration of RGB-D Sensor and Digital Single-Lens Reflex Camera for Indoor Point Cloud Model Generation (整合RGB-D感測器與單眼數位相機的室內環境點雲模型重建)
Advisor: 蔡富安
Degree: Master's
Institution: National Central University (國立中央大學)
Department: Department of Civil Engineering
Discipline: Engineering
Field: Civil Engineering
Document type: Academic thesis
Year of publication: 2015
Academic year of graduation: 103 (2014–2015)
Language: Chinese
Pages: 114
Keywords (Chinese): 點雲模型; Kinect; 運動探知結構; 隨機抽樣一致; 三維相似轉換
Keywords (English): Point cloud model; Kinect; Structure from Motion; RANSAC; 3D similarity transformation
Record statistics:
  • Cited by: 2
  • Views: 598
  • Downloads: 104
  • Bookmarked: 0
Abstract (translated from the Chinese):

Indoor modeling technology has developed rapidly in recent years. In photogrammetry, the traditional mainstream approach extracts and matches features across multiple high-resolution images to construct 3D point clouds and models. However, image-based 3D reconstruction cannot obtain sufficient point cloud data in indoor environments that lack features. Because RGB-D sensors capture a color image together with per-pixel depth, they yield corresponding point cloud data even in featureless regions, and they have therefore become an emerging indoor mapping tool in computer vision. Their drawbacks are a limited acquisition range and low image resolution. Whether indoor scenes are surveyed with images or captured directly with an RGB-D sensor, each approach has its own strengths and weaknesses. This study therefore develops a mapping system and workflow that integrates an RGB-D sensor with a digital single-lens reflex (DSLR) camera, combining the complementary advantages of the two instruments to build a complete indoor 3D point cloud model.

This study uses the Microsoft Kinect as the RGB-D test sensor. The overall procedure has three main parts: (1) Structure from Motion (SfM) reconstructs the camera positions and parameters at exposure time from the captured color images, and the high-resolution imagery from the DSLR camera improves the accuracy of the image intersection solution; (2) a software package based on Clustering Views for Multi-view Stereo (CMVS) reconstructs a dense matched point cloud of the scene; (3) based on the feature point coordinates extracted during the solution, the Random Sample Consensus (RANSAC) algorithm screens the feature points, and a 3D similarity transformation registers each Kinect point cloud with the reconstructed dense matching model in a common coordinate system. Experimental results show that the indoor point cloud model built with the proposed integrated workflow retains complete point cloud information even in featureless areas, and that the RANSAC screening step effectively improves the accuracy of the transformation parameters and stabilizes the quality of the final integrated point cloud model.
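The 3D similarity transformation in step (3), with its seven parameters (one scale, three rotation, three translation), admits a closed-form least-squares solution once conjugate points are paired. The sketch below is illustrative only: the function name `estimate_similarity` and the Umeyama-style SVD solution are assumptions of this example, not necessarily the implementation used in the thesis.

```python
import numpy as np

def estimate_similarity(src, dst):
    """Closed-form fit of scale s, rotation R, translation t so that
    dst[i] ≈ s * R @ src[i] + t (Umeyama-style least squares)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d               # centered point sets
    # Cross-covariance matrix; its SVD yields the optimal rotation.
    H = sc.T @ dc / len(src)
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) < 0:             # guard against a reflection
        D[2, 2] = -1.0
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / sc.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Three or more non-collinear point pairs determine the seven parameters; additional pairs are reconciled in the least-squares sense.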

Abstract (English):

Three-dimensional (3D) modeling of indoor environments has developed rapidly in recent years. In photogrammetry, one traditional mainstream solution for indoor mapping and modeling is to create a 3D point cloud model from multiple images. The major drawback of image-based approaches, however, is the scarcity of points extracted in featureless areas. RGB-D sensors, which capture both RGB images and per-pixel depth information, have recently become a popular indoor mapping tool in computer vision. Their shortcomings are low image resolution and limited range. Indoor mapping based on images and on RGB-D data thus each has its own strengths and limitations. This research therefore develops an indoor mapping procedure that combines the two devices so that each compensates for the other's weaknesses, producing a uniformly distributed point cloud of indoor environments.

This study uses the Microsoft Kinect as the RGB-D sensor in the experiments. The proposed procedure has three main steps: (1) Structure from Motion (SfM) reconstructs the camera positions and parameters from multiple color images, and the high-resolution images captured by a DSLR camera provide more accurate ray intersection conditions; (2) software based on Clustering Views for Multi-view Stereo (CMVS) constructs a dense matched point cloud; (3) from the feature points extracted during SfM reconstruction, Random Sample Consensus (RANSAC) selects reliable correspondences, and a 3D similarity transformation transfers the Kinect point clouds into the same coordinate system as the dense matched point cloud. Experimental results demonstrate that the proposed procedure can generate dense, fully colored point clouds of indoor environments even in featureless places. In addition, the feature point selection improves the accuracy of the estimated transformation parameters and ensures the quality of the final point cloud model.
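The RANSAC screening in step (3) can be sketched as follows: repeatedly fit a candidate similarity transformation to a minimal sample of three correspondences, score it by how many correspondences it explains within a distance threshold, and refit on the largest consensus set. The helper names (`fit_similarity`, `ransac_similarity`), iteration count, and threshold below are assumptions for illustration, not the thesis's actual settings.

```python
import numpy as np

def fit_similarity(src, dst):
    """Closed-form 3D similarity fit: dst[i] ≈ s * R @ src[i] + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(sc.T @ dc / len(src))
    D = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) < 0:
        D[2, 2] = -1.0                            # keep R a proper rotation
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / sc.var(axis=0).sum()
    return s, R, mu_d - s * R @ mu_s

def ransac_similarity(src, dst, n_iter=200, tol=0.05, seed=0):
    """Screen correspondences with RANSAC, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)   # minimal sample
        s, R, t = fit_similarity(src[idx], dst[idx])
        resid = np.linalg.norm(dst - (s * src @ R.T + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best.sum():
            best = inliers
    # Final least-squares fit on the consensus set only.
    return fit_similarity(src[best], dst[best]), best
```

The screening matters because a single mismatched feature pair can badly bias a least-squares similarity fit; discarding gross outliers before the final fit stabilizes the registration, which matches the effect the abstract reports.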

Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
1-1  Background
1-2  Motivation and Objectives
1-3  Thesis Organization
Chapter 2  Literature Review
2-1  Image-Based 3D Modeling
2-1-1  Feature Extraction and Matching
2-1-2  Reconstructing 3D Structure and Camera Geometry
2-1-3  3D Point Cloud Reconstruction: Multi-View Stereo
2-2  3D Data Acquisition and Indoor Modeling
2-2-1  RGB-D Sensors
2-2-2  RGB-D Indoor Modeling
Chapter 3  Methodology
3-1  Overview of the Method
3-2  Data Acquisition and Preprocessing
3-2-1  Microsoft Kinect
3-2-2  Digital Single-Lens Reflex Camera
3-3  Image Reconstruction: VisualSFM
3-4  Point Cloud Integration
3-4-1  3D Similarity Transformation and Workflow
3-4-2  Feature Point Screening
3-5  Point Cloud Model Visualization
Chapter 4  Experimental Results and Analysis
4-1  Experiment Overview
4-1-1  Experimental Environment
4-1-2  Experimental Setup
4-2  Quality Assessment of Kinect Data
4-2-1  Depth Stability Assessment
4-2-2  Plane Fitting Assessment
4-3  Experimental Results
4-3-1  Point Cloud Model Results
4-3-2  Point Cloud Accuracy Assessment
4-3-3  Analysis of Results
Chapter 5  Conclusions and Recommendations
References

