Author: Kuo, Syuan-Wei (郭宣瑋)
Title: Orientation Modelling for RGB-D Images using Angle and Distance Combined Adjustment (利用角度與距離聯合平差於RGB-D影像之方位重建)
Advisor: Teo, Tee-Ann (張智安)
Committee members: Shih, Peter Tian-Yuan; Wang, Sendo; Huang, Chih-Yuan
Oral defense date: 2018-07-26
Degree: Master
Institution: National Chiao Tung University (國立交通大學)
Department: Department of Civil Engineering
Discipline: Engineering
Academic field: Civil Engineering
Document type: Academic thesis
Year of publication: 2018
Graduation academic year: 106
Language: English
Number of pages: 51
Keywords (Chinese): orientation reconstruction; angle and distance adjustment; 3D dense point cloud registration
Keywords (English): Microsoft Kinect V2; orientation modeling; distance and angle adjustment; 3D dense clouds registration
An RGB-D camera is a sensor that captures colour images and depth information simultaneously, and in recent years it has been widely applied to indoor mapping and pattern recognition. Indoor mapping requires correct orientation parameters, yet indoor environments contain many homogeneous regions in which feature points are hard to detect. Because an RGB-D camera provides both angle and range information, control points can be generated automatically as constraints for orientation modelling.
This study exploits the characteristics of RGB-D data to build scaled relative orientation, registers sequential point clouds, and evaluates the resulting accuracy. The workflow comprises three parts. First, the intrinsic parameters of the camera's two sensors are calibrated and the depth images are range-corrected. Second, orientation modelling is performed with four methods: triangulation using only angle data, trilateration using only range data, a combination of triangulation and trilateration (combine-1), and orientation modelling with a scale constraint (combine-2). Third, the relative orientations solved by orientation modelling are used to re-project the point clouds as initial values for the ICP algorithm, which then registers the sequential point clouds.
The experimental results show that image distortion and depth correction of the camera's internal sensors must be applied to the captured data. To evaluate the orientation modelling methods, the number, depth, and distribution of control points were first simulated to understand their influence on real data. The relative orientations between cameras estimated by orientation modelling were then used to register point clouds from sequential stations. Among the exterior orientation parameters solved by triangulation, combine-1, and combine-2, the standard deviations of the position parameters are 14.108, 0.677, and 0.595 mm respectively, and those of the rotation angles are 0.005, 0.007, and 0.001 rad; combine-2 therefore achieves the best precision. The point-to-point distances between sequential point clouds registered by the ICP algorithm are stable and below 11.3 mm, about 1.5 times the ranging precision.
RGB-D cameras, which capture an RGB image and a per-pixel depth image simultaneously, are widely applied in indoor mapping and pattern recognition. Acquiring correct orientations is important for indoor mapping, yet feature points are hard to detect in homogeneous areas. Taking advantage of RGB-D data, whose observations contain both angle and range information, control points can be constructed to constrain the orientation modeling.
This thesis proposes a novel orientation modeling method for sequential point cloud registration. The main process comprises three parts. First, the intrinsic parameters of the two sensors and the depth distortion are calibrated. Second, four orientation modeling methods are applied: triangulation, which optimizes only the angle information via collinearity equations; trilateration, which optimizes only the ranges; a combination of triangulation and trilateration (called combine-1 in this study); and a scale-fixed adjustment (called combine-2) with rigid constraints on every ray. Finally, the iterative closest point (ICP) algorithm registers the transformed sequential point clouds.
The experimental results show that the image distortion and depth distortion of the RGB-D sensors need to be corrected in data preprocessing. To evaluate the different orientation modeling methods, we first simulated the number of control points, the variation of their depths, and their distribution on the images. The sequential point clouds were then registered using the RGB and depth information. The standard deviations of the camera positions from triangulation, combine-1, and combine-2 are 14.108, 0.677, and 0.595 mm respectively, while the standard deviations of the camera rotation angles are 0.005, 0.007, and 0.001 rad. The combined adjustments thus show better precision than the triangulation method. The point-to-point distances of the point cloud pairs computed by the ICP algorithm are better than 11.3 mm, about 1.5 times the ranging precision (3.5 mm).
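The trilateration idea above can be sketched as a small range-only resection: given control points and observed ranges, solve for the camera position by Gauss-Newton least squares. This is an illustrative sketch only, not the thesis's implementation; the function name `trilaterate` and the use of NumPy's `lstsq` are assumptions.

```python
import numpy as np

def trilaterate(points, ranges, x0, iters=20):
    """Estimate a camera position from ranges to known control points
    by iterative (Gauss-Newton) least squares.

    points : (n, 3) control-point coordinates
    ranges : (n,)   observed ranges to each control point
    x0     : (3,)   initial guess for the camera position
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - np.asarray(points, dtype=float)  # camera-to-point offsets
        d = np.linalg.norm(diff, axis=1)            # predicted ranges
        J = diff / d[:, None]                       # Jacobian d(range)/d(position)
        r = np.asarray(ranges, float) - d           # range residuals
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)  # least-squares correction
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:              # converged
            break
    return x
```

With four or more well-distributed control points the normal equations are well conditioned and the iteration converges in a few steps; the combined methods in the thesis additionally mix such range residuals with collinearity (angle) residuals in one adjustment.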
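The point-to-point ICP registration used in the final step can be sketched as alternating nearest-neighbour matching with a closed-form rigid-transform solve (SVD/Kabsch). The brute-force matcher and function names here are assumptions for illustration; a production pipeline would use a k-d tree and outlier rejection.

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping points A onto B (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)              # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=50):
    """Point-to-point ICP: match each src point to its nearest dst point,
    solve the rigid transform for the matched pairs, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour (fine for small clouds)
        idx = np.argmin(((cur[:, None] - dst[None]) ** 2).sum(-1), axis=1)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur
```

Because ICP only converges locally, the orientations estimated by the adjustment methods serve as the initial transform, which is exactly why the thesis re-projects the point clouds before running ICP.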
Table of Contents iv
List of Tables vi
List of Figures vii
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation 1
1.3 Research Objectives 1
1.4 Thesis Structure 2
Chapter 2 Literature Review 3
2.1 Development and Application of RGB-D Cameras 3
2.2 RGB-D Camera Calibration 4
2.3 RGB-D Camera Orientation Models 5
2.4 Point Clouds Registration 6
Chapter 3 Specification of Microsoft Kinect V2 7
3.1 Characterization of Microsoft Kinect V2 7
3.2 Kinect V2 Depth Acquisition – ToF 8
3.3 Coordinate Frames 10
3.3.1 Camera Frame 10
3.3.2 Image Frame 11
Chapter 4 Methodology 13
4.1 Workflows 13
4.2 Pre-processing: Camera Calibration 14
4.2.1 RGB and IR Image Distortion 14
4.2.2 Depth Distortion 15
4.2.3 RGB and Depth Camera Registration 17
4.3 Orientation Estimation 18
4.3.1 Triangulation 19
4.3.2 Trilateration 21
4.3.3 Combined Triangulation and Trilateration 22
4.3.4 Scale Fixed Adjustment 23
4.4 Point Clouds Registration 26
Chapter 5 Experiments and Analysis 27
5.1 Camera Calibration 27
5.1.1 Intrinsic Parameters of RGB and IR Sensors 27
5.1.2 Depth Correction 29
5.1.3 RGB Image and Depth Data Registration 31
5.2 Simulation Experiment 32
5.2.1 Simulation Setting 32
5.2.2 Comparison and Evaluation 32
5.3 Multi-view Point Clouds Registration 36
5.3.1 Experimental Data 36
5.3.2 Sequential Point Clouds Registration 37
5.3.3 Evaluation and Analysis 38
5.3.4 Improvement by ICP Registration 44
Chapter 6 Conclusions and Future Works 46
6.1 Conclusions 46
6.2 Suggestions 48
6.3 Future Works 48
Bibliography 49
Curriculum Vitae 51
Besl, P.J., McKay, N.D., 1992. Method for registration of 3-D shapes. Sensor Fusion IV: Control Paradigms and Data Structures. International Society for Optics and Photonics, 586-607.
Boukerche, A., Oliveira, H.A., Nakamura, E.F., Loureiro, A.A., 2007. A Voronoi approach for scalable and robust DV-Hop localization system for sensor networks, Computer Communications and Networks, 2007. ICCCN 2007. Proceedings of 16th International Conference on. IEEE, 497-502.
Butkiewicz, T., 2014. Low-cost coastal mapping using Kinect v2 time-of-flight cameras, 2014 Oceans-St. John's. IEEE, 1-9.
Chen, C., Yang, B., Song, S., 2016. Low cost and efficient 3D indoor mapping using multiple consumer RGB-D cameras. International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences 41, 169-174.
Chen, C., Yang, B., Song, S., Tian, M., Li, J., Dai, W., Fang, L., 2018. Calibrate Multiple Consumer RGB-D Cameras for Low-Cost and Efficient 3D Indoor Mapping. Remote Sensing 10, 328.
Chow, J.C., Lichti, D.D., 2013. Photogrammetric bundle adjustment with self-calibration of the PrimeSense 3D camera technology: Microsoft Kinect. IEEE Access 1, 465-474.
Corti, A., Giancola, S., Mainetti, G., Sala, R., 2016. A metrological characterization of the Kinect V2 time-of-flight camera. Robotics and Autonomous Systems 75, 584-594.
Di, K., Zhao, Q., Wan, W., Wang, Y., Gao, Y., 2016. RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information. Sensors 16, 1285.
Dryanovski, I., Valenti, R.G., Xiao, J., 2013. Fast visual odometry and mapping from RGB-D data, Robotics and Automation (ICRA), 2013 IEEE International Conference on. IEEE, 2305-2310.
Endres, F., Hess, J., Engelhard, N., Sturm, J., Cremers, D., Burgard, W., 2012. An evaluation of the RGB-D SLAM system, Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 1691-1696.
Fankhauser, P., Bloesch, M., Rodriguez, D., Kaestner, R., Hutter, M., Siegwart, R., 2015. Kinect v2 for mobile robot navigation: Evaluation and modeling, Advanced Robotics (ICAR), 2015 International Conference on. IEEE, 388-394.
Fraser, C.S., 1997. Digital camera self-calibration. ISPRS Journal of Photogrammetry and Remote Sensing 52, 149-159.
Fürsattel, P., Placht, S., Balda, M., Schaller, C., Hofmann, H., Maier, A., Riess, C., 2016. A comparative error analysis of current time-of-flight sensors. IEEE Transactions on Computational Imaging 2, 27-41.
He, G., Novak, K., Feng, W., 1993. Stereo camera system calibration with relative orientation constraints, Applications in Optical Science and Engineering. International Society for Optics and Photonics, 2-8.
Henry, P., Krainin, M., Herbst, E., Ren, X., Fox, D., 2014. RGB-D mapping: Using depth cameras for dense 3D modeling of indoor environments, Experimental robotics. Springer, 477-491.
Horaud, R., Hansard, M., Evangelidis, G., Ménier, C., 2016. An overview of depth cameras and range scanners based on time-of-flight technologies. Machine vision and applications 27, 1005-1020.
Hu, G., Huang, S., Zhao, L., Alempijevic, A., Dissanayake, G., 2012. A robust RGB-D SLAM algorithm, Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE, 1714-1719.
Jóźków, G., Toth, C., Koppanyi, Z., Grejner-Brzezinska, D., 2014. Combined Matching of 2D and 3D Kinect™ Data to support Indoor Mapping and Navigation, Proceedings of Annual Conference of American Society for Photogrammetry and Remote Sensing.
Jung, J., Lee, J.-Y., Jeong, Y., Kweon, I.S., 2015. Time-of-flight sensor calibration for a color and depth camera pair. IEEE transactions on pattern analysis and machine intelligence 37, 1501-1513.
Lachat, E., Macher, H., Mittet, M., Landes, T., Grussenmeyer, P., 2015. First experiences with Kinect v2 sensor for close range 3D modeling. The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 40, 93.
Lichti, D.D., Kim, C., 2011. A comparison of three geometric self-calibration methods for range cameras. Remote Sensing 3, 1014-1028.
Litomisky, K., 2012. Consumer RGB-D cameras and their applications. Rapport technique, University of California, 20.
Pagliari, D., Pinto, L., 2015. Calibration of Kinect for Xbox One and comparison between the two generations of Microsoft sensors. Sensors 15, 27569-27589.
Peasley, B., 2013. Large scale 3D mapping of indoor environments using a handheld rgbd camera, Clemson University, 146.
Piatti, D., Remondino, F., Stoppa, D., 2013. State-of-the-art of TOF range-imaging sensors, TOF Range-Imaging Cameras. Springer, 1-9.
Sell, J., Patrick, O., 2014. The Xbox One system on a chip and Kinect sensor. IEEE Micro 34, 44-53.
Sturm, J., Engelhard, N., Endres, F., Burgard, W., Cremers, D., 2012. A benchmark for the evaluation of RGB-D SLAM systems, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 573-580.
Terven, J.R., Córdova-Esparza, D.M., 2016. Kin2: A Kinect 2 toolbox for MATLAB. Science of Computer Programming 130, 97-106.
Wen, C., Qin, L., Zhu, Q., Wang, C., Li, J., 2014. Three-dimensional indoor mobile mapping with fusion of two-dimensional laser scanner and RGB-D camera data. IEEE Geoscience and Remote Sensing Letters 11, 843-847.
Yang, L., Zhang, L., Dong, H., Alelaiwi, A., El Saddik, A., 2015. Evaluating and improving the depth accuracy of Kinect for Windows V2. IEEE Sensors Journal 15, 4275-4285.
Zennaro, S., 2014. Evaluation of Microsoft Kinect 360 and Microsoft Kinect One for robotics and computer vision applications, Università degli Studi di Padova, Padova, Italy, 74.