National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 楊博旭
Author (English): Bo-Hsu Yang
Title (Chinese): 利用立體相機進行基於粒子濾波器的連續自我校正
Title (English): Continuous Self-Calibration of Stereo Camera Based on Particle Filter Framework
Advisor: 連豊力
Committee Members: 簡忠漢, 李後燦
Oral Defense Date: 2015-07-31
Degree: Master's
Institution: National Taiwan University
Department: Graduate Institute of Electrical Engineering
Discipline: Engineering
Academic Field: Electrical Engineering and Computer Science
Thesis Type: Academic thesis
Year of Publication: 2015
Graduation Academic Year: 103 (2014-2015)
Language: English
Number of Pages: 181
Keywords (Chinese): 立體攝影機, 連續自動校正, 三維環境重建, 資料點疊合, 形狀疊合, 距離轉換, 粒子濾波器
Keywords (English): Stereo camera, continuous self-calibration, 3D environment reconstruction, point alignment, shape alignment, distance transform, particle filtering
Usage statistics:
  • Cited: 0
  • Views: 119
  • Rating:
  • Downloads: 0
  • Bookmarked: 0
The endoscope has become a widely used sensor in medical practice. Since the operator can observe the progress of a procedure only through the endoscope lens, an endoscope that provides complete three-dimensional information gives the surgeon a better grasp of the internal scene. Obtaining such three-dimensional information requires optical three-dimensional reconstruction techniques, and among the feasible options, stereo vision is currently the most widely used and most mature technique in surgical applications. In general, reconstructing environment information with stereo vision consists of two steps. First, the images from the left and right cameras are rectified; a stereo matching method then produces depth information, which is projected into three-dimensional data points. Once the three-dimensional points are generated, a point alignment method registers points captured from different viewing angles, which enlarges the field of view and aids scene understanding. The accuracy of the camera parameters determines the accuracy of the reconstructed points; however, these parameters may change during operation, so continuously calibrating them is an important issue.
This thesis proposes a continuous self-calibration algorithm based on a particle filter. It begins with an initial set of camera parameters, which may come from a previous calibration or from the sensor's specification sheet. During calibration, an image pair is read in at each time step, feature points are extracted from the left and right images and matched into pairs, and the geometric relations of these matched features are used to weight the particles. By continually updating the state of the particle filter, camera parameters that satisfy the geometric constraints are retained, so the parameters are continually corrected. Once the camera parameters are calibrated, accurate three-dimensional data points can be generated, and a point alignment method then registers the points from different time steps. We propose a two-step point alignment method: first, a distance transform converts the target points into a distance map, and a particle swarm optimization algorithm searches for an initial alignment transformation; using this transformation, outlying points are removed, and the initial transformation is then iteratively refined to obtain the final alignment. This two-step algorithm can align point sets successfully even without a good initial estimate, and it remains robust in the presence of noise and outliers. Experimental results show that the proposed calibration algorithm continually refines the parameters and yields better point reconstruction, while the proposed point alignment algorithm registers three-dimensional points from different time steps, producing a larger visible range.
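As an illustration of the two-step alignment idea in the abstract above, the following minimal Python sketch scores candidate rigid transforms against a distance map of the target cloud, removes far-away source points, and then refines the best transform locally. It is not the thesis implementation: plain random sampling stands in here for particle swarm optimization, and the voxel size, search ranges, and thresholds are illustrative assumptions.

import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.spatial.transform import Rotation

def build_distance_map(target, voxel, pad=10):
    """Rasterise the target cloud and return its Euclidean distance map and grid origin."""
    origin = target.min(axis=0) - pad * voxel
    idx = np.floor((target - origin) / voxel).astype(int)
    occ = np.ones(idx.max(axis=0) + pad, dtype=bool)
    occ[tuple(idx.T)] = False                      # False marks occupied voxels
    return distance_transform_edt(occ) * voxel, origin

def alignment_cost(points, dmap, origin, voxel):
    """Mean distance-map value at the (clipped) voxels of the transformed points."""
    idx = np.clip(np.floor((points - origin) / voxel).astype(int),
                  0, np.array(dmap.shape) - 1)
    return dmap[tuple(idx.T)].mean()

def coarse_then_refine(source, target, voxel=0.05, n_coarse=2000, n_refine=500, seed=0):
    rng = np.random.default_rng(seed)
    dmap, origin = build_distance_map(target, voxel)

    # Step 1: randomized coarse search over rigid transforms (stand-in for PSO).
    best = (np.inf, np.eye(3), np.zeros(3))
    for _ in range(n_coarse):
        R = Rotation.random(random_state=rng).as_matrix()
        t = rng.uniform(-0.5, 0.5, size=3)
        c = alignment_cost(source @ R.T + t, dmap, origin, voxel)
        if c < best[0]:
            best = (c, R, t)

    # Step 2: drop source points far from the target, then refine by small perturbations.
    cost, R, t = best
    idx = np.clip(np.floor((source @ R.T + t - origin) / voxel).astype(int),
                  0, np.array(dmap.shape) - 1)
    inliers = source[dmap[tuple(idx.T)] < 3 * voxel]
    for _ in range(n_refine):
        dR = Rotation.from_rotvec(rng.normal(0, 0.02, 3)).as_matrix()
        Rc, tc = dR @ R, dR @ t + rng.normal(0, 0.02, 3)
        c = alignment_cost(inliers @ Rc.T + tc, dmap, origin, voxel)
        if c < cost:
            cost, R, t = c, Rc, tc
    return R, t, cost

Replacing the random search with an actual PSO and the perturbation loop with an ICP-style closest-point refinement would bring the sketch closer to the procedure described in the abstract.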


The endoscopic camera has become a popular sensor in clinical use. Since the operator can observe the surgical scene only through the endoscopic camera, an endoscope that offers three-dimensional information provides better scene understanding. Obtaining three-dimensional information requires optical three-dimensional reconstruction techniques, and among the existing options, stereoscopy is currently the most widely adopted and best-developed technique in clinical practice. In general, three-dimensional reconstruction using stereoscopy can be divided into two steps. First, the image pair obtained at each time step is rectified, and a stereo matching algorithm generates a three-dimensional point set. After the reconstructed data points are available, a point alignment process registers point clouds captured from different viewing angles, forming a larger field of view for better scene understanding. The accuracy of the camera parameters affects the accuracy of the three-dimensional reconstruction. Since the camera parameters may change during the operation, it is crucial to track these parameters continuously.
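As a rough illustration of this generic pipeline (rectification, stereo matching, reprojection), the short OpenCV sketch below produces a point cloud from one image pair. The camera matrices, distortion vectors, relative pose, image size, and file names are placeholders, not values taken from the thesis.

import cv2
import numpy as np

K1 = K2 = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])  # placeholder intrinsics
d1 = d2 = np.zeros(5)                       # assume negligible lens distortion here
R = np.eye(3)                               # placeholder rotation between the two cameras
T = np.array([[-0.06], [0.], [0.]])         # placeholder 6 cm baseline
size = (640, 480)

# 1) Rectify the pair so that corresponding points lie on the same image row.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
map_l = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
map_r = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
left = cv2.remap(cv2.imread("left.png", cv2.IMREAD_GRAYSCALE), *map_l, cv2.INTER_LINEAR)
right = cv2.remap(cv2.imread("right.png", cv2.IMREAD_GRAYSCALE), *map_r, cv2.INTER_LINEAR)

# 2) Dense stereo matching (semi-global matching, cf. [33]) and reprojection to 3D.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0   # SGBM returns fixed-point x16
points = cv2.reprojectImageTo3D(disparity, Q)                     # H x W x 3 coordinates
cloud = points[disparity > 0]                                     # keep only matched pixels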
In this thesis, a continuous self-calibration algorithm based on a particle filter is proposed. It starts with an initial set of camera parameters, obtained from a previous calibration or from the sensor's specification sheet. The proposed algorithm reads in an image pair at each time step; feature points are extracted from the images and matched into pairs. These matched pairs form epipolar constraints, and the particles are weighted according to how well they satisfy these constraints. By constantly updating the state of the particle filter, the parameters that satisfy the epipolar constraints are retained, and the camera parameters are thus tracked. Once the camera parameters are calibrated, an accurate point cloud can be generated, and a point alignment algorithm is then applied to register point clouds captured at different time steps. We propose a two-step point alignment algorithm. First, the target point cloud is described by a distance map obtained via a distance transform, and a randomized optimizer, Particle Swarm Optimization (PSO), finds an initial transformation. Using this initial transformation, outliers are removed, and an iterative process refines the initial estimate. The two-step algorithm can align point sets well even when a good initial guess is not available, and it is robust against noise and outliers. The experimental results demonstrate that the proposed calibration algorithm continually refines the camera parameters and yields better reconstruction, while the proposed point alignment algorithm registers three-dimensional data from different time steps, providing a larger field of view.
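The weighting step of such a filter can be illustrated with a minimal Python sketch in which each particle carries a hypothesis of the relative camera rotation and is weighted by the Sampson (epipolar) error of the matched feature pairs. The reduced state (rotation only, with a fixed, assumed-known baseline), the noise scales, and the Gaussian-style likelihood are simplifying assumptions of the sketch, not the thesis formulation.

import numpy as np
from scipy.spatial.transform import Rotation

def fundamental(K1, K2, rotvec, t):
    """F = K2^-T [t]_x R K1^-1 for the relative pose (R, t) of the right camera."""
    R = Rotation.from_rotvec(rotvec).as_matrix()
    tx = np.array([[0., -t[2], t[1]], [t[2], 0., -t[0]], [-t[1], t[0], 0.]])
    return np.linalg.inv(K2).T @ tx @ R @ np.linalg.inv(K1)

def sampson_error(F, pl, pr):
    """Mean first-order geometric distance of matches (pl, pr) to the epipolar constraint."""
    xl = np.c_[pl, np.ones(len(pl))]                  # homogeneous left points, N x 3
    xr = np.c_[pr, np.ones(len(pr))]
    Fx, Ftx = xl @ F.T, xr @ F
    num = np.sum(xr * Fx, axis=1) ** 2                # (x_r^T F x_l)^2 per match
    den = Fx[:, 0]**2 + Fx[:, 1]**2 + Ftx[:, 0]**2 + Ftx[:, 1]**2
    return np.mean(num / den)

def pf_step(particles, pl, pr, K1, K2, t, rng, sigma=1.0):
    """One propagate / weight / resample cycle using the matches of the current frame."""
    particles = particles + rng.normal(0., 1e-3, particles.shape)   # random-walk propagation
    errs = np.array([sampson_error(fundamental(K1, K2, p, t), pl, pr) for p in particles])
    w = np.exp(-(errs - errs.min()) / (2. * sigma**2))              # Gaussian-style likelihood
    w /= w.sum()
    return particles[rng.choice(len(particles), size=len(particles), p=w)]

# Example use: start particles around a nominal rotation and update once per frame, e.g.
#   particles = rng.normal(0., 0.01, (500, 3))
#   particles = pf_step(particles, pl, pr, K1, K2, t, rng)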


Chinese Abstract (中文摘要) i
ABSTRACT iii
CONTENTS vi
LIST OF FIGURES ix
LIST OF TABLES xii
Chapter 1. Introduction 1
1.1 Motivation 1
1.2 Problem Formulation 3
1.3 Contribution 5
1.4 Organization of the Thesis 6
Chapter 2. Background and Literature Review 7
2.1 Self-calibration 7
2.2 Point Cloud Alignment 11
Chapter 3. Related Algorithms 18
3.1 Epipolar geometry 18
3.2 Random Sample Consensus 23
3.3 Distance Transform 26
3.3.1 Squared Euclidean Distance 27
3.3.2 Absolute linear distance 29
3.3.3 Multi-dimensional distance transform 31
3.3.4 Combination of Distance Function 33
Chapter 4. Continuous Stereo Self-Calibration 36
4.1 Feature processing 37
4.1.1 Feature point extraction 39
4.1.2 Feature point matching 41
4.1.3 Update Feature Information 45
4.2 Monte Carlo Parameter Refinement 47
4.2.1 Particle propagation 48
4.2.2 Likelihood function 50
4.2.3 Particle Resampling 53
4.3 Stereo point cloud alignment 55
4.3.1 Architecture of Point Cloud Alignment System 56
4.3.2 Randomized Point Cloud Alignment 57
4.3.3 Outlier Removal and Iterative Refinement 63
Chapter 5. Experiment Result and Analysis 67
5.1 Experimental Hardware 67
5.2 The Accuracy of Stereo Self-Calibration 71
5.3 Continuous self-calibration 85
5.3.1 Feature processing for continuous self-calibration 86
5.3.2 Distribution of the extrinsic parameter 103
5.3.3 Varying Camera parameter 107
5.4 Point cloud alignment 117
5.4.1 View enhancement 118
5.4.2 Accuracy of alignment 121
5.4.3 Alignment with circular path 138
Chapter 6. Conclusion and Future Work 148
6.1 Conclusion 148
6.2 Future Work 150
Appendix A 153
References 177

[1: Maier-Hein et al. 2013]
L. Maier-Hein, A. Groch, A. Bartoli, S. Bodenstedt, G. Boissonnat, P.-L. Chang, N. T. Clancy, D. S. Elson, S. Haase, E. Heim, J. Hornegger, P. Jannin, H. Kenngott, T. Kilgus, B. Müller-Stich, D. Oladokun, S. Röhl, T. R. dos Santos, H.-P. Schlemmer, A. Seitel, S. Speidel, M. Wagner, and D. Stoyanov, "Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery," Medical Image Analysis, Vol. 17, Issue 8, pp. 974-996, 2013.
[2: Besl et al. 1992]
Paul J. Besl and Neil D. McKay, “A Method for Registration of 3-D Shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239-256, February 1992.
[3: Zhengyou Zhang 1999]
Zhengyou Zhang, “Flexible camera calibration by viewing a plane from unknown orientations,” in Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, vol. 1, pp. 666 - 673, Sep., 1999.
[4: Triggs et al. 2000]
B. Triggs, P. McLauchlan, R. Hartley, and A. Fitzgibbon, “Bundle Adjustment-A modern synthesis,” in Vision Algorithms: Theory and Practice, ser. Lecture Notes Comput. Sci., B. Triggs, A. Zisserman, and R. Szeliski, Eds. New York: Springer-Verlag, 2000, vol. 1883, pp. 298–372.
[5: Lourakis and Argyros 2009]
Manolis I. A. Lourakis and Antonis A. Argyros, “SBA: A software package for generic sparse bundle adjustment,” ACM Transactions on Mathematical Software (TOMS), vol. 36, Issue 1, No. 2, Mar., 2009.
[6: Zhengyou Zhang 1998]
Zhengyou Zhang, “Determining the epipolar geometry and its uncertainty: A review,” International Journal of Computer Vision, vol. 27, No. 2, pp. 161-195, 1998.
[7: Hartley 1997]
Richard I. Hartley, “Kruppa's equations derived from the fundamental matrix,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, Issue 2, pp. 133-135, Feb. 1997.
[8: Horaud et al. 2000]
Radu Horaud, Gabriella Csurka, and David Demirdijian, “Stereo calibration from rigid motions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, Issue 12, pp. 1446 - 1452, Dec. 2000.
[9: Luong and Faugeras 1997]
Q.-T. Luong and O.D. Faugeras, “Self-Calibration of a Moving Camera from Point Correspondences and Fundamental Matrices,” International Journal of Computer Vision, vol. 22, Issue 3, pp. 261 - 289, Mar. 1997.
[10: Qian and Chellappa 2004]
Gang Qian and Rama Chellappa, “Structure from Motion Using Sequential Monte Carlo Methods,” International Journal of Computer Vision, vol. 59, Issue 1, pp. 5 - 31, Aug. 2004.
[11: Pettersson and Petersson 2005]
Niklas Pettersson and Lars Petersson, “Online stereo calibration using FPGAs,” in Proceedings of IEEE Intelligent Vehicles Symposium, Las Vegas, NV, pp. 55 - 60, June 6-8, 2005.
[12: McLauchlan and Murray 1996]
P. F. McLauchlan and D. W. Murray, “Active camera calibration for a head-eye platform using the variable state-dimension filter,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, No. 1, pp. 15 - 22, Jan. 1996.
[13: Dang et al. 2009]
Thao Dang, Christian Hoffmann, and Christoph Stiller, “Continuous Stereo Self-Calibration by Camera Parameter Tracking,” IEEE Transactions on Image Processing, Vol. 18, Issue 7, pp. 1536-1550, June 12, 2009.
[14: Lowe 2004]
David G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, Vol. 60, Issue 2, pp. 91-110, November 2004.
[15: Bay et al. 2008]
Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool, “Speeded-Up Robust Features (SURF),” Computer Vision and Image Understanding, vol. 110, Issue 3, pp. 346-359, Jun. 2008.
[16: Rublee et al. 2011]
Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski, “ORB: An efficient alternative to SIFT or SURF,” in Proceedings of IEEE International Conference on Computer Vision (ICCV), Barcelona, pp. 2564-2571, Nov. 6-13, 2011.
[17: Hartley and Zisserman 2004]
Richard Hartley and Andrew Zisserman, “Multiple View Geometry in Computer Vision,” 2nd ed., Cambridge University Press, April 19, 2004
[18: Arulampalam et al. 2002]
M. Sanjeev Arulampalam, Simon Maskell, Neil Gordon, and Tim Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Transactions on Signal Processing, vol. 50, Issue 2, pp. 174 - 188, Feb. 2002.
[19: Zhang 1993]
Zhengyou Zhang, “Iterative Point Matching for Registration of Free-Form Curves and Surfaces", International Journal of Computer Vision, vol. 13, pp. 119-152, March 1993.
[20: Men et al. 2011]
Hao Men, Biruk Gebre, Kishore Pochiraju, “Color Point Cloud Registration with 4D ICP Algorithm,” in Proceedings of IEEE International Conference on Robotics and Automation (ICRA), Shanghai, pp. 1511-1516, May 9-13, 2011.
[21: Henry et al. 2013]
Peter Henry, Michael Krainin, Evan Herbst, Xiaofeng Ren, and Dieter Fox, “RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments,” International Journal of Robotics Research, Vol. 32, No. 11, pp. 647-663, Sep. 2013.
[22: Masuda et al. 1996]
T. Masuda, K. Sakaue, and N. Yokoya, “Registration and integration of multiple range images for 3-D model construction,” in Proceedings of the 13th International Conference on Pattern Recognition, Vienna, vol. 1, pp. 879-883, Aug. 25-29, 1996.
[23: Dorai et al. 1997]
Chitra Dorai, John Weng, and Anil K. Jain, “Optimal registration of object views using range data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, No. 10, pp. 1131-1138, Oct. 1997.
[24: Sandhu et al. 2008]
Romeil Sandhu, Samuel Dambreville, and Allen Tannenbaum, “Particle filtering for registration of 2D and 3D point sets with stochastic dynamics,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, pp. 1-8, June 23-28, 2008.
[25: Rangarajan et al. 1997]
Anand Rangarajan, Haili Chui, Eric Mjolsness, Suguna Pappu, Lila Davachi, Patricia Goldman-Rakic, and James Duncan, “A robust point-matching algorithm for autoradiograph alignment,” Medical Image Analysis, vol. 1, Issue 4, pp. 379-398, Sep. 1997.
[26: Fischler and Bolles 1981]
Martin A. Fischler and Robert C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, Issue. 6, pp. 381-395, 1981.
[27: Chui and Rangarajan 2000]
Haili Chui and Anand Rangarajan, “A feature registration framework using mixture models,” in Proceedings of IEEE Workshop on Mathematical Methods in Biomedical Image Analysis, Hilton Head Island, SC, pp. 190-197, June 11-12, 2000.
[28: Jian and Vemuri 2005]
Bing Jian and Baba C. Vemuri, “A robust algorithm for point set registration using mixture of Gaussians,” in Proceedings of IEEE International Conference on Computer Vision, vol. 2, pp. 1246-1251, Oct. 17-21, 2005.
[29: Tsin and Kanade 2004]
Yanghai Tsin and Takeo Kanade, “A Correlation-Based Approach to Robust Point Set Registration,” in Proceedings of European Conference on Computer Vision, Prague, Czech Republic, vol. 3, No. 11, pp. 558-569, May 11-14, 2004.
[30: Granger and Pennec 2002]
Sebastien Granger and Xavier Pennec, “Multi-scale EM-ICP: A Fast and Robust Approach for Surface Registration,” in Proceedings of European Conference on Computer Vision, Copenhagen, Denmark, vol. 2353, pp. 418-432, May 28-31, 2002.
[31: Fitzgibbon 2001]
Andrew Fitzgibbon, “Robust registration of 2D and 3D point sets”, in Proceedings of British Machine Vision Conference, vol. 2, pp. 411-420, Manchester, UK, September 2001
[32: Li et al. 2011]
Hongsheng Li, Tian Shen, and Xiaolei Huang, “Approximately Global Optimization for Robust Alignment of Generalized Shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, Issue 6, pp. 1116-1131, June 2011
[33: Hirschmüller 2008]
Heiko Hirschmüller, “Stereo Processing by Semiglobal Matching and Mutual Information,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 30, No. 2, pp. 328–341, Feb. 2008
[34: Felzenszwalb and Huttenlocher 2004]
Pedro F. Felzenszwalb and Daniel P. Huttenlocher, “Distance Transforms of Sampled Functions,” Cornell Computing and Information Science Technical Report TR2004-1963, September 2004.
[35: Xie et al. 2002]
Xiao-Feng Xie, Wen-Jun Zhang, and Zhi-Lian Yang, “Adaptive particle swarm optimization on individual level,” in Proceedings of the Sixth International Conference on Signal Processing, vol. 2, pp. 1215-1218, Aug. 2002.
[36: Khoshelham and Elberink 2012]
Kourosh Khoshelham and Sander Oude Elberink, “Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications,” Sensors, vol. 12, no. 2, pp. 1437-1454, Feb. 1, 2012.
[37: Laganière 2011]
Robert Laganière, “OpenCV 2 Computer Vision Application Programming Cookbook,” 1st ed., Editor: Neha Shetty, Packt Publishing Ltd., May 2011.
Websites
[38: URG-04LX-UG01 from Hokuyo]
Hokuyo URG-04LX-UG01 Documents- Product Datasheet. (2013, July 30). In Hokuyo Official Website. Retrieved July 19, 2015, from https://www.hokuyo-aut.jp/02sensor/07scanner/urg_04lx_ug01.html
[39: Point Cloud Library from PCL Website 2015]
Point Cloud Library. (2015, July 19). In PCL Website. Retrieved July 19, 2015, from http://pointclouds.org/
[40: OpenCV from OpenCV official website 2015]
Open Source Computer Vision Library. (2015, July 19). In OpenCV official Website. Retrieved July 19, 2015, from http://opencv.org/

