臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: 吳佳祥 (Chia-Hsiang Wu)
Title: 基於幾何條件由影像序列重構人體三維腔室 (3D Reconstruction of Human Inner Structure from Video by Using Geometric Constraints)
Advisor: 孫永年 (Yung-Nien Sun)
Degree: Ph.D.
Institution: National Cheng Kung University
Department: Computer Science and Information Engineering (graduate division)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2007
Academic year of graduation: 95 (2006–2007)
Language: English
Pages: 88
Keywords (Chinese): 特徵追蹤; 內視鏡; 三維重構; 微創手術
Keywords (English): endoscope; feature tracking; 3D reconstruction; minimally invasive surgery
Metrics:
  • Cited by: 0
  • Views: 250
  • Downloads: 39
  • Bookmarked: 0
Abstract (Chinese, translated): In recent years, endoscopic surgery has seen very wide clinical application. Because physicians can diagnose and treat in a minimally invasive manner, post-operative wounds are small, effectively shortening patients' hospitalization and recovery time. However, an endoscopy system requires the physician to observe lesion images on a monitor while operating surgical tools, which imposes limitations and inconveniences on hand-eye coordination, physical feedback, visibility, and depth perception. Although endoscopy systems providing stereoscopic images now address some of these problems, the actual structure is still unavailable. We therefore propose a method based on geometric constraints that estimates internal body structure from endoscopic image sequences combined with a 3D tracker. First, we track feature points in the endoscopic images, including correcting imaging distortion, enhancing image quality, and removing erroneous results; from frame-to-frame feature correspondences we capture the relative motion between the body cavity and the endoscopic camera. Using these motions as reflected in the images, together with the geometric constraints provided by a surgical instrument equipped with a 3D tracker, we perform the reconstruction by matrix factorization, using a rank constraint, an augmented measurement matrix, and updating of the 3D points. In addition, we propose a perturbation-based Euclidean reconstruction to correct the initially skewed shape. Both simulated and real-object experiments show that the method reconstructs with better accuracy than traditional methods and completes the entire process in only a few seconds, so it can be applied in the development of online surgical systems and can provide the information needed for lesion shape and parameter measurement as a clinical reference for treatment.
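The distortion-correction step mentioned above can be illustrated with a common first-order radial lens model; the specific model and coefficients used in the thesis may differ, and `undistort_points`, `k1`, and the fixed-point iteration are illustrative, not taken from the thesis:

```python
import numpy as np

def undistort_points(pts, k1, center):
    """Invert a first-order radial distortion model x_d = x_u * (1 + k1 * r_u^2)
    by fixed-point iteration. pts: (N, 2) distorted points; center: (2,)."""
    d = pts - center                      # coordinates relative to distortion center
    u = d.copy()                          # initial guess: undistorted == distorted
    for _ in range(20):
        r2 = (u ** 2).sum(axis=1, keepdims=True)
        u = d / (1.0 + k1 * r2)           # refine the undistorted position estimate
    return u + center

# Round-trip check: synthetically distort points, then undistort them.
rng = np.random.default_rng(1)
center = np.array([0.0, 0.0])
u_true = rng.uniform(-0.5, 0.5, size=(10, 2))
r2 = (u_true ** 2).sum(axis=1, keepdims=True)
distorted = u_true * (1.0 + 0.2 * r2) + center
print(np.allclose(undistort_points(distorted, 0.2, center), u_true))  # prints True
```

For the moderate distortion typical of endoscope optics the iteration contracts quickly, so a fixed small number of iterations recovers the undistorted coordinates to numerical precision.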
Abstract (English): The endoscope is a popular imaging modality used in many pre-evaluations and surgical treatments, and it is one of the essential tools of minimally invasive surgery. Compared with traditional open surgery, infection and scarring are reduced, hospital stays and recovery times are shortened, and post-operative discomfort is lessened. Regular endoscopy systems display internal cavities on video monitors for surgeons to observe lesion areas; however, this arrangement has several drawbacks, such as limited visibility, lack of physical feedback, indirect hand-eye coordination, and poor depth perception. Although stereoendoscopy systems can display three-dimensional (3D) images, the real anatomical structure of the observed lesion remains unavailable and can only be inferred by the surgeon. In this thesis, we propose a constraint-based factorization method (CBFM) for reconstructing 3D anatomical structures from 2D endoscopic images. The proposed method incorporates geometric constraints from a tracked surgical instrument into the traditional factorization method, which is based on frame-to-frame feature motion in the endoscopically viewed scene. First, we correct image distortion, enhance image quality, and eliminate outliers while detecting and tracking image feature points. Using the tracked points, we accomplish the reconstruction through a rank constraint, an augmented measurement matrix, and updating of the 3D points. In addition, we present a perturbation-based Euclidean reconstruction scheme to correct the estimated shape. In experiments with both real and synthetic data, the proposed approach yields real-scale 3D reconstructions with greater accuracy than traditional methods. The reconstruction process completes in seconds, making it suitable for online surgical applications that provide additional 3D shape information, critical distance monitoring, and proper warnings.
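The rank constraint at the core of the factorization follows the classic Tomasi–Kanade decomposition: centered 2D feature tracks are stacked into a measurement matrix whose rank-3 SVD yields affine motion and shape, up to a 3×3 ambiguity that the Euclidean (metric) upgrade later resolves. A minimal NumPy sketch under noise-free affine projection (the augmented measurement matrix and instrument constraints of the CBFM are omitted; `factorize` is an illustrative name, not from the thesis):

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a centered 2F x P measurement matrix W into
    affine motion M (2F x 3) and shape S (3 x P), up to a 3x3 ambiguity."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    sqrt_s = np.sqrt(s[:3])
    M = U[:, :3] * sqrt_s                 # camera (motion) rows
    S = sqrt_s[:, None] * Vt[:3]          # 3D shape coordinates
    return M, S

# Synthetic check: 5 affine cameras (2 rows each) observing 20 points.
rng = np.random.default_rng(0)
M_true = rng.standard_normal((10, 3))
S_true = rng.standard_normal((3, 20))
W = M_true @ S_true                       # centered, noise-free measurements
M, S = factorize(W)
print(np.allclose(M @ S, W))              # prints True: W has rank 3
```

With noisy tracks the truncated SVD gives the best rank-3 approximation in the least-squares sense, which is what makes the factorization robust enough to use on real feature trajectories.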
CHAPTER 1 INTRODUCTION
1.1 MOTIVATION
1.2 RELATED WORK
1.3 PROBLEM FORMULATION
1.4 CONTRIBUTIONS AND THESIS ORGANIZATION
CHAPTER 2 FACTORIZATION OVERVIEW
2.1 CAMERA MODEL
2.1.1 Weak perspective projection
2.1.2 Paraperspective projection
2.1.3 Orthographic projection
2.2 METHOD
2.3 SOLVING FOR MOTION AND SHAPE
2.4 EUCLIDEAN RECONSTRUCTION
2.5 REVERSAL AMBIGUITY REMOVAL
CHAPTER 3 ENDOSCOPIC FEATURE TRACKING
3.1 FEATURE TRAJECTORIES
3.2 FEATURE TRACKING
3.2.1 Enhancing low-contrast images
3.2.2 Removing uneven brightness
3.2.3 Feature extraction
3.2.4 Building correspondences across frames
3.2.5 Outlier rejection
3.3 PERFORMANCE EVALUATION
CHAPTER 4 CONSTRAINT-BASED FACTORIZATION
4.1 PRELIMINARY
4.2 OVERVIEW OF CBFM
4.3 PERTURBATION-BASED EUCLIDEAN RECONSTRUCTION
4.4 IN-SITU CONSTRAINTS
4.5 TRANSLATION ESTIMATION
4.6 REJECTION OF DEGENERATE CONFIGURATIONS
4.7 FREE-POSITION CONSTRAINTS
CHAPTER 5 EXPERIMENTAL RESULTS
5.1 RESULTS FROM SYNTHETIC DATA
5.2 RESULTS FROM REAL DATA
5.3 COMPUTATION TIME ANALYSIS
CHAPTER 6 DISCUSSION
6.1 PROTOTYPE DESIGN AND POTENTIAL APPLICATION
6.2 LIMITATIONS AND CONTINUING RESEARCH
CHAPTER 7 CONCLUSION
REFERENCES
VITA