[1] D. Tomazevic, B. Likar, and F. Pernus, “3-D/2-D registration by integrating 2-D information in 3-D,” IEEE Trans. Med. Imag., vol. 25, no. 1, pp. 17-27, 2006.
[2] D.J. Vining, D. Gelfand, R. Bechtold, E. Scharling, E.F. Grishaw, and R. Shifirin, “Technical feasibility of colon imaging with helical CT and virtual reality,” in Proc. Ann. Meeting Amer. Roentgen Ray Soc., p. 104, 1994.
[3] L. Hong, S. Muraki, A. Kaufman, D. Bartz, and T. He, “Virtual voyage: Interactive navigation in the human colon,” in Proc. SIGGRAPH, pp. 27-34, 1997.
[4] D. Bartz and M. Skalej, “VIVENDI - A virtual ventricle endoscopy system for virtual medicine,” in Proc. Symposium on Visualization, pp. 155-166, 324, 1999.
[5] T.Y. Lee, P.H. Lin, C.H. Lin, Y.N. Sun, and X.Z. Lin, “Interactive 3D virtual colonoscopy system,” IEEE Trans. Inf. Technol. Biomed., vol. 3, no. 2, pp. 139-150, 1999.
[6] R. Wegenkittl, A. Vilanova Bartrolí, B. Hegedüs, D. Wagner, M.C. Freund, and E. Gröller, “Mastering interactive virtual bronchioscopy on a low-end PC,” in Proc. IEEE Visualization, pp. 461-464, 2000.
[7] T.K. Sinha, B.M. Dawant, V. Duay, D.M. Cash, R.J. Weil, R.C. Thompson, K.D. Weaver, and M.I. Miga, “A method to track cortical surface deformations using a laser range scanner,” IEEE Trans. Med. Imag., vol. 24, no. 6, pp. 767-781, 2005.
[8] M. Hayashibe, N. Suzuki, A. Hattori, and Y. Nakamura, “Intraoperative fast 3D shape recovery of abdominal organs in laparoscopy,” in Proc. MICCAI, LNCS 2489, pp. 356-363, 2002.
[9] G.J. Tearney, M.E. Brezinski, B.E. Bouma, S.A. Boppart, C. Pitris, J.F. Southern, and J.G. Fujimoto, “In vivo endoscopic optical biopsy with optical coherence tomography,” Science, vol. 276, no. 5321, pp. 2037-2039, 1997.
[10] S.G. Demos, M. Staggs, and H.B. Radousky, “Endoscopic method for large-depth optical imaging of interior body organs,” Electronics Letters, vol. 38, no. 4, pp. 155-157, 2002.
[11] C. Daul, P. Graebling, A. Tiedeu, and D. Wolf, “3-D reconstruction of microcalcification clusters using stereo imaging: Algorithm and mammographic unit calibration,” IEEE Trans. Biomed. Eng., vol. 52, no. 12, pp. 2058-2073, 2005.
[12] S.K. Yoo, G. Wang, F. Collison, J.T. Rubinstein, M.W. Vannier, H.J. Kim, and N.H. Kim, “Three-dimensional localization of cochlear implant electrodes using epipolar stereophotogrammetry,” IEEE Trans. Biomed. Eng., vol. 51, no. 5, pp. 838-846, 2004.
[13] G.J. Bootsma and G.W. Brodland, “Automated 3-D reconstruction of the surface of live early-stage amphibian embryos,” IEEE Trans. Biomed. Eng., vol. 52, no. 8, pp. 1407-1414, 2005.
[14] D. Stoyanov, A. Darzi, and G.Z. Yang, “Dense 3D depth recovery for soft tissue deformation during robotically assisted laparoscopic surgery,” in Proc. MICCAI, LNCS 3217, pp. 41-48, 2004.
[15] F. Mourgues, F. Devernay, and È. Coste-Manière, “3D reconstruction of the operating field for image overlay in 3D-endoscopic surgery,” in Proc. IEEE and ACM Symposium on Augmented Reality, pp. 191-192, 2001.
[16] T. Okatani and K. Deguchi, “Shape reconstruction from an endoscope image by shape from shading technique for a point light source at the projection center,” Comput. Vis. Image Understanding, vol. 66, no. 2, pp. 119-131, 1997.
[17] I. Bricault, G. Ferretti, and P. Cinquin, “Registration of real and CT-derived virtual bronchoscopic images to assist transbronchial biopsy,” IEEE Trans. Med. Imag., vol. 17, no. 5, pp. 703-714, 1998.
[18] K. Deguchi, T. Sasano, H. Arai, and H. Yoshikawa, “3D shape reconstruction from endoscope image sequences by the factorization method,” IEICE Trans. Information and Systems, vol. E79-D, no. 9, pp. 1329-1336, 1996.
[19] D. Burschka, M. Li, M. Ishii, R.H. Taylor, and G.D. Hager, “Scale-invariant registration of monocular endoscopic images to CT-scans for sinus surgery,” Med. Image Anal., vol. 9, pp. 413-426, 2005.
[20] C. Tomasi and T. Kanade, “Shape and motion from image streams under orthography: A factorization method,” Int’l J. Computer Vision, vol. 9, no. 2, pp. 137-154, 1992.
[21] C. Poelman and T. Kanade, “A paraperspective factorization method for shape and motion recovery,” IEEE Trans. Pattern Anal. Machine Intell., vol. 19, no. 3, pp. 206-218, 1997.
[22] H. Aanæs, R. Fisker, K. Åström, and J.M. Carstensen, “Robust factorization,” IEEE Trans. Pattern Anal. Machine Intell., vol. 24, no. 9, pp. 1215-1225, 2002.
[23] M. Han and T. Kanade, “Multiple motion scene reconstruction with uncalibrated cameras,” IEEE Trans. Pattern Anal. Machine Intell., vol. 25, no. 7, pp. 884-894, 2003.
[24] M. Wilczkowiak, P. Sturm, and E. Boyer, “Using geometric constraints through parallelepipeds for calibration and 3D modeling,” IEEE Trans. Pattern Anal. Machine Intell., vol. 27, no. 2, pp. 194-207, 2005.
[25] S. Christy and R. Horaud, “Euclidean shape and motion from multiple perspective views by affine iteration,” IEEE Trans. Pattern Anal. Machine Intell., vol. 18, no. 11, pp. 1098-1104, 1996.
[26] C.H. Wu, Y.C. Chen, C.Y. Liu, C.C. Chang, and Y.N. Sun, “Automatic extraction and visualization of human inner structures from endoscopic image sequences,” in Proc. SPIE, vol. 5369, pp. 464-473, 2004.
[27] J. Shi and C. Tomasi, “Good features to track,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 593-600, 1994.
[28] J.Y. Aloimonos, “Perspective approximations,” Image Vis. Comput., vol. 8, no. 3, pp. 177-192, 1990.
[29] R.B. Schnabel and E. Eskow, “A revised modified Cholesky factorization algorithm,” SIAM J. Optim., vol. 9, no. 4, pp. 1135-1148, 1999.
[30] R.B. Schnabel and E. Eskow, “A new modified Cholesky factorization,” SIAM J. Sci. Statist. Comput., vol. 11, no. 6, pp. 1136-1158, 1990.
[31] P.E. Gill, W. Murray, and M.H. Wright, Practical Optimization, Academic Press, London, 1981, pp. 108-111.
[32] S. Umeyama, “Least-squares estimation of transformation parameters between two point patterns,” IEEE Trans. Pattern Anal. Machine Intell., vol. 13, no. 4, pp. 376-380, 1991.
[33] Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell., vol. 22, no. 11, pp. 1330-1334, 2000.
[34] T. Morita and T. Kanade, “A sequential factorization method for recovering shape and motion from image streams,” IEEE Trans. Pattern Anal. Machine Intell., vol. 19, no. 8, pp. 858-867, 1997.
[35] D.Q. Huynh and A. Heyden, “Outlier detection in video sequences under affine projection,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, vol. 1, pp. 695-701, 2001.
[36] C. Tomasi and T. Kanade, “Detection and tracking of point features,” Carnegie Mellon University Tech. Rep. CMU-CS-91-132, 1991.
[37] J.P. Helferty, C. Zhang, G. McLennan, and W.E. Higgins, “Videoendoscopic distortion correction and its application to virtual guidance of endoscopy,” IEEE Trans. Med. Imag., vol. 20, no. 7, pp. 605-617, 2001.
[38] T.W. Ridler and S. Calvard, “Picture thresholding using an iterative selection method,” IEEE Trans. Syst., Man, Cybern., vol. 8, no. 8, pp. 630-632, 1978.
[39] J.R. Shewchuk, “Delaunay refinement algorithms for triangular mesh generation,” Computational Geometry: Theory and Applications, vol. 22, no. 1-3, pp. 21-74, 2002.
[40] D. Dey, D.G. Gobbi, P.J. Slomka, K.J.M. Surry, and T.M. Peters, “Automatic fusion of freehand endoscopic brain images to three-dimensional surfaces: Creating stereoscopic panoramas,” IEEE Trans. Med. Imag., vol. 21, no. 1, pp. 23-30, 2002.