[1] T. Iinuma, H. Murata, S. Yamashita, and K. Oyamada, “Natural Stereo Depth Creation Methodology for a Real-time 2D-to-3D Image Conversion,” SID Symposium Digest of Technical Papers, pp. 1212–1215, 2000.
[2] C. C. Cheng, T. L. Chung, Y. M. Tsai, and L. G. Chen, “Hybrid Depth Cueing for 2D-to-3D Conversion System,” in Proc. of Stereoscopic Displays and Applications XX, 2009.
[3] S. Battiato, A. Capra, S. Curti, and M. L. Cascia, “3D Stereoscopic Image Pairs by Depth-Map Generation,” in Proc. of International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT), pp. 124–131, 2004.
[4] D. Hoiem, A. Stein, A. A. Efros, and M. Hebert, “Recovering Occlusion Boundaries from a Single Image,” in Proc. of IEEE International Conference on Computer Vision (ICCV), 2007.
[5] R. I. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, Cambridge, UK, 2000.
[6] M. Pollefeys, L. V. Gool, M. Vergauwen, F. Verbiest, K. Cornelis, J. Tops, and R. Koch, “Visual Modeling with a Hand-held Camera,” International Journal of Computer Vision (IJCV), vol. 59, no. 3, pp. 207–232, 2004.
[7] C. Tomasi and T. Kanade, “Detection and Tracking of Point Features,” Carnegie Mellon Univ., Pittsburgh, PA, Tech. Rep. CMU-CS-91-132, Apr. 1991.
[8] T. Jebara, A. Azarbayejani, and A. Pentland, “3D Structure from 2D Motion,” IEEE Signal Processing Magazine, vol. 16, no. 3, pp. 66–84, 1999.
[9] M. Z. Brown, D. Burschka, and G. Hager, “Advances in Computational Stereo,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 993–1008, 2003.
[10] D. Scharstein and R. Szeliski, “A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms,” International Journal of Computer Vision (IJCV), vol. 47, pp. 7–42, 2002.
[11] S. Knorr and T. Sikora, “An Image-Based Rendering (IBR) Approach for Realistic Stereo View Synthesis of TV Broadcast Based on Structure from Motion,” in Proc. of IEEE International Conference on Image Processing (ICIP), San Antonio, USA, 2007.
[12] L. McMillan, “An Image-Based Approach to Three-Dimensional Computer Graphics,” Ph.D. dissertation, University of North Carolina, 1997.
[13] I. Ideses, L. P. Yaroslavsky, and B. Fishbain, “Real-time 2D to 3D Video Conversion,” Journal of Real-Time Image Processing, vol. 2, no. 1, pp. 3–9, 2007.
[14] M. Kunter, S. Knorr, A. Krutz, and T. Sikora, “Unsupervised Object Segmentation for 2D to 3D Conversion,” in Proc. of SPIE, vol. 7237, 2009.
[15] A. Krutz, M. Kunter, M. Mandal, M. Frater, and T. Sikora, “Motion-based Object Segmentation Using Sprites and Anisotropic Diffusion,” in Proc. of 8th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), 2007.
[16] S. A. Valencia and R. M. Rodríguez-Dagnino, “Synthesizing Stereo 3D Views from Focus Cues in Monoscopic 2D Images,” in Proc. of SPIE, vol. 5006, pp. 377–388, 2003.
[17] J. M. Geusebroek and A. W. M. Smeulders, “A Six-Stimulus Theory for Stochastic Texture,” International Journal of Computer Vision (IJCV), vol. 62, pp. 7–16, 2005.
[18] V. Nedovic, A. W. M. Smeulders, A. Redert, and J. M. Geusebroek, “Depth Estimation via Stage Classification,” in Proc. of 3DTV Conference, pp. 77–80, 2008.
[19] D. A. Forsyth, “Shape from Texture and Integrability,” in Proc. of International Conference on Computer Vision (ICCV), vol. 2, pp. 447–452, 2001.
[20] A. M. Loh and R. Hartley, “Shape from Non-Homogeneous, Non-Stationary, Anisotropic, Perspective Texture,” in Proc. of the British Machine Vision Conference (BMVC), 2005.
[21] Y. J. Jung, A. Baik, J. Kim, and D. Park, “A Novel 2D-to-3D Conversion Technique Based on Relative Height Depth Cue,” in Proc. of SPIE, vol. 7237, 2009.
[22] A. Saxena, S. H. Chung, and A. Y. Ng, “Learning Depth from Single Monocular Images,” in Advances in Neural Information Processing Systems (NIPS), vol. 18, 2005.
[23] A. Saxena, S. H. Chung, and A. Y. Ng, “3-D Depth Reconstruction from a Single Still Image,” International Journal of Computer Vision (IJCV), vol. 76, no. 1, 2007.
[24] T. Okino, H. Murata, K. Taima, T. Iinuma, and K. Oketani, “New Television with 2D to 3D Image Conversion Technologies,” in Proc. of SPIE, Stereoscopic Displays and Virtual Reality Systems III, vol. 2653, pp. 96–103, 1996.
[25] H. Murata, Y. Mori, S. Yamashita, A. Maenaka, S. Okada, K. Oyamada, and S. Kishimoto, “A Real-Time Image Conversion Technique Using Computed Image Depth,” SID Symposium Digest of Technical Papers, vol. 29, pp. 919–922, 1998.
[26] D. Martin, C. Fowlkes, and J. Malik, “Learning to Detect Natural Image Boundaries Using Local Brightness, Color and Texture Cues,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 530–549, 2004.
[27] D. Hoiem, A. Efros, and M. Hebert, “Recovering Surface Layout from an Image,” International Journal of Computer Vision (IJCV), vol. 75, no. 1, pp. 151–172, 2007.
[28] Y. R. Horng, Y. C. Tseng, and T. S. Chang, “Stereoscopic Images Generation with Directional Gaussian Filter,” in Proc. of IEEE International Symposium on Circuits and Systems (ISCAS), pp. 2650–2653, 2010.
[29] A. Körbes, R. Lotufo, G. B. Vitor, and J. V. Ferreira, “A Proposal for a Parallel Watershed Transform Algorithm for Real-Time Segmentation,” in Proc. of Workshop de Visão Computacional (WVC), 2009.
[30] A. R. Smith, “Color Gamut Transform Pairs,” Computer Graphics, vol. 12, pp. 12–19, 1978.
[31] J. R. Smith and S. F. Chang, “VisualSEEk: A Fully Automated Content-Based Image Query System,” in Proc. of ACM Multimedia Conference, pp. 87–98, 1996.
[32] T. Leung and J. Malik, “Representing and Recognizing the Visual Appearance of Materials Using Three-Dimensional Textons,” International Journal of Computer Vision (IJCV), vol. 43, no. 1, pp. 29–44, 2001.
[33] D. H. Ballard, “Generalizing the Hough Transform to Detect Arbitrary Shapes,” Pattern Recognition, vol. 13, no. 2, pp. 111–122, 1981.
[34] P. Felzenszwalb and D. Huttenlocher, “Efficient Graph-Based Image Segmentation,” International Journal of Computer Vision (IJCV), vol. 59, no. 2, 2004.
[35] J. L. Schneiter and N. R. Corby, “Variable Depth Range Camera,” US Patent No. 4,963,017, General Electric Company, Schenectady, NY, 1990.
[36] Y. Su, M. T. Sun, and V. Hsu, “Global Motion Estimation from Coarsely Sampled Motion Vector Field and the Applications,” in Proc. of IEEE International Symposium on Circuits and Systems (ISCAS), vol. 2, pp. 628–631, 2003.
[37] S. Makrogiannis, G. Economou, and S. Fotopoulos, “A Region Dissimilarity Relation that Combines Feature-Space and Spatial Information for Color Image Segmentation,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 35, no. 1, pp. 44–53, 2005.
[38] D. B. K. Trieu and T. Maruyama, “Real-Time Image Segmentation Based on a Parallel and Pipelined Watershed Algorithm,” Journal of Real-Time Image Processing, vol. 2, no. 4, pp. 319–329, 2007.