[1] M. Hansen and G. Sommer, “Active depth estimation with gaze and vergence control using Gabor filters,” Proceedings of the 13th International Conference on Pattern Recognition, 1996, vol. 1, pp. 287-291, Aug. 1996.
[2] Y. Y. Schechner and N. Kiryati, “Depth from defocus vs. stereo: how different really are they?,” in ICPR 1998, vol. 2, pp. 1784-1786, Aug. 1998.
[3] R. Feris, R. Raskar, L. Chen, K. H. Tan and M. Turk, “Multiflash stereo: depth-edge-preserving stereo with small baseline illumination,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, pp. 147-159, Jan. 2008.
[4] T. Sato and N. Yokoya, “Multi-baseline stereo by maximizing total number of interest points,” Annual Conference on SICE, 2007, pp. 1471-1477, Sept. 2007.
[5] M. Wang, X. Lv and X. Huang, “Self-optimizing visual servoing control for microassembly robotic depth motion,” International Conference on Information Acquisition, 2007, pp. 482-486, July 2007.
[6] T. Nakagawa, Y. Hayashi, Y. Hatanaka, A. Aoyama, T. Hara, A. Fujita, M. Kakogawa, H. Fujita and T. Yamamoto, “Three-dimensional reconstruction of optic nerve head from stereo fundus images and its quantitative estimation,” IEEE International Conference on EMBS, 2007, pp. 6747-6750, Aug. 2007.
[7] J. Yang, M. Zhang, Y. Wang and Y. Shang, “A monocular visual servoing control system for mobile robot,” IEEE Conference on Automation and Logistics, 2007, pp. 574-579, Aug. 2007.
[8] P. Merrell, A. Akbarzadeh, L. Wang, P. Mordohai, J. M. Frahm, R. Yang, D. Nister and M. Pollefeys, “Real-time visibility-based fusion of depth maps,” IEEE International Conference on Computer Vision, 2007, pp. 1-8, Oct. 2007.
[9] F. Boughorbel, “A new multiple-windows depth from stereo algorithm for 3D displays,” 3DTV Conference, 2007, pp. 1-4, May 2007.
[10] J. H. Piater, R. A. Grupen and K. Ramamritham, “Learning real-time stereo vergence control,” IEEE International Symposium on Intelligent Control/Intelligent Systems and Semiotics, 1999, pp. 272-277, Sept. 1999.
[11] R. Bajcsy, “Active perception,” Proceedings of the IEEE, vol. 76, pp. 966-1005, Aug. 1988.
[12] K. S. Pradeep and A. N. Rajagopalan, “Improving shape from focus using defocus information,” 18th International Conference on Pattern Recognition, 2006, vol. 1, pp. 731-734, Sept. 2006.
[13] Y. Xiong and S. A. Shafer, “Depth from focusing and defocusing,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 68-73, June 1993.
[14] S. K. Nayar and Y. Nakagawa, “Shape from focus,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, pp. 824-831, Aug. 1994.
[15] B. Jahne and P. Geissler, “Depth from focus with one image,” IEEE Computer Society Conference on CVPR, 1994, pp. 713-717, June 1994.
[16] J. Yun and T. S. Choi, “Accurate 3-D shape recovery using curved window focus measure,” ICIP, 1999, vol. 3, pp. 910-914, Oct. 1999.
[17] P. Favaro, S. Soatto, M. Burger and S. J. Osher, “Shape from defocus via diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, pp. 518-531, March 2008.
[18] P. Favaro and S. Soatto, “Learning shape from defocus,” Springer Berlin Heidelberg, 2002.
[19] K. S. Pradeep and A. N. Rajagopalan, “Improving shape from focus using defocus information,” 18th International Conference on Pattern Recognition, 2006, vol. 1, pp. 731-734, Sept. 2006.
[20] M. Asif, A. S. Malik and T. S. Choi, “3D shape recovery from image defocus using wavelet analysis,” IEEE International Conference on Image Processing, 2005, vol. 1, pp. 1025-1028, Sept. 2005.
[21] M. Subbarao, “Parallel depth recovery by changing camera parameters,” Second International Conference on Computer Vision, 1988, pp. 149-155, Dec. 1988.
[22] Y. Xiong and S. A. Shafer, “Depth from focusing and defocusing,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 68-73, June 1993.
[23] M. Subbarao and T. C. Wei, “Depth from defocus and rapid autofocusing: a practical approach,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 773-776, June 1992.
[24] Y. Y. Schechner and N. Kiryati, “Depth from defocus vs. stereo: how different really are they?,” in ICPR 1998, vol. 2, pp. 1784-1786, Aug. 1998.
[25] A. N. Rajagopalan and S. Chaudhuri, “A variational approach to recovering depth from defocused images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 1158-1164, Oct. 1997.
[26] Y. H. Kao, C. K. Liang, L. W. Chang and H. H. Chen, “Depth detection of light field,” IEEE International Conference on Acoustics, Speech and Signal Processing, 2007, vol. 1, pp. 893-896, April 2007.
[27] S. Y. Park, “An image-based calibration technique of spatial domain depth-from-defocus,” Pattern Recognition Letters, vol. 27, pp. 1318-1324, Sept. 2006.
[28] V. Aslantas and D. T. Pham, “Depth from automatic defocusing,” Optics Express, vol. 15, pp. 1011-1023, Feb. 2007.
[29] M. Watanabe and S. K. Nayar, “Minimal operator set for passive depth from focus,” IEEE Computer Society Conference on CVPR, 1996, pp. 431-438, June 1996.
[30] P. Favaro and S. Soatto, “A geometric approach to shape from defocus,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, pp. 406-417, March 2005.
[31] Y. Lou, P. Favaro, A. L. Bertozzi and S. Soatto, “Autocalibration and uncalibrated reconstruction of shape from defocus,” IEEE Conference on CVPR, 2007, pp. 1-8, June 2007.
[32] M. Gokstorp, “Computing depth from out-of-focus blur using a local frequency representation,” IAPR International Conference A: Computer Vision & Image Processing, 1994, vol. 1, pp. 153-158, Oct. 1994.
[33] A. Mennucci and S. Soatto, “On observing shape from defocused images,” International Conference on Image Analysis and Processing, pp. 550-555, Sept. 1999.
[34] A. P. Pentland, “A new sense for depth of field,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 9, no. 4, pp. 523-531, 1987.
[35] M. Subbarao and N. Gurumoorthy, “Depth recovery from blurred edges,” IEEE Conference on Computer Vision and Pattern Recognition, 1988, pp. 498-503, June 1988.
[36] M. Maitre, Y. Shinagawa and M. N. Do, “Rate-distortion optimal depth maps in the wavelet domain for free-viewpoint rendering,” IEEE International Conference on Image Processing, 2007, vol. 5, pp. 125-128, Oct. 2007.
[37] S. Ince, E. Martinian, S. Yea and A. Vetro, “Depth estimation for view synthesis in multiview video coding,” 3DTV Conference, 2007, pp. 1-4, May 2007.
[38] L. Zhang and S. Nayar, “Projection defocus analysis for scene capture and image display,” ACM SIGGRAPH International Conference on Computer Graphics and Interactive Techniques, 2006, pp. 907-915.
[39] H. M. Ozaktas, Z. Zalevsky and M. A. Kutay, “The Fractional Fourier Transform with Applications in Optics and Signal Processing,” John Wiley & Sons, Ltd., New York, 2001.
[40] B. Barshan, M. A. Kutay and H. M. Ozaktas, “Optimal filtering with linear canonical transformations,” Optics Communications, vol. 135, pp. 32-36, Feb. 1997.
[41] J. Immerkær, “Use of blur-space for deblurring and edge-preserving noise smoothing,” IEEE Transactions on Image Processing, vol. 10, no. 6, pp. 837-840, June 2001.
[42] A. Kubota and K. Aizawa, “Inverse filters for reconstruction of arbitrarily focused images from two differently focused images,” IEEE Conference on Image Processing, 2000, vol. 1, pp. 101-104, Sept. 2000.
[43] A. Kubota, K. Kodama and K. Aizawa, “Registration and blur estimation methods for multiple differently focused images,” IEEE Conference on Image Processing, 1999, vol. 2, pp. 447-451, Oct. 1999.
[44] K. Uehira, M. Suzuki and T. Abekawa, “3-D display using motion parallax for extended-depth perception,” IEEE International Conference on Multimedia and Expo, 2007, pp. 1742-1745, July 2007.
[45] M. Sorel and J. Flusser, “Space-variant restoration of images degraded by camera motion blur,” IEEE Transactions on Image Processing, vol. 17, pp. 105-116, Feb. 2008.
[46] M. R. Banham and A. K. Katsaggelos, “Digital image restoration,” IEEE Signal Processing Magazine, vol. 14, pp. 24-41, March 1997.
[47] Jos. Schneider Optische Werke GmbH, “The way a zoom lens works,” Feb. 2008. [Online]. Available: http://www.schneiderkreuznach.com/knowhow/zoom_e.htm. [Accessed: Mar. 9, 2008].