[1] Z. Y. Zhou, A. D. Cheok, Y. Qiu, and X. Yang, “The role of 3-D sound in human reaction and performance in augmented reality environments,” IEEE Trans. on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 37, no. 2, pp. 262–272, 2007.
[2] B. J. Tippetts, D. J. Lee, J. K. Archibald, and K. D. Lillywhite, “Dense disparity real-time stereo vision algorithm for resource-limited systems,” IEEE Trans. on Circuits and Systems for Video Technology, vol. 21, no. 10, pp. 1547–1555, 2011.
[3] Y. Sun, X. Chen, M. Rosato, and L. Yin, “Tracking vertex flow and model adaptation for three-dimensional spatiotemporal face analysis,” IEEE Trans. on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 40, no. 3, pp. 461–474, 2010.
[4] K. Li, Q. Dai, W. Xu, J. Yang, and J. Jiang, “Three-dimensional motion estimation via matrix completion,” IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 539–551, 2012.
[5] Wikipedia contributors, “Kinect,” Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/wiki/Kinect.
[6] M. Betke and L. Gurvits, “Mobile robot localization using landmarks,” IEEE Trans. on Robotics and Automation, vol. 13, no. 2, pp. 251–263, 1997.
[7] Y. Yagi, Y. Nishizawa, and M. Yachida, “Map-based navigation for a mobile robot with omni-directional image sensor COPIS,” IEEE Trans. on Robotics and Automation, vol. 11, no. 5, pp. 634–648, 1995.
[8] J. Gaspar, N. Winters, and J. Santos-Victor, “Vision-based navigation and environmental representations with an omni-directional camera,” IEEE Trans. on Robotics and Automation, vol. 16, no. 6, pp. 890–898, 2000.
[9] E. Menegatti, T. Maeda, and H. Ishiguro, “Image-based memory for robot navigation using properties of the omni-directional images,” Robotics and Autonomous Systems, vol. 47, no. 4, pp. 251–267, 2004.
[10] H. Koyasu, J. Miura, and Y. Shirai, “Recognizing moving obstacles for robot navigation using real-time omni-directional stereo vision,” Journal of Robotics and Mechatronics, vol. 14, no. 2, pp. 147–156, June 2002.
[11] C. Cauchois, E. Brassart, B. Marhic, and C. Drocourt, “An absolute localization method using a synthetic panoramic image base,” Proc. IEEE Workshop on Omnidirectional Vision, Copenhagen, Denmark, pp. 128–135, 2002.
[12] Y. Ogawa, J. H. Lee, S. Mori, A. Takagi, C. Kasuga, and H. Hashimoto, “The positioning system using the digital mark pattern: the method of measurement of a horizontal distance,” Proc. IEEE International Conference on Systems, Man and Cybernetics, pp. 731–741, 1999.
[13] S. J. Ahn, W. Rauh, and M. Recknagel, “Circular coded landmark for optical 3D-measurement and robot vision,” Proc. International Conference on Intelligent Robots and Systems, pp. 1128–1133, 1999.
[14] S. Kim and S. Y. Oh, “SLAM in indoor environments using omni-directional vertical and horizontal line features,” Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31–43, 2008.
[15] J. Kannala and S. Brandt, “A generic camera calibration method for fish-eye lenses,” Proc. 17th International Conference on Pattern Recognition, Cambridge, U.K., vol. 1, pp. 10–13, 2004.
[16] S. Shah and J. K. Aggarwal, “Intrinsic parameter calibration procedure for a (high-distortion) fish-eye lens camera with distortion model and accuracy estimation,” Pattern Recognition, vol. 29, no. 11, pp. 1775–1788, 1996.
[17] Y. C. Liu, K. Y. Lin, and Y. S. Chen, “Bird’s-eye view vision system for vehicle surrounding monitoring,” Proc. Conference on Robot Vision, Berlin, Germany, pp. 207–218, 2008.
[18] S. W. Jeng, “A study on camera calibration and image transformation techniques and their applications,” Ph.D. Dissertation, Institute of Information Science and Engineering, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, June 2007.
[19] L. Tian, L. C. Wu, Y. Wang, and G. S. Yang, “Binocular vision system design and its active object tracking,” Proc. IEEE International Symposium on Computational Intelligence and Design (ISCID), vol. 1, pp. 278–281, 2011.
[20] Y. Xie, Y. N. Wang, B. T. Guo, and H. H. Wang, “Study on human-computer interaction system based on binocular vision technology,” Proc. IEEE International Conference on Instrumentation, Measurement, Computer, Communication and Control (IMCCC), pp. 1541–1546, 2012.
[21] H. Koyasu, J. Miura, and Y. Shirai, “Real-time omnidirectional stereo for obstacle detection and tracking in dynamic environments,” Proc. 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems, Maui, Hawaii, U.S.A., vol. 1, pp. 31–36, 2001.
[22] S. Laakso and M. Laakso, “Design of a body-driven multiplayer game system,” Computers in Entertainment (CIE), vol. 4, no. 4, 2006.
[23] J. J. Magee, M. Betke, J. Gips, M. R. Scott, and B. N. Waber, “A human-computer interface using symmetry between eyes to detect gaze direction,” IEEE Trans. on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 38, no. 6, pp. 1248–1261, Nov. 2008.
[24] X. Zabulis, T. Sarmis, D. Grammenos, and A. A. Argyros, “A multicamera vision system supporting the development of wide-area exertainment applications,” IAPR Conf. on Machine Vision Applications (MVA 2009), Yokohama, Japan, pp. 269–272, 2009.
[25] J. Starck, A. Maki, S. Nobuhara, A. Hilton, and T. Matsuyama, “The multiple-camera 3-D production studio,” IEEE Trans. on Circuits and Systems for Video Technology, vol. 19, no. 6, 2009.
[26] S. Segvic and S. Ribaric, “Determining the absolute orientation in a corridor using projective geometry and active vision,” IEEE Trans. on Industrial Electronics, vol. 48, no. 3, pp. 696–710, June 2001.
[27] R. Carelli, R. Kelly, O. H. Nasisi, C. Soria, and V. Mut, “Control based on perspective lines of a non-holonomic mobile robot with camera-on-board,” Int’l Journal of Control, vol. 79, no. 4, pp. 362–371, 2006.
[28] X. Ying and H. Zha, “Simultaneously calibrating catadioptric camera and detecting line features using Hough transform,” Proc. IEEE/RSJ Int’l Conf. on Intelligent Robots and Systems, pp. 412–417, Aug. 2005.
[29] X. Ying, “Catadioptric camera calibration using geometric invariants,” Proc. IEEE Int’l Conf. on Computer Vision, vol. 2, pp. 1351–1358, Oct. 2003.
[30] F. Duan, F. Wu, M. Zhou, X. Deng, and Y. Tian, “Calibrating effective focal length for central catadioptric cameras using one space line,” Pattern Recognition Letters, vol. 33, pp. 646–653, 2012.
[31] R. G. von Gioi, J. Jakubowicz, J.-M. Morel, and G. Randall, “LSD: a fast line segment detector with a false detection control,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 32, no. 4, pp. 722–732, April 2010.
[32] C. J. Wu and W. H. Tsai, “An omni-vision based localization method for automatic helicopter landing assistance on standard helipads,” Proc. Int’l Conf. on Computer and Automation Engineering, Singapore, pp. 327–332, 2010.
[33] S. J. Maybank, S. Ieng, and R. Benosman, “A Fisher-Rao metric for paracatadioptric images of lines,” Int’l Journal of Computer Vision, vol. 99, no. 2, pp. 147–165, 2012.
[34] K. Yamazawa, Y. Yagi, and M. Yachida, “3D line segment reconstruction by using hyperomni vision and omnidirectional Hough transforming,” Proc. Int’l Conf. on Pattern Recognition, vol. 3, IEEE Computer Society, Washington, DC, USA, pp. 3487–3490, 2000.
[35] S. T. Barnard, “Interpreting perspective images,” Artificial Intelligence, vol. 21, pp. 435–462, 1983.
[36] B. Li, K. Peng, X. Ying, and H. Zha, “Vanishing point detection using cascaded 1D Hough transform from single images,” Pattern Recognition Letters, vol. 33, pp. 1–8, 2012.
[37] S. Wenhardt, B. Deutsch, E. Angelopoulou, and H. Niemann, “Active visual object reconstruction using D-, E-, and T-optimal next best views,” Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1–7, 2007.
[38] H. Zhang, “Two-dimensional optimal sensor placement,” IEEE Trans. on Systems, Man, and Cybernetics, vol. 25, no. 5, pp. 781–792, 1995.
[39] B. Alsadik, M. Gerke, and G. Vosselman, “Automated camera network design for 3D modeling of cultural heritage objects,” Journal of Cultural Heritage, 2013.
[40] C. Hoppe, A. Wendel, S. Zollmann, and S. Kluckner, “Photogrammetric camera network design for micro aerial vehicles,” Proc. 17th Computer Vision Winter Workshop, Feb. 2012.
[41] G. Olague and R. Mohr, “Optimal camera placement for accurate reconstruction,” Pattern Recognition, vol. 35, no. 4, pp. 927–944, 2002.
[42] A. H. Rivera, F. L. Shih, and M. Marefat, “Stereo camera pose determination with error reduction and tolerance satisfaction for dimensional measurements,” Proc. IEEE Int’l Conf. on Robotics and Automation, pp. 423–428, April 2005.
[43] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2000.
[44] C. Geyer and K. Daniilidis, “A unifying theory for central panoramic systems and practical implications,” Proc. Sixth European Conf. on Computer Vision, pp. 445–462, 2000.
[45] C. Geyer and K. Daniilidis, “Catadioptric projective geometry,” Int’l Journal of Computer Vision, vol. 45, no. 3, pp. 223–243, 2001.
[46] X. Ying and Z. Hu, “Catadioptric camera calibration using geometric invariants,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1260–1271, 2004.
[47] X. Ying and Z. Hu, “Can we consider central catadioptric cameras and fisheye cameras within a unified imaging model?” Proc. European Conference on Computer Vision, pp. 442–455, 2004.
[48] X. M. Deng, F. C. Wu, and Y. H. Wu, “An easy calibration method for central catadioptric cameras,” Acta Automatica Sinica, vol. 33, no. 8, pp. 801–808, 2007.
[49] Y. Bastanlar, L. Puig, P. Sturm, J. J. Guerrero, and J. Barreto, “DLT-like calibration of central catadioptric cameras,” Proc. 8th Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras, Oct. 2008.
[50] S. Gasparini, P. Sturm, and J. P. Barreto, “Plane-based calibration of central catadioptric cameras,” Proc. IEEE 12th International Conference on Computer Vision, pp. 1195–1202, 2009.
[51] D. Ioannou, W. Huda, and A. F. Laine, “Circle recognition through a 2D Hough transform and radius histogramming,” Image and Vision Computing, vol. 17, no. 1, pp. 15–26, 1999.
[52] Y. C. Cheng and S. C. Lee, “A new method for quadratic curve detection using K-RANSAC with acceleration technique,” Pattern Recognition, vol. 28, no. 5, pp. 663–682, 1995.
[53] H. Ukida, N. Yamato, Y. Tanimoto, T. Sano, and H. Yamamoto, “Omni-directional 3D measurement by hyperbolic mirror cameras and pattern projection,” Proc. 2008 IEEE Conf. on Instrumentation and Measurement Technology, Victoria, BC, Canada, pp. 365–370, 2008.
[54] R. I. Hartley and P. Sturm, “Triangulation,” Proc. ARPA Image Understanding Workshop, pp. 957–966, 1994.
[55] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, p. 72.
[56] S. Baker and S. Nayar, “A theory of single-viewpoint catadioptric image formation,” Int’l Journal of Computer Vision, vol. 35, no. 2, pp. 175–196, 1999.
[57] D. Pedoe, Circles: A Mathematical View (Spectrum), 2nd ed., The Mathematical Association of America, 1997.
[58] R. S. Irving, Integers, Polynomials, and Rings, Springer, New York, 2004.
[59] M. de Berg, M. van Kreveld, M. Overmars, and O. Schwarzkopf, Computational Geometry: Algorithms and Applications, Springer, New York, 1997.
[60] O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint, MIT Press, Cambridge, MA, 1996.
[61] J. E. Dennis and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice Hall, 1983.
[62] J. P. Barreto and H. Araujo, “Geometric properties of central catadioptric line images,” Proc. Seventh European Conference on Computer Vision, pp. 237–251, 2002.
[63] T. Apostol, Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra, 2nd ed., Wiley, June 1967.