[1]W. Burgard, A. B. Cremers, D. Fox, D. Hahnel, G. Lakemeyer, D. Schulz, W. Steiner, and S. Thrun, “Experiences with an interactive museum tour-guide robot,” Artificial Intelligence, pp. 3-55, 1999.
[2]NASA Jet Propulsion Laboratory, “Mars Exploration Rover Mission,” http://marsrovers.jpl.nasa.gov/overview/
[3]G. N. DeSouza and A. C. Kak, “Vision for mobile robot navigation: a survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237-267, 2002.
[4]D. Filliat and J. A. Meyer, “Map-based navigation in mobile robots: I. A review of localization strategies,” Cognitive Systems Research, vol. 4, pp. 243-282, 2003.
[5]J. A. Meyer and D. Filliat, “Map-based navigation in mobile robots: II. A review of map-learning and path-planning strategies,” Cognitive Systems Research, vol. 4, pp. 283-317, 2003.
[6]F. Dellaert, D. Fox, W. Burgard, and S. Thrun, “Monte Carlo localization for mobile robots,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1322-1328, 1999.
[7]A. Kosaka and A. Kak, “Fast Vision-Guided Mobile Robot Navigation Using Model-based Reasoning and Prediction of Uncertainties,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2177-2186, 1992.
[8]S. Thrun, “Bayesian Landmark Learning for Mobile Robot Localization,” Machine Learning, vol. 33, no. 1, pp. 41-76, 1998.
[9]S. Thrun, “Finding Landmarks for Mobile Robot Navigation,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 958-963, 1998.
[10]S. Thrun, M. Bennewitz, W. Burgard, A. B. Cremers, F. Dellaert, D. Fox, D. Hahnel, C. Rosenberg, N. Roy, J. Schulte, and D. Schulz, “MINERVA: A Tour-Guide Robot That Learns,” Lecture Notes in Computer Science, pp. 14-29, 1999.
[11]F. Dellaert, W. Burgard, D. Fox, and S. Thrun, “Using the CONDENSATION Algorithm for Robust, Vision-based Mobile Robot Localization,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 588-594, 1999.
[12]M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit, “FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem,” in Proceedings of the AAAI National Conference on Artificial Intelligence, pp. 593-598, 2002.
[13]S. King and C. Weiman, “Helpmate Autonomous Mobile Robot Navigation System,” in Proceedings of the SPIE Conference on Mobile Robots, vol. 2352, pp. 190-198, 1990.
[14]S. Koenig and R. Simmons, “Passive Distance Learning for Robot Navigation,” in Proceedings of the 13th International Conference on Machine Learning, pp. 266-274, 1996.
[15]M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit, “FastSLAM 2.0: An Improved Particle Filtering Algorithm for Simultaneous Localization and Mapping that Provably Converges,” in Proceedings of the International Joint Conference on Artificial Intelligence, 2003.
[16]J. J. Leonard and H. F. Durrant-Whyte, “Simultaneous Map Building and Localization for an Autonomous Mobile Robot,” in Proceedings of the IEEE/RSJ International Workshop on Intelligent Robots and Systems (IROS), pp. 1442-1447, 1991.
[17]M. Dissanayake, P. Newman, S. Clark, and H. F. Durrant-Whyte, “A Solution to the Simultaneous Localization and Map Building (SLAM) Problem,” IEEE Transactions on Robotics and Automation, vol. 17, no. 3, pp. 229-241, 2001.
[18]J. Folkesson, P. Jensfelt, and H. I. Christensen, “Vision SLAM in the Measurement Subspace,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 30-35, 2005.
[19]R. Sim, P. Elinas, M. Griffin, and J. J. Little, “Vision-based SLAM using the Rao-Blackwellised Particle Filter,” IJCAI Workshop on Reasoning with Uncertainty in Robotics, 2005.
[20] D. X. Nguyen, B. J. You, and S. R. Oh, “A Simple Landmark Model for Vision-based Simultaneous Localization and Mapping,” in SICE-ICASE International Joint Conference, pp. 5016-5021, 2006.
[21] T. Bailey and H. Durrant-Whyte, “Simultaneous Localization and Mapping (SLAM): Part I The Essential Algorithms,” IEEE Robotics and Automation Magazine, vol. 13, no. 2, pp. 99-110, 2006.
[22] J. Kim, K. J. Yoon, J. S. Kim, and I. Kweon, “Visual SLAM by Single-Camera Catadioptric Stereo,” in SICE-ICASE International Joint Conference, pp. 2005-2009, 2006.
[23] T. Lemaire and S. Lacroix, “Monocular-vision based SLAM using Line Segments,” IEEE International Conference on Robotics and Automation, pp. 2791-2796, 2007.
[24] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, “MonoSLAM: Real-Time Single Camera SLAM,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1-16, 2007.
[25] L. F. Gao, Y. X. Gai, and S. Fu, “Simultaneous Localization and Mapping for Autonomous Mobile Robots Using Binocular Stereo Vision System,” in Proceedings of the 2007 IEEE International Conference on Mechatronics and Automation, pp. 326-330, 2007.
[26] H. Liu, L. Gao, Y. Gai, and S. Fu, “Simultaneous Localization and Mapping for Mobile Robots Using Sonar Range Finder and Monocular Vision,” in Proceedings of the IEEE International Conference on Automation and Logistics, pp. 1602-1607, 2007.
[27] T. Lemaire and S. Lacroix, “SLAM with Panoramic Vision,” Journal of Field Robotics, vol. 24, pp. 91–111, 2007.
[28] T. Lemaire, C. Berger, I. K. Jung, and S. Lacroix, “Vision-Based SLAM: Stereo and Monocular Approaches,” International Journal of Computer Vision, pp. 343–364, 2007.
[29] P. Yang, W. Wu, M. Moniri, and C. C. Chibelushi, “A Sensor-based SLAM Algorithm for Camera Tracking in Virtual Studio,” International Journal of Automation and Computing, pp. 152-162, 2008.
[30] K. Celik, S. J. Chung, and A. Somani, “Mono-Vision Corner SLAM for Indoor Navigation,” in Proceedings of the IEEE International Conference on Electro/Information Technology, pp. 343-348, 2008.
[31] S. Kim and S. Y. Oh, “SLAM in Indoor Environments using Omni-directional Vertical and Horizontal Line Features,” Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31-43, 2008.
[32] Y. Matsumoto, M. Inaba, and H. Inoue, “Visual navigation using view-sequenced route representation,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 83-88, 1996.
[33]S. D. Jones, C. Andresen, and J. L. Crowley, “Appearance based process for visual navigation,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 551-557, 1997.
[34]T. Ohno, A. Ohya, and S. Yuta, “Autonomous navigation for mobile robots referring pre-recorded image sequence,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 672-679, 1996.
[35]J. Santos-Victor, G. Sandini, F. Curotto, and S. Garibaldi, “Divergent stereo for robot navigation: learning from bees,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 434-439, 1993.
[36]程雋, Applied Probability Theory (應用機率論), 文笙.
[37]R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2002.
[38]Z. Xiang and G. Joy, “Color Image Quantization by Agglomerative Clustering,” IEEE Computer Graphics and Applications, vol. 14, no. 3, pp. 44-48, 1994.
[39] A. K. Jain, M. N. Murty, and P. J. Flynn, “Data Clustering: A Review,” ACM Computing Surveys, vol. 31, no. 3, pp. 264-323, 1999.
[40]Z. Hu, F. Lamosa, and K. Uchimura, “A Complete UV-disparity Study for Stereovision Based 3D Driving Environment Analysis,” in Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, pp. 204-211, 2005.
[41]王俊凱, “Design and Implementation of a Multi-functional Real-time Visual Tracking System Based on Improved Adaptive Background Subtraction and Multiple Image Feature Matching,” Master's thesis, Department of Electrical Engineering, National Cheng Kung University, 2004.
[42]蘇助彬, “A Study on Vision-based Moving Object Classification and Human Motion Analysis,” Master's thesis, Department of Electrical Engineering, National Cheng Kung University, 2007.
[43]許益彰, “A Study on Visual Navigation for Indoor Autonomous Mobile Robots,” Master's thesis, Department of Electrical Engineering, National Cheng Kung University, 2008.