[1] H. Moravec, “Obstacle avoidance and navigation in the real world by a seeing robot rover,” Ph.D. dissertation, Stanford Univ., Stanford, CA, 1980.
[2] D. G. Lowe, “Object recognition from local scale-invariant features,” Proc. IEEE Int. Conf. on Computer Vision, vol. 2, pp. 1150–1157, 1999.
[3] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded up robust features,” Proc. European Conf. on Computer Vision, 2006.
[4] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison, “DTAM: Dense tracking and mapping in real-time,” Proc. IEEE Int. Conf. on Computer Vision, pp. 2320–2327, 2011.
[5] J. Engel, J. Stueckler, and D. Cremers, “Large-scale direct SLAM with stereo cameras,” Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 1935–1942, 2015.
[6] C. Forster, M. Pizzoli, and D. Scaramuzza, “SVO: Fast semi-direct monocular visual odometry,” Proc. IEEE Int. Conf. on Robotics and Automation, pp. 15–22, 2014.
[7] D. Nister, O. Naroditsky, and J. Bergen, “Visual odometry,” Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, vol. 1, pp. 652–659, 2004.
[8] M. Irani, B. Rousso, and S. Peleg, “Recovery of ego-motion using image stabilization,” Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 454–460, 1994.
[9] M. Maimone, Y. Cheng, and L. Matthies, “Two years of visual odometry on the Mars Exploration Rovers,” J. of Field Robotics, vol. 24, no. 3, pp. 169–186, 2007.
[10] H. Durrant-Whyte and T. Bailey, “Simultaneous localization and mapping (SLAM): Part I. The essential algorithms,” IEEE Robotics and Automation Magazine, vol. 13, no. 2, pp. 99–110, 2006.
[11] T. Bailey and H. Durrant-Whyte, “Simultaneous localisation and mapping (SLAM): Part II. State of the art,” IEEE Robotics and Automation Magazine, vol. 13, no. 3, pp. 108–117, 2006.
[12] B. Williams and I. Reid, “On combining visual SLAM and visual odometry,” Proc. IEEE Int. Conf. on Robotics and Automation, pp. 3494–3500, 2010.
[13] D. Scaramuzza and F. Fraundorfer, “Visual odometry: Part I. The first 30 years and fundamentals,” IEEE Robotics and Automation Magazine, vol. 18, no. 4, pp. 80–92, 2011.
[14] P. J. Besl and N. D. McKay, “A method for registration of 3-D shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.
[15] W. L. D. Lui, T. J. J. Tang, T. Drummond, and W. H. Li, “Robust egomotion estimation using ICP in inverse depth coordinates,” Proc. IEEE Int. Conf. on Robotics and Automation, pp. 1671–1678, 2012.
[16] D. Nister, O. Naroditsky, and J. Bergen, “Visual odometry,” Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, vol. 1, pp. 652–659, 2004.
[17] A. Comport, E. Malis, and P. Rives, “Accurate quadrifocal tracking for robust 3D visual odometry,” Proc. IEEE Int. Conf. on Robotics and Automation, pp. 40–45, 2007.
[18] L. Wei, C. Cappelle, Y. Ruichek, and F. Zann, “GPS and stereovision-based visual odometry: Application to urban scene mapping and intelligent vehicle localization,” Int. J. of Vehicular Technology, vol. 2011, 2011.
[19] D. Scaramuzza and F. Fraundorfer, “Visual odometry: Part II. Matching, robustness, optimization, and applications,” IEEE Robotics and Automation Magazine, vol. 19, no. 2, pp. 78–90, 2012.
[20] H. Badino, A. Yamamoto, and T. Kanade, “Visual odometry by multi-frame feature integration,” Proc. Workshop on Computer Vision for Autonomous Driving (collocated with ICCV’13), Sydney, Australia, 2013.
[21] A. Geiger, “The KITTI Vision Benchmark Suite,” 2016. [Online]. Available: http://www.cvlibs.net/datasets/kitti/ [Accessed: 13-Oct-2017].
[22] D. Nister, “An efficient solution to the five-point relative pose problem,” Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 195–202, 2003.
[23] D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. of Computer Vision, vol. 47, no. 1, pp. 7–42, 2002.
[24] H. Hirschmuller, “Accurate and efficient stereo processing by semi-global matching and mutual information,” Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, vol. 2, pp. 807–814, 2005.
[25] M. J. Milford and G. Wyeth, “Single camera vision-only SLAM on a suburban road network,” Proc. IEEE Int. Conf. on Robotics and Automation, pp. 3684–3689, 2008.
[26] D. Scaramuzza and R. Siegwart, “Appearance-guided monocular omnidirectional visual odometry for outdoor ground vehicles,” IEEE Transactions on Robotics (Special Issue on Visual SLAM), vol. 24, no. 5, pp. 1015–1026, Oct. 2008.
[27] I. Scollar, “Radial Distortion Correction,” 2005. [Online]. Available: http://www.uni-koeln.de/~al001/radcor_files/hs100 [Accessed: 13-Oct-2017].
[28] R. Klette, Concise Computer Vision: An Introduction into Theory and Algorithms. London: Springer-Verlag, 2014.
[29] S. D. Cochran and G. Medioni, “3-D surface description from binocular stereo,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 10, pp. 981–994, 1992.
[30] L. Juan and O. Gwon, “A comparison of SIFT, PCA-SIFT and SURF,” Int. J. of Image Processing, vol. 3, no. 4, pp. 143–152, 2009.
[31] H.-J. Chien, C.-Y. Chen, and C.-C. Chuang, “When to use what feature? SIFT, SURF, ORB, or A-KAZE features for monocular visual odometry,” Proc. Int. Conf. on Image and Vision Computing New Zealand, 2016.
[32] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, vol. 1, pp. 511–518, 2001.
[33] Tornadomeet, “SURF,” 2012. [Online]. Available: http://www.cnblogs.com/tornadomeet/archive/2012/08/17/2644903.html [Accessed: 13-Oct-2017].
[34] T. H. Huang, Establishing Ego-motion Estimation from Monocular UAV Image Sequences, M. Eng. thesis, Dept. of Computer Science and Information Engineering, National University of Kaohsiung, 2016.
[35] V. Lepetit, F. Moreno-Noguer, and P. Fua, “EPnP: An accurate O(n) solution to the PnP problem,” Int. J. of Computer Vision, vol. 81, no. 2, pp. 155–166, 2009.
[36] M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
[37] Q. Zhang and R. Pless, “Extrinsic calibration of a camera and laser range finder (improves camera calibration),” Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, vol. 3, pp. 2301–2306, 2004.
[38] C. Tomasi and T. Kanade, “Detection and tracking of point features,” Carnegie Mellon University, Tech. Rep. CMU-CS-91-132, 1991.
[39] B. K. P. Horn, “Closed-form solution of absolute orientation using unit quaternions,” J. of the Optical Society of America A, vol. 4, no. 4, pp. 629–642, 1987.
[40] J. Stuehmer, S. Gumhold, and D. Cremers, “Real-time dense geometry from a handheld camera,” Proc. DAGM Symposium on Pattern Recognition, 2010.
[41] J.-H. Zhang, Adaptive Feature Tracking Based on Epipolar Geometry and Disparity for Ego-Motion Estimation, M. Eng. thesis, Dept. of Computer Science and Information Engineering, National University of Kaohsiung, 2014.
[42] P. D. Sampson, “Fitting conic sections to ‘very scattered’ data: An iterative refinement of the Bookstein algorithm,” Computer Graphics and Image Processing, vol. 18, no. 1, pp. 97–108, 1982.
[43] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. Cambridge, U.K.: Cambridge University Press, 2003.
[44] K. Levenberg, “A method for the solution of certain non-linear problems in least squares,” Quarterly of Applied Mathematics, vol. 2, pp. 164–168, 1944.
[45] C. Engels, H. Stewenius, and D. Nister, “Bundle adjustment rules,” Proc. Photogrammetric Computer Vision (PCV), 2006.