[1] J. Carmigniani and B. Furht, "Augmented reality: an overview," Handbook of Augmented Reality, Chapter 1, pp. 3-46, 2011.
[2] C. B. Owen, X. Fan, and P. Middlin, "What is the best fiducial?," The First IEEE International Augmented Reality Toolkit Workshop, 2002.
[3] H. Kato and K. T. Tan, "Pervasive 2D barcodes for camera phone applications," IEEE Pervasive Computing, Vol. 6, No. 4, pp. 76-85, October 2007.
[4] I. Sutherland, "A head-mounted three-dimensional display," AFIPS Fall Joint Computer Conference, Washington, DC, pp. 757-764, 1968.
[5] M. Haller, M. Billinghurst, and B. Thomas, Emerging Technologies of Augmented Reality: Interfaces and Design, Idea Group Publishing, USA, Chapter 13, p. 262, 2006.
[6] P. Milgram, H. Takemura, A. Utsumi, and F. Kishino, "Augmented reality: a class of displays on the reality-virtuality continuum," SPIE Vol. 2351, Telemanipulator and Telepresence Technologies, 1994.
[7] R. Azuma, "A survey of augmented reality," Presence: Teleoperators and Virtual Environments, Vol. 6, No. 4, pp. 355-385, 1997.
[8] Y.-B. Li, S.-P. Kang, Z.-H. Qiao, and Q. Zhu, "Development actuality and application of registration technology in augmented reality," International Symposium on Computational Intelligence and Design, pp. 69-74, 2008.
[9] V. Lepetit and P. Fua, "Monocular model-based 3D tracking of rigid objects: a survey," Foundations and Trends in Computer Graphics and Vision, Vol. 1, No. 1, pp. 1-89, 2005.
[10] W. A. Hoff, K. Nguyen, and T. Lyon, "Computer vision-based registration techniques for augmented reality," Intelligent Robots and Control Systems XV, Intelligent Control Systems and Advanced Manufacturing, pp. 538-548, November 1996.
[11] A. State, G. Hirota, D. Chen, W. Garrett, and M. Livingston, "Superior augmented reality registration by integrating landmark tracking and magnetic tracking," Computer Graphics (SIGGRAPH Proceedings), pp. 429-438, July 1996.
[12] Y. Cho, W. Lee, and U. Neumann, "A multi-ring color fiducial system and intensity-invariant detection method for scalable fiducial-tracking augmented reality," International Workshop on Augmented Reality, 1998.
[13] H. Kato and M. Billinghurst, "Marker tracking and HMD calibration for a video-based augmented reality conferencing system," IEEE and ACM International Workshop on Augmented Reality, October 1999.
[14] H. Kato, M. Billinghurst, I. Poupyrev, K. Imamoto, and K. Tachibana, "Virtual object manipulation on a table-top AR environment," International Symposium on Augmented Reality, pp. 111-119, 2000.
[15] D. Koller, G. Klinker, E. Rose, D. Breen, R. Whitaker, and M. Tuceryan, "Real-time vision-based camera tracking for augmented reality applications," ACM Symposium on Virtual Reality Software and Technology (Lausanne, Switzerland), pp. 87-94, September 1997.
[16] J. Rekimoto, "Matrix: a realtime object identification and registration method for augmented reality," Asia Pacific Computer Human Interaction, 1998.
[17] V. Teichrieb, J. P. S. d. M. Lima, and E. L. Apolinario, "A survey of online monocular markerless augmented reality," International Journal of Modeling and Simulation for the Petroleum Industry, Vol. 1, No. 1, pp. 1-7, 2007.
[18] C. Harris, Tracking with Rigid Objects, MIT Press, 1992.
[19] H. Li, P. Roivainen, and R. Forchheimer, "3-D motion estimation in model-based facial image coding," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, pp. 545-555, June 1993.
[20] S. Basu, I. Essa, and A. Pentland, "Motion regularization for model-based head tracking," 13th International Conference on Pattern Recognition, pp. 611-616, 1996.
[21] F. Jurie and M. Dhome, "A simple and efficient template matching algorithm," International Conference on Computer Vision (Vancouver, Canada), July 2001.
[22] F. Jurie and M. Dhome, "Hyperplane approximation for template matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, pp. 996-1000, July 2002.
[23] V. Lepetit, P. Lagger, and P. Fua, "Randomized trees for real-time keypoint recognition," International Conference on Computer Vision and Pattern Recognition, pp. 775-781, 2005.
[24] G. D. Hager and P. N. Belhumeur, "Efficient region tracking with parametric models of geometry and illumination," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 10, pp. 1025-1039, 1998.
[25] G. Simon, A. W. Fitzgibbon, and A. Zisserman, "Markerless tracking using planar structures in the scene," IEEE and ACM International Symposium on Augmented Reality, pp. 120-128, 2000.
[26] G. Simon and M. O. Berger, "Pose estimation for planar structures," IEEE Computer Graphics and Applications, Vol. 22, No. 6, pp. 46-53, 2002.
[27] R. I. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd edition, Cambridge University Press, ISBN: 0521540518, 2003.
[28] R. Sukthankar, R. Stockton, and M. Mullin, "Smarter presentations: exploiting homography in camera-projector systems," International Conference on Computer Vision (ICCV 2001), 2001.
[29] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2007), pp. 225-234, 2007.
[30] A. J. Davison and N. Kita, "3D simultaneous localisation and map-building using active vision for a robot moving on undulating terrain," IEEE Conference on Computer Vision and Pattern Recognition, Kauai, 2001.
[31] G. Klein and D. Murray, "Parallel tracking and mapping on a camera phone," 8th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2009), pp. 83-86, 2009.
[32] D. G. Lowe, "Object recognition from local scale-invariant features," International Conference on Computer Vision, Corfu, Greece, pp. 1150-1157, September 1999.
[33] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110, 2004.
[34] Y. Ke and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," Conference on Computer Vision and Pattern Recognition, pp. 511-517, 2004.
[35] H. Bay, A. Ess, T. Tuytelaars, and L. van Gool, "Speeded-up robust features (SURF)," Computer Vision and Image Understanding, Vol. 110, No. 3, pp. 346-359, 2008.
[36] I. Skrypnyk and D. G. Lowe, "Scene modelling, recognition and tracking with invariant image features," International Symposium on Mixed and Augmented Reality (ISMAR 2004), pp. 110-119, November 2004.
[37] T. Guan, L. Duan, J. Wu, Y. Chen, and X. Zhang, "Real-time camera pose estimation for wide-area augmented reality applications," IEEE Computer Graphics and Applications, Vol. 31, No. 3, pp. 56-68, 2011.
[38] J. Herling and W. Broll, "An adaptive training-free tracker for mobile phones," 17th ACM Symposium on Virtual Reality Software and Technology (VRST 2010), pp. 35-42, 2010.
[39] J. Herling and W. Broll, "Markerless tracking for augmented reality," Handbook of Augmented Reality, Chapter 11, pp. 255-272, 2011.
[40] T. Lee and T. Hollerer, "Multithreaded hybrid feature tracking for markerless augmented reality," IEEE Transactions on Visualization and Computer Graphics, Vol. 15, No. 3, pp. 355-368, 2009.
[41] K. Mikolajczyk and C. Schmid, "A performance evaluation of local descriptors," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 10, pp. 1615-1630, 2005.
[42] L. Juan and O. Gwun, "A comparison of SIFT, PCA-SIFT and SURF," International Journal of Image Processing, Vol. 3, pp. 143-152, 2009.
[43] Y. Sato, K. Müller, A. Smolic, B. Fröhlich, and T. Wiegand, "SIFT implementation and optimization for general-purpose GPU," International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, pp. 317-322, February 2007.
[44] S. N. Sinha, J. Frahm, M. Pollefeys, and Y. Genc, "GPU-based video feature tracking and matching," Workshop on Edge Computing Using New Commodity Architectures (EDGE), Vol. 12, pp. 1-15, May 2006.
[45] M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, Vol. 24, No. 6, pp. 381-395, 1981.
[46] T.-W. Kan, C.-H. Teng, and W.-S. Chou, "Applying QR code in augmented reality applications," International Conference on Virtual Reality Continuum and Its Applications in Industry, Yokohama, Japan, pp. 253-257, December 2009.
[47] ARToolKit. Available at: http://www.hitl.washington.edu/artoolkit/
[48] QR Code Generator. Available at: http://qrcode.kaywa.com/
[49] ZXing. Available at: http://code.google.com/p/zxing/
[50] ARTag. Available at: http://www.artag.net/
[51] NyARToolkit. Available at: http://nyatla.jp/nyartoolkit/wp/?page_id=198/
[52] CUDA. Available at: https://developer.nvidia.com/
[53] Lifesquare. Available at: https://www.lifesquare.com/
[54] ScanMed. Available at: https://scanmedqr.com/
[55] T.-Y. Liu, T.-H. Tan, and Y.-L. Chu, "2D barcode and augmented reality supported English learning system," 6th IEEE/ACIS International Conference on Computer and Information Science, 2007.
[56] T. Nikolaos and T. Kiyoshi, "QR code calibration for mobile augmented reality applications: linking a unique physical location to the digital world," SIGGRAPH 2010.
[57] J.-T. Wang, C.-N. Shyi, T.-W. Hou, and C. P. Fong, "Design and implementation of augmented reality system collaborating with QR code," 2010 International Computer Symposium, pp. 414-418, 2010.
[58] A. Buchau, W. M. Rucker, U. Wossner, and M. Becker, "Augmented reality in teaching of electrodynamics," International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 28, No. 4, pp. 948-963, 2009.