[1] C. A. Matamoros, "Unique 3D model of Notre Dame cathedral could help reconstruction efforts," Euronews, 2019. Available: https://www.euronews.com/2019/04/18/unique-3d-model-of-notre-dame-cathedral-could-help-reconstruction-efforts
[2] J. Duckworth, "Assassin's Creed Unity Could Help Rebuild Notre Dame," Game Rant, 2019. Available: https://gamerant.com/assassins-creed-unity-notre-dame-cathedral/
[3] S. Zheng, Y. Zhou, R. Huang, L. Zhou, X. Xu, and C. Wang, "A method of 3D measurement and reconstruction for cultural relics in museums," ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp. 145-149, 2012.
[4] E. Auvinet, J. Meunier, J. Ong, G. Durr, M. Gilca, and I. Brunette, "Methodology for the construction and comparison of 3D models of the human cornea," in 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2012, pp. 5302-5305.
[5] G. Farnebäck, "Two-frame motion estimation based on polynomial expansion," in Scandinavian Conference on Image Analysis, Springer, 2003, pp. 363-370.
[6] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, "FlowNet 2.0: Evolution of optical flow estimation with deep networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2462-2470.
[7] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, and C. Hazirbas, "FlowNet: Learning optical flow with convolutional networks," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 2758-2766.
[8] T.-W. Hui, X. Tang, and C. C. Loy, "LiteFlowNet: A lightweight convolutional neural network for optical flow estimation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8981-8989.
[9] F. Remondino and S. El-Hakim, "Image-based 3D modelling: A review," The Photogrammetric Record, vol. 21, no. 115, pp. 269-291, 2006.
[10] S. Foix, G. Alenya, and C. Torras, "Lock-in time-of-flight (ToF) cameras: A survey," IEEE Sensors Journal, vol. 11, no. 9, pp. 1917-1926, 2011.
[11] D. Scharstein and R. Szeliski, "High-accuracy stereo depth maps using structured light," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003, vol. 1.
[12] J. G. D. França, M. A. Gazziro, A. N. Ide, and J. H. Saito, "A 3D scanning system based on laser triangulation and variable field of view," in IEEE International Conference on Image Processing, 2005, vol. 1, pp. I-425.
[13] S. Schuon, C. Theobalt, J. Davis, and S. Thrun, "High-quality scanning using time-of-flight depth superresolution," in 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2008, pp. 1-7.
[14] J. Geng, "Structured-light 3D surface imaging: A tutorial," Advances in Optics and Photonics, vol. 3, no. 2, pp. 128-160, 2011.
[15] M. F. Costa, "Surface inspection by an optical triangulation method," Optical Engineering, vol. 35, 1996.
[16] J. Aloimonos, "Shape from texture," Biological Cybernetics, vol. 58, no. 5, pp. 345-360, 1988.
[17] R. Zhang, P.-S. Tsai, J. E. Cryer, and M. Shah, "Shape-from-shading: A survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 8, pp. 690-706, 1999.
[18] K. Konolige, "Small vision systems: Hardware and implementation," in Robotics Research, Springer, 1998, pp. 203-212.
[19] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[20] E. Rublee, V. Rabaud, K. Konolige, and G. R. Bradski, "ORB: An efficient alternative to SIFT or SURF," in IEEE International Conference on Computer Vision (ICCV), 2011.
[21] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded Up Robust Features," in European Conference on Computer Vision, Springer, 2006, pp. 404-417.
[22] Z. Wu, S. Song, A. Khosla, F. Yu, and L. Zhang, "3D ShapeNets: A deep representation for volumetric shapes," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1912-1920.
[23] B. K. Horn and B. G. Schunck, "Determining optical flow," Artificial Intelligence, vol. 17, no. 1-3, pp. 185-203, 1981.
[24] I.-C. Wang, "Stereo Vision System with Non-parallel Optic Axes for Small Object Contour Detection," Master's thesis, Institute of Civil Aviation, NCKU, 2018.
[25] L.-H. Tang, "A Dense Matching Method with Feature Based Descriptors for Non-Parallel Optical Axes Images," Master's thesis, Institute of Civil Aviation, NCKU, 2018.
[26] J. J. Gibson, The Perception of the Visual World. Cambridge: The Riverside Press, 1950.
[27] G. Farnebäck, "Polynomial expansion for orientation and motion estimation," Linköping University Electronic Press, 2002.
[28] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.
[29] P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid, "DeepFlow: Large displacement optical flow with deep matching," in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 1385-1392.