[1]Angeli, A., Doncieux, S., Meyer, J. A., & Filliat, D. (2008). Real-time visual loop-closure detection. In International Conference on Robotics and Automation (pp. 1842-1847).
[2]Angeli, A., Filliat, D., Doncieux, S., & Meyer, J. A. (2008). Fast and incremental method for loop-closure detection using bags of visual words. IEEE Transactions on Robotics, 24(5), 1027-1037.
[3]Armeni, I., Sax, S., Zamir, A. R., & Savarese, S. (2017). Joint 2D-3D-semantic data for indoor scene understanding. arXiv preprint arXiv:1702.01105.
[4]Babacan, K., Chen, L., & Sohn, G. (2017). Semantic segmentation of indoor point clouds using convolutional neural network. ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences, 4.
[5]Braun, A., Tuttas, S., Borrmann, A., & Stilla, U. (2015). A concept for automated construction progress monitoring using BIM-based geometric constraints and photogrammetric point clouds. Journal of Information Technology in Construction (ITcon), 20(5), 68-79.
[6]Brenneke, C., Wulf, O., & Wagner, B. (2003). Using 3D laser range data for SLAM in outdoor environments. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), 188-193.
[7]Chang, K. T., Chang, J. R., & Liu, J. K. (2005). Detection of pavement distresses using 3D laser scanning technology. In Computing in Civil Engineering (2005), 1-11.
[8]Chawla, N. V., Japkowicz, N., & Kotcz, A. (2004). Special issue on learning from imbalanced data sets. ACM Sigkdd Explorations Newsletter, 6(1), 1-6.
[9]Cole, D. M., & Newman, P. M. (2006). Using laser range data for 3D SLAM in outdoor environments. In Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on (pp. 1556-1563). IEEE.
[10]Cole, L., Austin, D., & Cole, L. (2004). Visual object recognition using template matching. In Australian conference on robotics and automation.
[11]Czerniawski, T., Sankaran, B., Nahangi, M., Haas, C., & Leite, F. (2018). 6D DBSCAN-based segmentation of building point clouds for planar object classification. Automation in Construction, 88, 44-58.
[12]Dewez, T. J., Plat, E., Degas, M., Richard, T., Pannet, P., Thuon, Y., ... & Dian, G. (2016, September). Handheld mobile laser scanners Zeb-1 and Zeb-Revo to map an underground quarry and its above-ground surroundings. In Virtual Geosciences Conference (VGC2016), 22-23.
[13]Freund, Y., & Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences, 55(1), 119-139.
[14]Haala, N., & Brenner, C. (1999). Extraction of buildings and trees in urban environments. ISPRS Journal of Photogrammetry and Remote Sensing, 54(2-3), 130-137.
[15]Hamledari, H., McCabe, B., & Davari, S. (2017). Automated computer vision-based detection of components of under-construction indoor partitions. Automation in Construction, 74, 78-94.
[16]Han, K. K., & Golparvar-Fard, M. (2017). Potential of big visual data and building information modeling for construction performance analytics: An exploratory study. Automation in Construction, 73, 184-198.
[17]Kolar, Z., Chen, H., & Luo, X. (2018). Transfer learning and deep convolutional neural networks for safety guardrail detection in 2D images. Automation in Construction, 89, 58-70.
[18]Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, 1097-1105.
[19]Labbe, M., & Michaud, F. (2013). Appearance-based loop closure detection for online large-scale and long-term operation. IEEE Transactions on Robotics, 29(3), 734-745.
[20]Lin, F., Liang, D., & Chen, E. (2011). Financial ratio selection for business crisis prediction. Expert Systems with Applications, 38(12), 15094-15102.
[21]Liu, Y., & Zhang, H. (2012, May). Indexing visual features: Real-time loop closure detection using a tree structure. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation (ICRA), 3613-3618.
[22]Navon, R. (2007). Research in automated measurement of project performance indicators. Automation in Construction, 16(2), 176-188.
[23]Nocerino, E., Menna, F., Remondino, F., Toschi, I., & Rodríguez-Gonzálvez, P. (2017). Investigation of indoor and outdoor performance of two portable mobile mapping systems. In Videometrics, Range Imaging, and Applications XIV (Vol. 10332, p. 103320I). International Society for Optics and Photonics.
[24]Pan, S. J., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10), 1345-1359.
[25]Patil, A. K., Holi, P., Lee, S. K., & Chai, Y. H. (2017). An adaptive approach for the reconstruction and modeling of as-built 3D pipelines from point clouds. Automation in Construction, 75, 65-78.
[26]Png, L. C. (2013). Morphological Shared-Weight Neural Network for Face Recognition. LAP LAMBERT Academic Publishing.
[27]Poh, C. Q., Ubeynarayana, C. U., & Goh, Y. M. (2018). Safety leading indicators for construction sites: A machine learning approach. Automation in Construction.
[28]Qi, C. R., Su, H., Mo, K., & Guibas, L. J. (2017). PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1(2), 4.
[29]Qi, C. R., Yi, L., Su, H., & Guibas, L. J. (2017). PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems (pp. 5105-5114).
[30]Szeliski, R. (2012). Computer Vision: Algorithms and Applications (Xing Junliang, Trans.). Beijing: Tsinghua University Press.
[31]Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., & Berg, A. C. (2015). ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3), 211-252.
[32]Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., & Berg, A. C. (2015). ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3), 211-252.
[33]Soodamani, R., & Liu, Z. Q. (2000). GA-based learning for a model-based object recognition system. International Journal of Approximate Reasoning, 23(2), 85-109.
[34]Yang, J., Shi, Z., & Wu, Z. (2016). Vision-based action recognition of construction workers using dense trajectories. Advanced Engineering Informatics, 30(3), 327-336.
[35]Zhang, C., & Arditi, D. (2013). Automated progress control using laser scanning technology. Automation in construction, 36, 108-116.
Chinese references:
[36]王斌弘 (2006). Spatial point cloud morphology-oriented supervision and management of building construction. Doctoral dissertation, Department of Architecture, National Taiwan University of Science and Technology, 1-45.
[37]何海群 (2017). TensorFlow Quick Start from Zero. Beijing: Publishing House of Electronics Industry.
[38]李其真 (2014). Simultaneous localization and mapping based on database images and RGB-D camera images. Master's thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, 1-61.
[39]周宏達 (2001). Least-squares-based optimal fitting between parametric models and images. Master's thesis, Graduate Institute of Surveying Engineering, National Cheng Kung University, 12-25.
[40]林志交 (2002). Application of genetic algorithms to model-image fitting computation. Master's thesis, Department of Surveying Engineering, National Cheng Kung University, 1-119.