References
[1] WAYMO. [Online]. Available: https://waymo.com/
[2] NavLab11. [Online]. Available: http://www.cs.cmu.edu/~tyata/Project/NavLab11.html
[3] Google's Autonomous Prius. [Online]. Available: https://www.wired.com/2012/03/googles-autonomous-prius-drives-blind-man-to-taco-bell/
[4] Self-Driving "Firefly" Pod Cars. [Online]. Available: https://www.motor1.com/news/148354/waymo-self-driving-cars-retired/
[5] Society of Automotive Engineers Automation Levels. [Online]. Available: https://www.sae.org/news/press-room/2018/12/sae-international-releases-updated-visual-chart-for-its-%E2%80%9Clevels-of-driving-automation%E2%80%9D-standard-for-self-driving-vehicles
[6] Airborne Laser. [Online]. Available: http://www.grounddatasolutions.com/airbornelaser.html
[7] Spatial variability. [Online]. Available: https://www.researchgate.net/figure/Spatial-variability-of-A-gayanus-cover-mapped-using-airborne-LiDAR-showing-the-advancing_fig6_275517307
[8] Terrestrial Lidar Scanning Research. [Online]. Available: http://sites.bu.edu/lidar/
[9] Terrestrial Lidar Scanning. [Online]. Available: http://www.ollerhead.ca/technology/terrestrial-lidar-scanning/
[10] Velodyne Lidar. [Online]. Available: https://velodynelidar.com/hdl-64e.html
[11] Point Cloud Library with Velodyne LiDAR. [Online]. Available: http://unanancyowen.com/en/pcl-with-velodyne/
[12] Z. Chen, J. Zhang, and D. Tao, "Progressive LiDAR Adaptation for Road Detection," in IEEE/CAA Journal of Automatica Sinica, vol. 6, pp. 693-702, 2019.
[13] L. Caltagirone, M. Bellone, L. Svensson, and M. Wahde, "Lidar-camera fusion for road detection using fully convolutional neural networks," arXiv preprint arXiv:1809.07941, 2018.
[14] X. Han, J. Lu, C. Zhao, S. You, and H. Li, "Semi-supervised and weakly-supervised road detection based on generative adversarial networks," in IEEE Signal Processing Letters, vol. 25, pp. 551-555, 2018.
[15] B. Douillard, J. Underwood, N. Kuntz, V. Vlaskine, A. Quadros, P. Morton, and A. Frenkel, "On the Segmentation of 3D LIDAR Point Clouds," in 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, May 9-13, 2011, pp. 2798-2805.
[16] M. Himmelsbach, F. V. Hundelshausen, and H.-J. Wuensche, "Fast segmentation of 3D point clouds for ground vehicles," in 2010 IEEE Intelligent Vehicles Symposium, San Diego, CA, USA, Jun. 21-24, 2010, pp. 560-565.
[17] M. Himmelsbach and H.-J. Wuensche, "Tracking and classification of arbitrary objects with bottom-up/top-down detection," in 2012 IEEE Intelligent Vehicles Symposium, Spain, Jun. 3-7, 2012, pp. 577-582.
[18] J. Cheng, Z. Xiang, T. Cao, and J. Liu, "Robust vehicle detection using 3D Lidar under complex urban environment," in 2014 IEEE International Conference on Robotics and Automation, Hong Kong, China, May 2014, pp. 691-696.
[19] M. Himmelsbach, T. Luettel, and H.-J. Wuensche, "Real-time object classification in 3D point clouds using point feature histograms," in 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, Oct. 10-15, 2009, pp. 994-1000.
[20] K. Kidono, T. Miyasaka, A. Watanabe, T. Naito, and J. Miura, "Pedestrian recognition using high-definition LIDAR," in 2011 IEEE Intelligent Vehicles Symposium, Baden-Baden, Germany, Jun. 5-9, 2011, pp. 405-410.
[21] A. Teichman, J. Levinson, and S. Thrun, "Towards 3D object recognition via classification of arbitrary object tracks," in 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, May 9-13, 2011, pp. 4034-4041.
[22] D. Held, J. Levinson, S. Thrun, and S. Savarese, "Combining 3D shape, color, and motion for robust anytime tracking," in 2014 Robotics: Science and Systems Conference, Berkeley, CA, USA, Jul. 12-16, 2014.
[23] An intuitive guide to Convolutional Neural Networks. [Online]. Available: https://www.freecodecamp.org/news/an-intuitive-guide-to-convolutional-neural-networks-260c2de0a050/
[24] Deep Learning learns layers of features. [Online]. Available: https://github.com/NirViaje/DeepLearningRobotics/blob/master/NeuronTalk.md
[25] Convolutional neural network. [Online]. Available: https://en.wikipedia.org/wiki/Convolutional_neural_network
[26] Sigmoid function. [Online]. Available: https://en.wikipedia.org/wiki/Sigmoid_function
[27] Rectifier (neural networks). [Online]. Available: https://en.wikipedia.org/wiki/Rectifier_(neural_networks)
[28] Regression Loss Functions. [Online]. Available: https://heartbeat.fritz.ai/5-regression-loss-functions-all-machine-learners-should-know-4fb140e9d4b0
[29] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, Jun. 23-28, 2014, pp. 580-587.
[30] M. V., V. V. R., and N. A., "A Deep Learning RCNN Approach for Vehicle Recognition in Traffic Surveillance System," in 2019 International Conference on Communication and Signal Processing, Chennai, India, Apr. 4-6, 2019.
[31] X. Ren, S. Du, and Y. Zheng, "Parallel RCNN: A deep learning method for people detection using RGB-D images," in 2017 10th International Congress on Image and Signal Processing, Shanghai, China, Oct. 14-16, 2017.
[32] M. Braun, Q. Rao, Y. Wang, and F. Flohr, "Pose-RCNN: Joint object detection and pose estimation using 3D object proposals," in 2016 IEEE 19th International Conference on Intelligent Transportation Systems, Nov. 1-4, 2016, pp. 1546-1551.
[33] S. Shi, X. Wang, and H. Li, "PointRCNN: 3D object proposal generation and detection from point cloud," arXiv preprint arXiv:1812.04244, 2018.
[34] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, pp. 1137-1149, 2017.
[35] K. He, X. Zhang, S. Ren, and J. Sun, "Spatial pyramid pooling in deep convolutional networks for visual recognition," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, pp. 1904-1916, 2015.
[36] R. Zhang and Y. Yang, "Merging recovery feature network to faster RCNN for low-resolution images detection," in 2017 IEEE Global Conference on Signal and Information Processing, Montreal, QC, Canada, Nov. 14-16, 2017.
[37] C. Fu, W. Si, Q. Lu, C. Shi, Q. Gao, H. Wang, and C. Wang, "Study of a Detection and Recognition Algorithm for High-Voltage Switch Cabinet Based on Deep Learning with an Improved Faster-RCNN," in 2018 International Conference on Engineering Simulation and Intelligent Control, Changsha, China, Aug. 10-11, 2018.
[38] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," arXiv preprint arXiv:1506.02640, 2015.
[39] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," arXiv preprint arXiv:1612.08242, 2016.
[40] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[41] T.-Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie, "Feature pyramid networks for object detection," arXiv preprint arXiv:1612.03144, 2016.
[42] YOLO: Real-Time Object Detection. [Online]. Available: https://pjreddie.com/darknet/yolo/
[43] F. Yang, W. Choi, and Y. Lin, "Exploit All the Layers: Fast and Accurate CNN Object Detector with Scale Dependent Pooling and Cascaded Rejection Classifiers," in 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, Jun. 27-30, 2016.
[44] J. Ren, X. Chen, J. Liu, W. Sun, J. Pang, Q. Yan, Y.-W. Tai, and L. Xu, "Accurate Single Stage Detector Using Recurrent Rolling Convolution," in 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, Jul. 21-26, 2017.
[45] M. Liang, B. Yang, Y. Chen, R. Hu, and R. Urtasun, "Multi-task multi-sensor fusion for 3D object detection," in 2019 IEEE Conference on Computer Vision and Pattern Recognition, 2019.
[46] X. Du, M. H. Ang Jr., and D. Rus, "Car Detection for Autonomous Vehicle: LIDAR and Vision Fusion Approach Through Deep Learning Framework," in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vancouver, BC, Canada, Sep. 24-28, 2017.
[47] KITTI Dataset. [Online]. Available: http://www.cvlibs.net/datasets/kitti/
[48] Pascal VOC Dataset. [Online]. Available: http://host.robots.ox.ac.uk/pascal/VOC/
[49] yolov3-tiny.cfg. [Online]. Available: https://github.com/pjreddie/darknet/blob/master/cfg/yolov3-tiny.cfg
[50] HDL-64E. [Online]. Available: https://velodynelidar.com/hdl-64e.html
[51] Object Detection Evaluation 2012. [Online]. Available: http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=2d
[52] M. Oeljeklaus, F. Hoffmann, and T. Bertram, "A Fast Multi-Task CNN for Spatial Understanding of Traffic Scenes," in 2018 IEEE 21st International Conference on Intelligent Transportation Systems, Maui, HI, USA, Nov. 4-7, 2018.
[53] M. C. Chang, Z. G. Pan, and J. L. Chen, "Hardware Accelerator for Boosting Convolution Computation in Image Classification Applications," in 2017 IEEE Global Conference on Consumer Electronics, Nagoya, Japan, Oct. 24-27, 2017.
[54] H. M. Chang and M. H. Sunwoo, "An Efficient Programmable 2-D Convolver Chip," in 1998 IEEE International Symposium on Circuits and Systems, CA, USA, May 31-Jun. 3, 1998, pp. 429-432.
[55] K. Benkrid and S. Belkacemi, "Design and Implementation of a 2D Convolution Core for Video Applications on FPGAs," in Third International Workshop on Digital and Computational Video, FL, USA, Nov. 15, 2002, pp. 85-92.
[56] Y.-H. Chen, T. Krishna, J. S. Emer, and V. Sze, "Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks," in IEEE Journal of Solid-State Circuits, vol. 52, pp. 127-138, 2017.
[57] S. Li, O. Ning, and Z. Wang, "Accelerator Design for Convolutional Neural Network with Vertical Data Streaming," in 2018 IEEE Asia Pacific Conference on Circuits and Systems, Chengdu, China, Oct. 26-30, 2018.
[58] VGG16 – Convolutional Network for Classification and Detection. [Online]. Available: https://neurohive.io/en/popular-networks/vgg16/