[1] K. Potdar, C. D. Pai, and S. Akolkar, “A convolutional neural network based live object recognition system as blind aid,” arXiv preprint arXiv:1811.10399, 2018.
[2] S. Saha, “A comprehensive guide to convolutional neural networks—the ELI5 way,” 2018.
[3] M. Yani, M. B. I. S. Si., and M. C. S. S.T., “Application of transfer learning using convolutional neural network method for early detection of Terry’s nail,” Journal of Physics: Conference Series, vol. 1201, no. 1, p. 012052, May 2019. [Online]. Available: https://dx.doi.org/10.1088/1742-6596/1201/1/012052
[4] H. Bui, “From convolutional neural network to variational auto encoder,” 2020.
[5] K. Courses, “Overfitting and underfitting.”
[6] COCO. [Online]. Available: https://cocodataset.org/#home
[7] P. Babu and E. Parthasarathy, “Hardware acceleration for object detection using YOLOv4 algorithm on Xilinx Zynq platform,” Journal of Real-Time Image Processing, vol. 19, no. 5, pp. 931–940, 2022.
[8] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779–788.
[9] X. Ding, X. Zhang, N. Ma, J. Han, G. Ding, and J. Sun, “RepVGG: Making VGG-style ConvNets great again,” 2021.
[10] C.-Y. Wang, H.-Y. M. Liao, Y.-H. Wu, P.-Y. Chen, J.-W. Hsieh, and I.-H. Yeh, “CSPNet: A new backbone that can enhance learning capability of CNN,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020, pp. 390–391.
[11] Xilinx, “Zynq UltraScale+ MPSoC ZCU102 evaluation kit.” [Online]. Available: https://www.xilinx.com/products/boards-and-kits/ek-u1-zcu102-g.html
[12] Xilinx, “DPUCZDX8G for Zynq UltraScale+ MPSoCs product guide (PG338).”
[13] Xilinx, “Vitis AI user guide (UG1414).”
[14] J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7263–7271.
[15] T. M. G. Jocher, K. Nishimura, and R. Vilariño, “YOLOv5,” 2020. [Online]. Available: https://github.com/ultralytics/yolov5
[16] R. Girshick, “Fast R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1440–1448.
[17] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” Advances in Neural Information Processing Systems, vol. 28, 2015.
[18] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2961–2969.
[19] C. Wang and Z. Luo, “A review of the optimal design of neural networks based on FPGA,” Applied Sciences, vol. 12, no. 21, p. 10771, 2022.
[20] J. Mendez, K. Bierzynski, M. Cuéllar, and D. P. Morales, “Edge intelligence: Concepts, architectures, applications, and future directions,” ACM Transactions on Embedded Computing Systems (TECS), vol. 21, no. 5, pp. 1–41, 2022.
[21] Y. Tang, R. Dai, and Y. Xie, “Optimization of energy efficiency for FPGA-based convolutional neural networks accelerator,” Journal of Physics: Conference Series, vol. 1487, p. 012028, Mar. 2020.
[22] F. Muslim, L. Ma, M. Roozmeh, and L. Lavagno, “Efficient FPGA implementation of OpenCL high-performance computing applications via high-level synthesis,” IEEE Access, vol. PP, pp. 1–1, Feb. 2017.
[23] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580–587.
[24] J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018.
[25] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: Optimal speed and accuracy of object detection,” arXiv preprint arXiv:2004.10934, 2020.
[26] C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors,” 2022.
[27] C.-Y. Wang, H.-Y. M. Liao, and I.-H. Yeh, “Designing network design strategies through gradient path analysis,” 2022.
[28] Y. Lee, J.-W. Hwang, S. Lee, Y. Bae, and J. Park, “An energy and GPU-computation efficient backbone network for real-time object detection,” 2019.
[29] TensorFlow. [Online]. Available: https://www.tensorflow.org/?hl=zh-tw
[30] PyTorch. [Online]. Available: https://pytorch.org/
[31] ONNX. [Online]. Available: https://onnx.ai/
[32] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” 2015.
[33] C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, “Scaled-YOLOv4: Scaling cross stage partial network,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 13029–13038.
[34] H. Xu and Y. Wang, “Target detection network for underwater image based on adaptive anchor frame and re-parameterization,” in Journal of Physics: Conference Series, vol. 2363, no. 1. IOP Publishing, 2022, p. 012012.
[35] S. Zhang, J. Cao, Q. Zhang, Q. Zhang, Y. Zhang, and Y. Wang, “An FPGA-based reconfigurable CNN accelerator for YOLO,” in 2020 IEEE 3rd International Conference on Electronics Technology (ICET). IEEE, 2020, pp. 74–78.
[36] P. Li and C. Che, “Mapping YOLOv4-tiny on FPGA-based DNN accelerator by using dynamic fixed-point method,” in 2021 12th International Symposium on Parallel Architectures, Algorithms and Programming (PAAP), 2021, pp. 125–129.
[37] S. Oh, J.-H. You, and Y.-K. Kim, “Implementation of compressed YOLOv3-tiny on FPGA-SoC,” in 2020 IEEE International Conference on Consumer Electronics - Asia (ICCE-Asia), 2020, pp. 1–4.
[38] VOC2012. [Online]. Available: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/
[39] Y. He, X. Zhang, and J. Sun, “Channel pruning for accelerating very deep neural networks,” 2017.
[40] Vitis-AI. [Online]. Available: https://github.com/Xilinx/Vitis-AI
[41] Docker. [Online]. Available: https://www.docker.com/
[42] Anaconda. [Online]. Available: https://www.anaconda.com/download-old
[43] Xilinx, “Vitis AI library user guide (UG1354).”