[1] Wikipedia, "Artificial intelligence."
[2] World Economic Forum (WEF), 2016.
[3] 大和有話說, "Artificial Intelligence: 3 Major Waves, 3 Major Technologies, and 3 Major Applications," 2018.
[4] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
[5] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in NIPS'12: Proceedings of the 25th International Conference on Neural Information Processing Systems, vol. 1, pp. 1097-1105, Dec. 2012.
[6] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in ICLR, 2015.
[7] C. Szegedy et al., "Going deeper with convolutions," in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, pp. 1-9, 2015.
[8] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, pp. 770-778, 2016.
[9] L. Fei-Fei, "ImageNet Large Scale Visual Recognition Challenge," 2010. [Online]. Available: http://image-net.org.
[10] "CNN Architectures — LeNet, AlexNet, VGG, GoogLeNet and ResNet," mc.ai, 2018. [Online]. Available: https://mc.ai/cnn-architectures-lenet-alexnet-vgg-googlenet-and-resnet/.
[11] JT, "DeepLearning." [Online]. Available: https://medium.com/@danjtchen.
[12] M. Lin, Q. Chen, and S. Yan, "Network in network," CoRR, vol. abs/1312.4400, 2013.
[13] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, pp. 2261-2269, 2017.
[14] A. Krizhevsky, "The CIFAR-10 and CIFAR-100 datasets," 2009. [Online]. Available: https://www.cs.toronto.edu/~kriz/cifar.html.
[15] "Tiny ImageNet Visual Recognition Challenge," Stanford, 2015. [Online]. Available: https://tiny-imagenet.herokuapp.com/.
[16] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in ICML'15: Proceedings of the 32nd International Conference on Machine Learning, vol. 37, pp. 448-456, July 2015.
[17] P. Rémy, "keract." [Online]. Available: https://github.com/philipperemy/keract.
[18] Xilinx. [Online]. Available: https://www.xilinx.com/.
[19] Xilinx, "ZCU104 Evaluation Board User Guide (UG1267)," 2018, p. 9.
[20] Xilinx, "Zynq DPU v3.1 IP Product Guide," 2019.
[21] B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam, and D. Kalenichenko, "Quantization and training of neural networks for efficient integer-arithmetic-only inference," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, pp. 2704-2713, 2018.
[22] Xilinx, "Vitis AI User Guide (UG1414)," 2019. [Online]. Available: https://www.xilinx.com/support/documentation/sw_manuals/vitis_ai/1_0/ug1414-vitis-ai.pdf.
[23] Y. Wang, J. Xu, Y. Han, H. Li, and X. Li, "Automatic generation of FPGA-based learning accelerators for the neural network family," 2017.
[24] C. Zhang, Z. Fang, P. Zhou, P. Pan, and J. Cong, "Caffeine: Towards uniformed representation and acceleration for deep convolutional neural networks," in ICCAD, 2016.
[25] S. I. Venieris and C.-S. Bouganis, "Latency-driven design for FPGA-based convolutional neural networks," in Proc. IEEE 27th Int. Conf. Field Program. Logic Appl. (FPL), Sep. 2017, pp. 1-8.
[26] J. Mairal, "End-to-end kernel learning with supervised convolutional kernel networks," CoRR, vol. abs/1605.06265, pp. 1-16, Dec. 2016.
[27] A. Coates and A. Y. Ng, "The importance of encoding versus training with sparse coding and vector quantization," in Proc. ICML, Jul. 2011.
[28] T. H. Chan, K. Jia, S. Gao, J. Lu, Z. Zeng, and Y. Ma, "PCANet: A simple deep learning baseline for image classification," IEEE Trans. Image Process., vol. 24, no. 12, pp. 5017-5032, Dec. 2015.
[29] T. Lin and H. T. Kung, "Stable and efficient representation learning with nonnegativity constraints," in Proc. ICML, Jun. 2014, pp. 1323-1331.
[30] C. Lee, S. Xie, P. W. Gallagher, Z. Zhang, and Z. Tu, "Deeply-supervised nets," in Proc. JMLR, Feb. 2015, pp. 562-570.
[31] S. Zagoruyko and N. Komodakis, "Wide residual networks," CoRR, vol. abs/1605.07146, pp. 1-15, Jun. 2016.
[32] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. CVPR, Jul. 2016, pp. 770-778.
[33] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, "Aggregated residual transformations for deep neural networks," Nov. 2016, arXiv:1611.05431.
[34] 郭豐源 and 夏世昌, "An improved neural network architecture based on VGGNet," in DLT2020 Digital Life Technology Conference, May 2020, pp. 259-262.