[1] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[2] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1725–1732, 2014.
[3] G. Hinton, L. Deng, D. Yu, G. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition,” IEEE Signal Processing Magazine, 2012.
[4] L. A. Gatys, A. S. Ecker, and M. Bethge, “Image style transfer using convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016.
[5] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, pp. 2278–2324, 1998.
[6] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems 25 (F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, eds.), pp. 1097–1105, Curran Associates, Inc., 2012.
[7] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in International Conference on Learning Representations (ICLR), 2015.
[8] B. Liu, M. Wang, H. Foroosh, M. Tappen, and M. Penksy, “Sparse convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[9] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus, “Exploiting linear structure within convolutional networks for efficient evaluation,” in Advances in Neural Information Processing Systems 27 (Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds.), pp. 1269–1277, 2014.
[10] Y. Gong, L. Liu, M. Yang, and L. Bourdev, “Compressing deep convolutional networks using vector quantization,” 2014.
[11] J. Albericio, P. Judd, T. Hetherington, T. Aamodt, N. Enright Jerger, and A. Moshovos, “Cnvlutin: Ineffectual-neuron-free deep neural network computing,” in 43rd ACM/IEEE Annual International Symposium on Computer Architecture (ISCA), pp. 1–13, 2016.
[12] S. Han, J. Pool, J. Tran, and W. Dally, “Learning both weights and connections for efficient neural network,” in Advances in Neural Information Processing Systems (NIPS), pp. 1135–1143, 2015.
[13] S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz, and W. J. Dally, “EIE: Efficient inference engine on compressed deep neural network,” in 43rd ACM/IEEE Annual International Symposium on Computer Architecture (ISCA), 2016.
[14] S. Zhang, Z. Du, L. Zhang, H. Lan, S. Liu, L. Li, Q. Guo, T. Chen, and Y. Chen, “Cambricon-X: An accelerator for sparse neural networks,” in 49th IEEE/ACM International Symposium on Microarchitecture (MICRO), 2016.
[15] T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam, “DianNao: A small-footprint high-throughput accelerator for ubiquitous machine-learning,” pp. 269–284, 2014.
[16] S. Han, H. Mao, and W. J. Dally, “Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding,” in International Conference on Learning Representations (ICLR), 2016.
[17] Y.-H. Chen, J. Emer, and V. Sze, “Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks,” IEEE Journal of Solid-State Circuits (JSSC), 2017.
[18] Y. Chen, T. Luo, S. Liu, S. Zhang, L. He, J. Wang, L. Li, T. Chen, Z. Xu, N. Sun, and O. Temam, “DaDianNao: A machine-learning supercomputer,” in IEEE/ACM International Symposium on Microarchitecture (MICRO), 2014.
[19] C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, “Optimizing FPGA-based accelerator design for deep convolutional neural networks,” pp. 161–170, 2015.
[20] J. Qiu, J. Wang, S. Yao, K. Guo, B. Li, E. Zhou, J. Yu, T. Tang, N. Xu, S. Song, Y. Wang, and H. Yang, “Going deeper with embedded FPGA platform for convolutional neural network,” pp. 26–35, 2016.
[21] Y.-H. Chen, J. Emer, and V. Sze, “Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks,” 2016.
[22] N. Muralimanohar, R. Balasubramonian, and N. P. Jouppi, “CACTI 6.0: A tool to model large caches,” HP Laboratories, 2009.
[23] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[24] S. Han, “Deep-Compression-AlexNet,” 2016.
[25] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[26] M. Lin, Q. Chen, and S. Yan, “Network in network,” in International Conference on Learning Representations (ICLR), 2014.
[27] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[28] R. Gonzalez and M. Horowitz, “Energy dissipation in general purpose microprocessors,” IEEE Journal of Solid-State Circuits, vol.