[1] Yann LeCun, Bernhard E Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne E Hubbard, and Lawrence D Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in Neural Information Processing Systems, 1990, pp. 396–404.

[2] Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender, “Learning to rank using gradient descent,” in Proceedings of the 22nd International Conference on Machine Learning. ACM, 2005, pp. 89–96.

[3] Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson, “CNN features off-the-shelf: An astounding baseline for recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014, pp. 806–813.

[4] Felix A Gers, Jürgen Schmidhuber, and Fred Cummins, “Learning to forget: Continual prediction with LSTM,” 1999.

[5] Sheng-syun Shen and Hung-yi Lee, “Neural attention models for sequence classification: Analysis and application to key term extraction and dialogue act detection,” arXiv preprint arXiv:1604.00077, 2016.

[6] Rada Mihalcea and Paul Tarau, “TextRank: Bringing order into text,” in EMNLP, 2004, vol. 4, pp. 404–411.

[7] Chia-hsing Hsu and Hung-yi Lee, “Enhanced spoken term detection by deep learning,” M.S. thesis, 2017.

[8] Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley, “Automatic keyword extraction from individual documents,” Text Mining: Applications and Theory, pp. 1–20, 2010.

[9] Tony Lindeberg, “Scale invariant feature transform,” Scholarpedia, vol. 7, no. 5, p. 10491, 2012.

[10] Alex Krizhevsky and Geoffrey Hinton, “Learning multiple layers of features from tiny images,” 2009.

[11] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.