|
Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Elman, J. L. (1990). Finding structure in time. Cognitive science, 14(2), 179-211. Gong, Y., & Liu, X. (2001). Generic text summarization using relevance measure and latent semantic analysis. Paper presented at the Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval. Greff, K., Srivastava, R. K., Koutník, J., Steunebrink, B. R., & Schmidhuber, J. (2016). LSTM: A search space odyssey. IEEE transactions on neural networks and learning systems, 28(10), 2222-2232. Hinton, G. E., Sejnowski, T. J., & Poggio, T. A. (1999). Unsupervised learning: foundations of neural computation: MIT press. Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780. Hu, B., Chen, Q., & Zhu, F. (2015). Lcsts: A large scale chinese short text summarization dataset. arXiv preprint arXiv:1506.05865. Jozefowicz, R., Zaremba, W., & Sutskever, I. (2015). An empirical exploration of recurrent network architectures. Paper presented at the International Conference on Machine Learning. Junyi, S. (Retrieved 2019). Jieba.[online]. Retrieved from https://github.com/fxsjy/jieba. Kupiec, J., Pedersen, J., & Chen, F. (1999). A trainable document summarizer. Advances in Automatic Summarization, 55-60. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. nature, 521(7553), 436. Li, P., Lam, W., Bing, L., & Wang, Z. (2017). Deep recurrent generative decoder for abstractive text summarization. arXiv preprint arXiv:1708.00625. Lin, C.-Y. (2004). Rouge: A package for automatic evaluation of summaries. Paper presented at the Text summarization branches out. Lin, S.-H., & Chen, B. (2009). Improved speech summarization with multiple-hypothesis representations and Kullback-Leibler divergence measures. Paper presented at the Tenth Annual Conference of the International Speech Communication Association. Luong, M.-T., Pham, H., & Manning, C. D. (2015). Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025. Mani, I. (1999). Advances in automatic text summarization: MIT press. Mazur, M. (2015). A Step by Step Backpropagation Example. Retrieved from https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/. Mihalcea, R., & Tarau, P. (2004). Textrank: Bringing order into text. Paper presented at the Proceedings of the 2004 conference on empirical methods in natural language processing. Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. Paper presented at the Advances in neural information processing systems. Olah, C. (2015). Understanding LSTM Networks. Retrieved from http://colah.github.io/posts/2015-08-Understanding-LSTMs/. Paulus, R. (2018). Deep Reinforced Model for Abstractive Summarization. In: Google Patents. Paulus, R., Xiong, C., & Socher, R. (2017). A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Ranzato, M. A., Chopra, S., Auli, M., & Zaremba, W. (2015). Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1988). Learning representations by back-propagating errors. Cognitive modeling, 5(3), 1. Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural networks, 61, 85-117. Schuster, M., & Paliwal, K. K. (1997). Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11), 2673-2681. See, A., Liu, P. J., & Manning, C. D. (2017). Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368. Shen, D., Sun, J.-T., Li, H., Yang, Q., & Chen, Z. (2007). Document summarization using conditional random fields. Paper presented at the IJCAI. Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. Paper presented at the Advances in neural information processing systems. Wang, H., & Yeung, D.-Y. (2016). Towards bayesian deep learning: A survey. arXiv preprint arXiv:1604.01662. Weng, C., Cui, J., Wang, G., Wang, J., Yu, C., Su, D., & Yu, D. (2018). Improving Attention Based Sequence-to-Sequence Models for End-to-End English Conversational Speech Recognition. Paper presented at the Interspeech. Young, T., Hazarika, D., Poria, S., & Cambria, E. (2018). Recent trends in deep learning based natural language processing. ieee Computational intelligenCe magazine, 13(3), 55-75. 林婷嫻 , 張. (2018). 中研院-斷開中文的鎖鍊!自然語言處,馬偉雲專訪. Retrieved from http://research.sinica.edu.tw/nlp-natural-language-processing-chinese-knowledge-information/. 陳運文. (2019). 在NLP領域中文對比英文的難點分析(達觀數據陳運文). Retrieved from http://www.52nlp.cn/11458-2.
|