References
[1] I. V. Serban, et al., "Multiresolution recurrent neural networks: An application to dialogue response generation," in Proc. Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, California, USA, pp. 3288–3294, Feb. 2017.
[2] I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau, "Building end-to-end dialogue systems using generative hierarchical neural network models," in Proc. Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, Arizona, USA, pp. 3776–3784, Feb. 2016.
[3] R. Lowe, N. Pow, I. Serban, and J. Pineau, "The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems," in Proc. 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Prague, Czech Republic, pp. 285–294, Sep. 2015.
[4] Y. Wu, W. Wu, C. Xing, M. Zhou, and Z. Li, "Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots," in Proc. 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada, pp. 496–505, Jul. 2017.
[5] X. Zhou, et al., "Multi-turn response selection for chatbots with deep attention matching network," in Proc. 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Australia, pp. 1118–1127, Jul. 2018.
[6] Y. Song, et al., "An ensemble of retrieval-based and generation-based human-computer conversation systems," in Proc. Twenty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, Sweden, pp. 4382–4388, Jul. 2018.
[7] L. Yang, et al., "A hybrid retrieval-generation neural conversation model," in Proc. 28th ACM International Conference on Information and Knowledge Management (CIKM '19), New York, NY, USA, pp. 1341–1350, Nov. 2019.
[8] H. Wang, Z. Lu, H. Li, and E. Chen, "A dataset for research on short-text conversations," in Proc. EMNLP 2013, Seattle, USA, pp. 935–945, Oct. 2013.
[9] X. Zhou, et al., "Multi-view response selection for human-computer conversation," in Proc. EMNLP 2016, Austin, Texas, USA, pp. 372–381, Nov. 2016.
[10] A. Vaswani, et al., "Attention is all you need," in Advances in Neural Information Processing Systems, Long Beach, CA, USA, pp. 6000–6010, Dec. 2017.
[11] Q. Zhou and H. Wu, "NLP at IEST 2018: BiLSTM-attention and LSTM-attention via soft voting in emotion classification," in Proc. 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, Brussels, Belgium, pp. 189–194, Oct. 2018.
[12] X. L. Yao, "Attention-based BiLSTM neural networks for sentiment classification of short texts," in Proc. 9th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2017), Hong Kong, pp. 110–117, Dec. 2017.
[13] A. See, P. Liu, and C. Manning, "Get to the point: Summarization with pointer-generator networks," in Proc. 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada, vol. 1, pp. 1073–1083, Jul. 2017.
[14] E. Kiperwasser and Y. Goldberg, "Simple and accurate dependency parsing using bidirectional LSTM feature representations," Transactions of the Association for Computational Linguistics, vol. 4, pp. 313–327, 2016.
[15] U. Ehsan, P. Tambwekar, L. Chan, B. Harrison, and M. Riedl, "Automated rationale generation: A technique for explainable AI and its effects on human perceptions," in Proc. 24th International Conference on Intelligent User Interfaces, Los Angeles, California, USA, pp. 263–274, Mar. 2019.
[16] S. Young, et al., "The hidden information state model: A practical framework for POMDP-based spoken dialogue management," Computer Speech & Language, vol. 24, no. 2, pp. 150–174, 2010.
[17] R. Banchs and H. Li, "IRIS: A chat-oriented dialogue system based on the vector space model," in Proc. ACL 2012 System Demonstrations, Jeju Island, Korea, pp. 37–42, Jul. 2012.
[18] Z. Wei, et al., "Task-oriented dialogue system for automatic diagnosis," in Proc. 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Australia, vol. 2, pp. 201–207, Jul. 2018.
[19] C. Xing, et al., "Topic aware neural response generation," in Proc. Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, California, USA, pp. 3351–3357, Feb. 2017.
[20] B. Deb, P. Bailey, and M. Shokouhi, "Diversifying reply suggestions using a matching-conditional variational autoencoder," in Proc. NAACL 2019, Minneapolis, Minnesota, vol. 2, pp. 40–47, Jun. 2019.
[21] K. Swanson, L. Yu, C. Fox, J. Wohlwend, and T. Lei, "Building a production model for retrieval-based chatbots," in Proc. First Workshop on NLP for Conversational AI, Florence, Italy, pp. 32–41, Aug. 2019.
[22] T. Wen, et al., "Semantically conditioned LSTM-based natural language generation for spoken dialogue systems," in Proc. 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, pp. 1711–1721, Sep. 2015.
[23] A. Graves, A. Mohamed, and G. Hinton, "Speech recognition with deep recurrent neural networks," in Proc. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, Canada, pp. 6645–6649, May 2013.
[24] H. Zhou, M. Huang, T. Zhang, X. Zhu, and B. Liu, "Emotional chatting machine: Emotional conversation generation with internal and external memory," in Proc. Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, Louisiana, USA, pp. 730–738, Feb. 2018.
[25] C. Tao, S. Gao, M. Shang, W. Wu, D. Zhao, and R. Yan, "Get the point of my utterance! Learning towards effective responses with multi-head attention mechanism," in Proc. IJCAI 2018, Stockholm, Sweden, pp. 4418–4424, Jul. 2018.
[26] K. Yao, G. Zweig, and B. Peng, "Attention with intention for a neural network conversation model," in NIPS 2015 Workshop on Machine Learning for Spoken Language Understanding and Interaction, Montreal, QC, Canada, Dec. 2015.
[27] M. Zhu, A. Ahuja, W. Wei, and C. Reddy, "A hierarchical attention retrieval model for healthcare question answering," in Proc. The World Wide Web Conference, San Francisco, California, USA, pp. 2472–2482, May 2019.
[28] A. Ritter, C. Cherry, and W. Dolan, "Data-driven response generation in social media," in Proc. EMNLP 2011, Edinburgh, UK, pp. 583–593, Jul. 2011.
[29] L. Shang, Z. Lu, and H. Li, "Neural responding machine for short-text conversation," in Proc. ACL 2015, Beijing, China, vol. 1, pp. 1577–1586, Jul. 2015.
[30] O. Vinyals and Q. Le, "A neural conversational model," arXiv preprint arXiv:1506.05869, 2015. [Online]. Available: https://arxiv.org/pdf/1506.05869.pdf
[31] Z. Ji, Z. Lu, and H. Li, "An information retrieval approach to short text conversation," arXiv preprint arXiv:1408.6988, 2014. [Online]. Available: https://arxiv.org/pdf/1408.6988.pdf
[32] L. Nio, et al., "Developing non-goal dialog system based on examples of drama television," in Natural Interaction with Robots, Knowbots and Smartphones, pp. 355–361, 2014.
[33] B. Hu, Z. Lu, H. Li, and Q. Chen, "Convolutional neural network architectures for matching natural language sentences," in Advances in Neural Information Processing Systems, Montreal, Quebec, Canada, pp. 2042–2050, Dec. 2014.
[34] R. Yan, Y. Song, and H. Wu, "Learning to respond with deep neural networks for retrieval-based human-computer conversation system," in Proc. SIGIR 2016, Pisa, Italy, pp. 55–64, Jul. 2016.
[35] M. Wang, Z. Lu, H. Li, and Q. Liu, "Syntax-based deep matching of short texts," in Proc. Twenty-Fourth International Joint Conference on Artificial Intelligence, Buenos Aires, Argentina, pp. 1354–1361, Jul. 2015.
[36] J. L. Ba, J. R. Kiros, and G. Hinton, "Layer normalization," arXiv preprint arXiv:1607.06450, 2016. [Online]. Available: https://arxiv.org/pdf/1607.06450.pdf
[37] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436–444, May 2015.
[38] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, pp. 770–778, Jun. 2016.
[39] X. Lu, M. Lan, and Y. Wu, "Memory-based matching models for multi-turn response selection in retrieval-based chatbots," in Proc. NLPCC 2018, Hohhot, China, pp. 269–278, Aug. 2018.
[40] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[41] Q. Chen and W. Wang, "Sequential matching model for end-to-end multi-turn response selection," in Proc. ICASSP 2019, Brighton, UK, pp. 7350–7354, May 2019.
[42] Q. Chen, X. Zhu, Z. Ling, S. Wei, H. Jiang, and D. Inkpen, "Enhanced LSTM for natural language inference," in Proc. 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada, pp. 1657–1668, Jul. 2017.
[43] L. Mou, et al., "Natural language inference by tree-based convolution and heuristic matching," in Proc. 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, pp. 130–136, Aug. 2016.
[44] L. Pang, et al., "Text matching as image recognition," in Proc. Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, Arizona, USA, pp. 2793–2799, Feb. 2016.
[45] S. Wang and J. Jiang, "Machine comprehension using match-LSTM and answer pointer," in Proc. International Conference on Learning Representations, Toulon, France, Apr. 2017.
[46] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean, "Distributed representations of words and phrases and their compositionality," in Proc. 27th Annual Conference on Neural Information Processing Systems, Lake Tahoe, Nevada, USA, pp. 3111–3119, Dec. 2013.
[47] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[48] Our World in Data, "Internet." [Online]. Available: https://ourworldindata.org/internet
[49] We Are Social, "Global Digital Report 2018." [Online]. Available: https://wearesocial.com/us/blog/2018/01/global-digital-report-2018