[1] D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," in Proceedings of the International Conference on Learning Representations, 2015.
[2] K. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor, "Freebase: A collaboratively created graph database for structuring human knowledge," in Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pp. 1247–1250, 2008.
[3] Y. Chali and S. A. Hasan, "Towards topic-to-question generation," Computational Linguistics, vol. 41, pp. 1–20, 2015.
[4] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," arXiv preprint arXiv:1412.3555, 2014.
[5] Y.-A. Chung, H.-Y. Lee, and J. Glass, "Supervised and unsupervised transfer learning for question answering," in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1, 2018.
[6] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1, 2019.
[7] X. Du, J. Shao, and C. Cardie, "Learning to ask: Neural question generation for reading comprehension," in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pp. 1342–1352, 2017.
[8] N. Duan, D. Tang, P. Chen, and M. Zhou, "Question generation for question answering," in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 866–874, 2017.
[9] V. Harrison and M. Walker, "Neural generation of diverse questions using answer focus, contextual and linguistic features," arXiv preprint arXiv:1809.02637, 2018.
[10] M. Heilman and N. A. Smith, "Good question! Statistical ranking for question generation," in Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 609–617, 2010.
[11] C.-Y. Lin, "ROUGE: A package for automatic evaluation of summaries," in Proceedings of the ACL Workshop on Text Summarization Branches Out, 2004.
[12] T. Luong, H. Pham, and C. D. Manning, "Effective approaches to attention-based neural machine translation," in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 1412–1421, 2015.
[13] T. Nguyen, M. Rosenberg, X. Song, J. Gao, S. Tiwary, R. Majumder, and L. Deng, "MS MARCO: A human generated machine reading comprehension dataset," in Proceedings of the NIPS Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches, 2016.
[14] L. Pan, W. Lei, T.-S. Chua, and M.-Y. Kan, "Recent advances in neural question generation," arXiv preprint arXiv:1905.08949, 2019.
[15] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "BLEU: A method for automatic evaluation of machine translation," in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311–318, 2002.
[16] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang, "SQuAD: 100,000+ questions for machine comprehension of text," in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383–2392, 2016.
[17] O. Rokhlenko and I. Szpektor, "Generating synthetic comparable questions for news articles," in Proceedings of the Annual Meeting of the Association for Computational Linguistics, pp. 742–751, 2013.
[18] C.-C. Shao, T. Liu, Y. Lai, Y. Tseng, and S. Tsai, "DRCD: A Chinese machine reading comprehension dataset," arXiv preprint arXiv:1806.00920, 2018.
[19] A. See, P. J. Liu, and C. D. Manning, "Get to the point: Summarization with pointer-generator networks," in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, vol. 1, pp. 1073–1083, 2017.
[20] I. V. Serban, A. G. Durán, C. Gulcehre, S. Ahn, S. Chandar, A. Courville, and Y. Bengio, "Generating factoid questions with recurrent neural networks: The 30M factoid question-answer corpus," in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 588–598, 2016.
[21] X. Sun, J. Liu, Y. Lyu, W. He, Y. Ma, and S. Wang, "Answer-focused and position-aware neural question generation," in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3930–3939, 2018.
[22] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," in Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.
[23] C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, and C. Liu, "A survey on deep transfer learning," arXiv preprint arXiv:1808.01974, 2018.
[24] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, et al., "Attention is all you need," in Advances in Neural Information Processing Systems 30, pp. 1–11, 2017.
[25] T. Wang and X. Wan, "T-CVAE: Transformer-based conditioned variational autoencoder for story completion," in Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, pp. 5233–5239, 2019.
[26] Z. Wang, A. S. Lan, W. Nie, A. E. Waters, P. J. Grimaldi, and R. G. Baraniuk, "QG-Net: A data-driven question generation model for educational content," in Proceedings of the Fifth Annual ACM Conference on Learning at Scale, pp. 7:1–7:10, 2018.
[27] R. J. Williams and D. Zipser, "A learning algorithm for continually running fully recurrent neural networks," Neural Computation, vol. 1, pp. 270–280, 1989.
[28] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al., "Google's neural machine translation system: Bridging the gap between human and machine translation," arXiv preprint arXiv:1609.08144, 2016.
[29] Z. Yang, J. Hu, R. Salakhutdinov, and W. W. Cohen, "Semi-supervised QA with generative domain-adaptive nets," in Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2017.
[30] X. Yuan, T. Wang, C. Gulcehre, A. Sordoni, P. Bachman, S. Zhang, S. Subramanian, and A. Trischler, "Machine comprehension by text-to-text neural question generation," in Proceedings of the 2nd Workshop on Representation Learning for NLP (Rep4NLP@ACL), pp. 15–25, 2017.
[31] Y. Zhao, X. Ni, Y. Ding, and Q. Ke, "Paragraph-level neural question generation with maxout pointer and gated self-attention networks," in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3901–3910, 2018.
[32] Q. Zhou, N. Yang, F. Wei, C. Tan, H. Bao, and M. Zhou, "Neural question generation from text: A preliminary study," in National CCF Conference on Natural Language Processing and Chinese Computing, pp. 662–671, 2017.