1. Sutskever, I., Vinyals, O., and Le, Q. (2014). Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems.
2. Bahdanau, D., Cho, K., and Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
3. Junczys-Dowmunt, M., Grundkiewicz, R., Guha, S., and Heafield, K. (2018). Approaching neural grammatical error correction as a low-resource machine translation task. arXiv preprint arXiv:1804.05940.
4. Zhang, T., Kishore, V., Wu, F., Weinberger, K. Q., and Artzi, Y. (2019). BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.
5. Grundkiewicz, R., and Junczys-Dowmunt, M. (2018). Near human-level performance in grammatical error correction with hybrid machine translation. arXiv preprint arXiv:1804.05945.
6. Ge, T., Wei, F., and Zhou, M. (2018). Fluency boost learning and inference for neural grammatical error correction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics.
7. Yoshimoto, I., Kose, T., Mitsuzawa, K., Sakaguchi, K., Mizumoto, T., Hayashibe, Y., Komachi, M., and Matsumoto, Y. (2013). NAIST at 2013 CoNLL grammatical error correction shared task. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task.
8. Junczys-Dowmunt, M., and Grundkiewicz, R. (2016). Phrase-based machine translation is state-of-the-art for automatic grammatical error correction. arXiv preprint arXiv:1605.06353.
9. Wang, Y., Wang, L., Wong, D. F., Chao, L. S., Zeng, X., and Lu, Y. (2014). Factored statistical machine translation for grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task.
10. Ge, T., Wei, F., and Zhou, M. (2018). Reaching human-level performance in automatic grammatical error correction: An empirical study. arXiv preprint arXiv:1807.01270.
11. Yuan, Z., and Briscoe, T. (2016). Grammatical error correction using neural machine translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
12. Zhao, W., Wang, L., Shen, K., Jia, R., and Liu, J. (2019). Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data. arXiv preprint arXiv:1903.00138.
13. Nallapati, R., Zhou, B., dos Santos, C. N., Gulcehre, C., and Xiang, B. (2016). Abstractive text summarization using sequence-to-sequence RNNs and beyond. arXiv preprint arXiv:1602.06023.
14. See, A., Liu, P. J., and Manning, C. D. (2017). Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368.
15. Shetty, K., and Kallimani, J. S. (2017). Automatic extractive text summarization using K-means clustering. In 2017 International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), Mysuru, pp. 1-9. doi: 10.1109/ICEECCOT.2017.8284627.
16. Christian, H., Agus, M., and Suhartono, D. (2016). Single document automatic text summarization using term frequency-inverse document frequency (TF-IDF). ComTech: Computer, Mathematics and Engineering Applications, 7(4), 285. doi: 10.21512/comtech.v7i4.3746.
17. Chandra, M., Gupta, V., and Paul, S. K. (2011). A statistical approach for automatic text summarization by extraction. In Proc. Int. Conf. CSNT, pp. 268-271.
18. Rush, A. M., Chopra, S., and Weston, J. (2015). A neural attention model for sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.
19. Chopra, S., Auli, M., and Rush, A. M. (2016). Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
20. Bahdanau, D., Cho, K., and Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
21. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention is all you need. arXiv preprint arXiv:1706.03762.
22. Ruder, S. (2017). An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098.
23. Kendall, A., Gal, Y., and Cipolla, R. (2017). Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. arXiv preprint arXiv:1705.07115.
24. Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2016). Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606.
25. Stent, A., Marge, M., and Singhai, M. (2005). Evaluating evaluation methods for generation in the presence of variation. In International Conference on Intelligent Text Processing and Computational Linguistics, pp. 341-351. Springer.
26. Zhang, T., Kishore, V., Wu, F., Weinberger, K. Q., and Artzi, Y. (2019). BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.
27. Sellam, T., Das, D., and Parikh, A. P. (2020). BLEURT: Learning robust metrics for text generation. arXiv preprint arXiv:2004.04696.
28. Lin, C.-Y. (2004). ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop.
29. Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics.