臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

詳目顯示 (Detailed Record)

Author: 查爾斯 (Charles Hinson)
Title: 基於異質回收式生成的中文文法錯誤更正 (Heterogeneous Recycle Generation for Chinese Grammatical Error Correction)
Advisor: 陳信希 (Hsin-Hsi Chen)
Committee members: 蔡宗翰 (Tzong-Han Tsai), 古倫維 (Lun-Wei Ku), 陳冠宇 (Kuan-Yu Chen)
Oral defense date: 2020-07-03
Degree: Master's
Institution: 國立臺灣大學 (National Taiwan University)
Department: Graduate Institute of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Document type: Academic thesis
Publication year: 2020
Academic year of graduation: 108 (2019–2020)
Language: English
Pages: 54
Keywords (Chinese): 中文文法錯誤更正; 異質回收式生成
Keywords (English): Chinese Grammatical Error Correction; GEC; Heterogeneous Recycle Generation
DOI: 10.6342/NTU202001483
相關次數 (Usage statistics):
  • Cited by: 0
  • Views: 140
  • Downloads: 0
  • Bookmarked: 0
摘要 (translated from Chinese): In recent years, grammatical error correction systems have relied on neural machine translation (NMT) based models. Although these models achieve impressive results, they suffer from several major drawbacks: they require large amounts of data to train properly, and they operate by translating a source sentence into a target sentence rather than editing it directly. This thesis proposes a Chinese grammatical error correction system composed of an NMT-based model, a sequence editing model, and a spell checker. This heterogeneous system of three models is combined through recycle generation, in which the output of one model serves as the input to another. The approach not only achieves state-of-the-art performance on the NLPCC 2018 dataset, but does so without GEC-specific architectural changes or data augmentation. We further experiment with different model composition orders and numbers of generation iterations to find the best way to compose the system. In addition, we adapt the ERRANT scorer for English GEC so that it can automatically annotate and score Chinese sentences, enabling both us and future researchers to examine model performance by error type.
In recent years, grammatical error correction systems have all relied on neural machine translation based (NMT-based) models. Although these models can yield impressive results, they have several major drawbacks. Not only do they require a massive amount of data to properly train, but they also work by translating a source sentence into a target sentence, and are unable to simply edit it. In this thesis, we propose a system for Chinese grammatical error correction (GEC) that consists of a neural machine translation based model, a sequence editing model, and a spell checker. This heterogeneous system of three models is combined using recycle generation, where the output from one model serves as input to another. This method not only achieves a new state-of-the-art performance on the NLPCC 2018 dataset, but also does so without GEC-specific architecture changes or data augmentation. We experiment with model composition order and the number of generation iterations to find the optimal way to compose our system. Furthermore, we modify the ERRANT scorer for English GEC to be able to automatically annotate and score Chinese sentences, giving not only us but also future researchers the ability to report model performance with respect to error type.
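The recycle-generation scheme described above can be sketched in a few lines: each component's output is fed to the next, and the whole cycle is repeated for a fixed number of iterations. The skeleton below is a minimal illustration of that control flow, not the thesis's actual implementation; the three stand-in components are hypothetical placeholders for the NMT model, sequence editor, and spell checker.

```python
from typing import Callable, List

# A corrector maps a (possibly erroneous) sentence to a corrected sentence.
Corrector = Callable[[str], str]

def recycle_generate(sentence: str,
                     components: List[Corrector],
                     iterations: int = 1) -> str:
    """Run the sentence through every component in order, repeating the whole
    cycle `iterations` times; stop early once a full cycle changes nothing."""
    for _ in range(iterations):
        before = sentence
        for correct in components:
            sentence = correct(sentence)
        if sentence == before:  # fixed point: no component made an edit
            break
    return sentence

# Toy stand-ins for the three components (identity functions except one
# hard-coded character fix), purely for demonstration.
nmt_model: Corrector = lambda s: s.replace("吗", "嗎")
seq_editor: Corrector = lambda s: s
spell_checker: Corrector = lambda s: s

corrected = recycle_generate("他不去了吗?", [nmt_model, seq_editor, spell_checker],
                             iterations=3)
print(corrected)  # → 他不去了嗎?
```

Note that the composition order of the list and the iteration count are exactly the hyperparameters the thesis reports experimenting with.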
Contents
誌謝 (Acknowledgements, Chinese) iii
Acknowledgements v
摘要 (Abstract, Chinese) vii
Abstract ix
1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Challenges of Grammatical Error Correction . . . . . . . . . . . . . . . . 2
1.2.1 Ambiguity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.3 Dependency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Previous Approaches to GEC . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 Related Work 7
2.1 English GEC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Chinese GEC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Sequence Editing Models . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 Recycle Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3 Datasets 11
3.1 Error Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Training Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.3 Testing Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.3.1 Inconsistencies in Annotations . . . . . . . . . . . . . . . . . . . 13
3.4 Distribution of Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4 System Overview 17
4.1 Preprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.2 Component 1: NMT System . . . . . . . . . . . . . . . . . . . . . . . 18
4.2.1 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.2.2 Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.3 Component 2: Sequence Editing System . . . . . . . . . . . . . . . . 21
4.3.1 Tagging Operations . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.3.2 Phrase Vocabulary Optimization . . . . . . . . . . . . . . . . . . 23
4.3.3 Converting Target Sentences to Tagged Representation . . . . . . 23
4.3.4 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.3.5 Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.4 Component 3: Spell Checker . . . . . . . . . . . . . . . . . . . . . . . 25
4.4.1 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.4.2 LM Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.4.3 LM Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5 Recycle Generation 29
6 Results 33
6.1 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
6.2 Individual Component Results . . . . . . . . . . . . . . . . . . . . . . . 34
6.3 Recycle Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
6.3.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6.4 Comparison With State of the Art . . . . . . . . . . . . . . . . . . . . . 36
7 Discussion 37
7.1 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
7.1.1 Adapting ERRANT for Chinese Sentences . . . . . . . . . . . . 38
7.1.2 Annotation Comparison: Auto vs. Gold . . . . . . . . . . . . 40
7.1.3 Error-type Specific Performance . . . . . . . . . . . . . . . . . . 41
7.1.4 Comparison of SE and NMT Models . . . . . . . . . . . . . . . 42
8 Conclusion 45
8.1 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Bibliography 47
[1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[3] Loïc Barrault, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. Findings of the 2019 conference on machine translation (wmt19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy, August 2019. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W19-5301.
[4] Nicolas Boulanger-Lewandowski, Yoshua Bengio, and Pascal Vincent. Audio chord recognition with recurrent neural networks. In ISMIR, pages 335–340. Citeseer, 2013.
[5] Christopher Bryant, Mariano Felice, and Ted Briscoe. Automatic annotation and evaluation of error types for grammatical error correction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 793–805, Vancouver, Canada, July 2017. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P17-1074.
[6] Christopher Bryant, Mariano Felice, Øistein E. Andersen, and Ted Briscoe. The BEA-2019 shared task on grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52–75, Florence, Italy, August 2019. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/W19-4406.
[7] Wanxiang Che, Jianmin Jiang, Zhong Su, Yue Pan, and Ting Liu. Improved-edit distance kernel for chinese relation extraction. In Companion Volume to the Proceedings of Conference including Posters/Demos and tutorial abstracts, 2005.
[8] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[9] Shamil Chollampatt and Hwee Tou Ng. A multilayer convolutional encoder-decoder neural network for grammatical error correction. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[10] Shamil Chollampatt, Weiqi Wang, and Hwee Tou Ng. Cross-sentence grammatical error correction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 435–445, 2019.
[11] Daniel Dahlmeier and Hwee Tou Ng. Better evaluation for grammatical error correction. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 568–572, Montréal, Canada, June 2012. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/N12-1067.
[12] Robert Dale and Adam Kilgarriff. Helping our own: The HOO 2011 pilot shared task. In Proceedings of the 13th European Workshop on Natural Language Generation, pages 242–249, Nancy, France, September 2011. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/W11-2838.
[13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pretraining of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[14] Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. EditNTS: A neural programmer-interpreter model for sentence simplification through explicit editing. arXiv preprint arXiv:1906.08104, 2019.
[15] Mariano Felice and Ted Briscoe. Towards a standard evaluation method for grammatical error detection and correction. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 578–587, 2015.
[16] Kai Fu, Jin Huang, and Yitao Duan. Youdao’s winning solution to the nlpcc-2018 task 2 challenge: a neural machine translation approach to chinese grammatical error correction. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 341–350. Springer, 2018.
[17] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1243–1252. JMLR. org, 2017.
[18] Alex Graves. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711, 2012.
[19] Roman Grundkiewicz and Marcin Junczys-Dowmunt. Near human-level performance in grammatical error correction with hybrid machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 284–290, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/N18-2046.
[20] Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield. Neural grammatical error correction systems with unsupervised pre-training on synthetic data. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 252–263, Florence, Italy, August 2019. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/W19-4427.
[21] Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–1640, Berlin, Germany, August 2016. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P16-1154.
[22] Jiatao Gu, Changhan Wang, and Junbo Zhao. Levenshtein transformer. In Advances in Neural Information Processing Systems, pages 11179–11189, 2019.
[23] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
[24] Hen-Hsen Huang, Yen-Chi Shao, and Hsin-Hsi Chen. Chinese preposition selection for grammatical error diagnosis. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 888–899, 2016.
[25] Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. Approaching neural grammatical error correction as a low-resource machine translation task. arXiv preprint arXiv:1804.05940, 2018.
[26] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
[27] Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, and Kentaro Inui. An empirical study of incorporating pseudo data into grammatical error correction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1236–1242, Hong Kong, China, November 2019. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/D19-1119.
[28] Jared Lichtarge, Christopher Alberti, Shankar Kumar, Noam Shazeer, and Niki Parmar. Weakly supervised grammatical error correction using iterative decoding. ArXiv, abs/1811.01710, 2018.
[29] Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. Encode, tag, realize: High-precision text editing. In EMNLP-IJCNLP, 2019.
[30] Jiaju Mei, Yiming Lan, Yunqi Gao, and Hongxiang Yin. Chinese thesaurus Tongyici Cilin (2nd edition), 1996.
[31] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807–814, 2010.
[32] Courtney Napoles and Chris Callison-Burch. Systematically adapting machine translation for grammatical error correction. In Proceedings of the 12th Workshop on Innovative use of NLP for Building Educational Applications, pages 345–356, 2017.
[33] Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. Ground truth for grammatical error correction metrics. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 588–593, 2015.
[34] Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1–14, Baltimore, Maryland, June 2014. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/W14-1701.
[35] Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48–53, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/N19-4009.
[36] Z. Qiu and Y. Qu. A two-stage model for chinese grammatical error correction. IEEE Access, 7:146772–146777, 2019.
[37] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf, 2018.
[38] Hongkai Ren, Liner Yang, and Endong Xun. A sequence to sequence learning for chinese grammatical error correction. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 401–410. Springer, 2018.
[39] Abigail See, Peter J. Liu, and Christopher D. Manning. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–1083, Vancouver, Canada, July 2017. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P17-1099.
[40] Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709, 2015.
[41] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany, August 2016. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P16-1162.
[42] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112, 2014.
[43] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826, 2016.
[44] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf.
[45] Paul J Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550–1560, 1990.
[46] Shih-Hung Wu, Chao-Lin Liu, and Lung-Hao Lee. Chinese spelling check evaluation at sighan bake-off 2013. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 35–42, 2013.
[47] Zheng Yuan and Ted Briscoe. Grammatical error correction using neural machine translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 380–386, 2016.
[48] Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 156–165, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/N19-1014.
[49] Yuanyuan Zhao, Nan Jiang, Weiwei Sun, and Xiaojun Wan. Overview of the nlpcc 2018 shared task: grammatical error correction. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 439–445. Springer, 2018.