Graduate Student: 梁韶中 (Liang, Shao-Zhong)
Title: 適用於中文史料文本之作者語言模型分析方法研究
Title (English): An Enhanced Writer Language Model for Chinese Historical Corpora
Advisor: 蔡銘峰 (Tsai, Ming-Feng)
Committee Members: 王釧茹, 蘇家玉
Degree: Master's
Institution: National Chengchi University (國立政治大學)
Department: Computer Science
Discipline: Engineering
Field of Study: Electrical and Computer Engineering
Document Type: Academic thesis
Graduation Academic Year: 105 (2016-2017)
Language: Chinese
Pages: 35
Keywords: language model; Chinese historical corpora; long words; recurrent neural network language model; smoothing methods
Keywords (English): Kneser-Ney
With the growing trend toward digital archiving in recent years, more and more valuable Chinese historical texts are being digitized for preservation. In the process, author information is often lost or missing, which compromises the completeness of the corpora. This thesis proposes a method for author analysis of Chinese historical texts based on language modeling: a dedicated language model is trained for each candidate author, and smoothing is applied so that no word in a test text receives zero probability, which would otherwise break the computation. The thesis mainly adopts interpolated modified Kneser-Ney smoothing, which accounts for the influence of both higher-order and lower-order n-gram frequencies and has therefore become a standard choice for building language models.

Simply merging all of a candidate author's articles into a single language model ignores many of their characteristics. Beyond the historical texts themselves, this thesis therefore brings metadata into the analysis, including statistics over manually labeled topic categories, so that the constructed language models better fit the test text and yield more accurate predictions. Custom terms are also added to match the corpus's proper-noun conventions, and long-word weights are introduced on top of the standard language-model construction to determine how word length relates to prediction accuracy. Finally, recurrent neural network (RNN) language models are combined with this approach to predict authors and are compared against the traditional language-model analysis.
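The per-author attribution scheme described in the abstract can be sketched in a few lines of Python. The thesis itself uses KenLM's interpolated modified Kneser-Ney over n-grams on segmented Chinese text; the toy below is only a single-discount interpolated Kneser-Ney bigram model, and the function names and the fixed discount d=0.75 are our assumptions, not from the thesis. Each candidate author gets a model trained on their known writings, and the test text is attributed to whichever model gives it the lowest perplexity.

```python
from collections import Counter
import math


def train_kn_bigram(tokens, d=0.75):
    """Train an interpolated Kneser-Ney bigram model; returns P(w | v)."""
    big_c = Counter(zip(tokens, tokens[1:]))      # bigram counts c(v, w)
    ctx_c = Counter(tokens[:-1])                  # context counts c(v)
    followers = Counter(v for v, _ in big_c)      # N1+(v, .) distinct continuations
    histories = Counter(w for _, w in big_c)      # N1+(., w) distinct histories
    n_types = max(len(big_c), 1)                  # total distinct bigram types

    def prob(v, w):
        p_cont = histories.get(w, 0) / n_types    # continuation probability
        if v not in ctx_c:                        # unseen context: back off
            return p_cont if p_cont > 0 else 1e-10
        lam = d * followers[v] / ctx_c[v]         # interpolation weight
        return max(big_c.get((v, w), 0) - d, 0.0) / ctx_c[v] + lam * p_cont

    return prob


def perplexity(prob, tokens):
    """Per-bigram perplexity of a token sequence under a model."""
    pairs = list(zip(tokens, tokens[1:]))
    logp = sum(math.log(max(prob(v, w), 1e-10)) for v, w in pairs)
    return math.exp(-logp / max(len(pairs), 1))


def predict_author(models, tokens):
    """Attribute the text to the candidate with the lowest perplexity."""
    return min(models, key=lambda a: perplexity(models[a], tokens))


# Toy example with two hypothetical candidate authors.
models = {
    "author_A": train_kn_bigram("史 記 卷 一 史 記 卷 二 史 記".split()),
    "author_B": train_kn_bigram("奏 摺 附 錄 奏 摺 附 錄 奏 摺".split()),
}
print(predict_author(models, "史 記 卷 三".split()))  # author_A
```

Per the abstract, the thesis refines this baseline by conditioning on metadata such as manually labeled topic categories, so that the model compared against a test text is built from writings that match its topic, not just its candidate author.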
Chapter 1  Introduction
1.1  Preface
1.2  N-gram language models and their drawbacks
1.3  Recurrent neural network language models
1.4  Research objectives
Chapter 2  Related Work
2.1  Smoothing methods
Chapter 3  Methodology
3.1  Kneser-Ney language models
3.1.1  Kneser-Ney smoothing
3.1.2  Modified Kneser-Ney smoothing
3.1.3  KenLM, a modified Kneser-Ney language-model toolkit
3.2  Recurrent neural network language models (RNNLM)
3.2.1  Building RNNLMs with TensorFlow
3.3  Adaptations for Chinese text
3.3.1  Word segmentation
3.3.2  Manual keywords
3.3.3  Long-word weighting
Chapter 4  Experimental Results and Discussion
4.1  Experimental setup
4.1.1  Experimental procedure
4.1.2  Dataset and preprocessing
4.1.3  Word segmentation tool
4.1.4  Language-model evaluation metric
4.2  Analysis and discussion of results
4.2.1  Comparison of the modified Kneser-Ney and RNN language models
4.2.2  Long-word weighting for the modified Kneser-Ney language model
Chapter 5  Conclusion
Appendix
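The long-word weighting of Section 3.3.3 is not spelled out in this record; one plausible reading is to score a segmented sentence by a length-weighted average of per-word log-probabilities, so that multi-character words (which carry more author-specific vocabulary) count more than single characters. The linear weight 1 + alpha*(len-1) below is an assumption for illustration only, as is the function name.

```python
def length_weighted_score(word_log_probs, alpha=0.5):
    """Length-weighted average log-probability of a segmented sentence.

    word_log_probs: list of (word, log_prob) pairs from any language model.
    alpha: strength of the assumed linear length weight 1 + alpha*(len(word)-1),
           so single-character words keep weight 1 and longer words count more.
    """
    total = 0.0
    weight_sum = 0.0
    for word, lp in word_log_probs:
        w = 1.0 + alpha * (len(word) - 1)
        total += w * lp
        weight_sum += w
    return total / weight_sum


# A 4-character word with high probability pulls the score up more
# than the plain mean would; alpha=0 recovers the unweighted mean.
scored = [("語言模型", -1.0), ("分析", -2.0)]
print(length_weighted_score(scored))             # -1.375
print(length_weighted_score(scored, alpha=0.0))  # -1.5
```

Sweeping alpha and comparing attribution accuracy at each setting mirrors the question Section 4.2.2 investigates: how word length relates to prediction accuracy.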
[1] S. F. Chen and J. Goodman. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 310-318. Association for Computational Linguistics, 1996.
[2] K. W. Church and W. A. Gale. A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams. Computer Speech & Language, 5(1):19-54, 1991.
[3] I. J. Good. The population frequencies of species and the estimation of population parameters. Biometrika, 40(3-4):237-264, 1953.
[4] K. Heafield. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187-197. Association for Computational Linguistics, 2011.
[5] K. Heafield, I. Pouzyrevsky, J. H. Clark, and P. Koehn. Scalable modified Kneser-Ney language model estimation. In Proceedings of ACL (Volume 2), pages 690-696, 2013.
[6] S. M. Katz. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech, and Signal Processing, 35(3):400-401, 1987.
[7] R. Kneser and H. Ney. Improved backing-off for m-gram language modeling. In Proceedings of the 1995 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-95), volume 1, pages 181-184. IEEE, 1995.
[8] W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.