National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Detailed Record

Author: 江東峻 (Tung-Chun Chiang)
Title: 利用多查詢記憶網路學習詞袋文件表達法 (Learning Bag-of-words Document Representation with Multi-queries Memory Networks)
Advisor: 鄭卜壬 (Pu-Jen Cheng)
Committee members: 陳柏琳 (Berlin Chen), 李宏毅 (Hung-Yi Lee), 林守德 (Shou-De Lin)
Date of oral defense: 2018-07-13
Degree: Master's
Institution: National Taiwan University
Department: Graduate Institute of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2018
Academic year of graduation: 106
Language: English
Number of pages: 37
Keywords (Chinese): document representation, attention mechanism, memory networks, predictive model, unsupervised learning
Usage statistics:
  • Cited by: 0
  • Views: 141
  • Rating: (none)
  • Downloads: 0
  • Bookmarked: 0
Abstract (Chinese): Document representations provide effective, statistics-based information and are widely used in text applications such as web search, question answering, and document similarity. Most existing methods take term frequencies in a document as features and rely on word embeddings to measure global importance. However, the importance of a word in a document depends not only on its frequency and global importance but also on the meaning of the document itself. In this thesis, we propose an attention-based unsupervised predictive model to weigh the importance of each word in a document. Moreover, since a document can be interpreted in multiple ways, we use multi-queries memory networks to extract document semantics from different views and a recurrent gating mechanism to aggregate them. Finally, we evaluate our model on two public datasets, and the experimental results show that our method significantly outperforms the state-of-the-art related methods.
Abstract (English): Document representations provide essential, statistically compressed features for many tasks in the text domain, e.g., web search, question answering, document similarity, and relevance judgement. Current methods use term frequencies as local features and rely on word embeddings to measure global importance. However, the importance of words in a document may depend on the meaning of the document and cannot be measured globally. In this work, we propose an attention-based unsupervised predictive model to reweight the importance of words in a document. Moreover, considering the multiple interpretations of a single document, we use multi-queries memory networks to extract the semantics from different views and a recurrent gating method to combine them. The experimental results show that our proposed model outperforms state-of-the-art works on two benchmark datasets.
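The following is a minimal, illustrative sketch of the idea described in the abstract: several learned query vectors attend over the words of a bag-of-words document, each producing its own word-importance weights and its own summary of the document, and the per-query summaries are then aggregated into a single document vector. Everything here (dimensions, parameter names, the use of plain NumPy, and the simple mean aggregation standing in for the thesis's multi-hop specialization and recurrent gating) is an assumption for illustration, not the author's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
vocab_size, embed_dim, num_queries = 1000, 64, 4

# Hypothetical parameters: word embeddings and one query vector per "view".
word_embeddings = rng.normal(size=(vocab_size, embed_dim))
queries = rng.normal(size=(num_queries, embed_dim))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def encode(word_ids):
    """Encode a bag-of-words document as attention-weighted sums of its word embeddings."""
    E = word_embeddings[word_ids]        # (n_words, embed_dim)
    scores = queries @ E.T               # (num_queries, n_words)
    attn = softmax(scores, axis=-1)      # per-query word-importance weights
    views = attn @ E                     # (num_queries, embed_dim): one summary per query
    # Mean aggregation is a stand-in for the recurrent gating described in the thesis.
    return views.mean(axis=0), attn

doc = rng.integers(0, vocab_size, size=12)   # a toy document of 12 word ids
vector, weights = encode(doc)
print(vector.shape, weights.shape)           # (64,) (4, 12)
```

The per-query attention weights are where term importance can be read off; in the thesis these weights are refined over multiple hops and the per-query views are combined with a gated, recurrent aggregation rather than a simple mean.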
Table of Contents:
Abstract (Chinese) ..... i
Abstract ..... ii
Contents ..... iii
List of Figures ..... v
List of Tables ..... vi
1 Introduction ..... 1
2 Related Works ..... 4
2.1 Unsupervised Document Representation Learning ..... 4
2.1.1 Object Types ..... 4
2.1.2 Link Types ..... 5
2.2 Document Autoregressive Distribution Estimator (DocNADE) ..... 6
3 Problem Formulation ..... 9
3.1 Motivation ..... 9
3.2 Assumptions ..... 10
3.2.1 Mixed Semantics ..... 10
3.2.2 Mutual Specialization ..... 11
3.3 Problem Formulation ..... 11
3.3.1 Formulation ..... 11
4 Methodology ..... 13
4.1 Memory Networks for Question Answering Tasks ..... 13
4.2 Multi-queries Bag-of-words Encoder ..... 14
4.2.1 Query as Feature Extractor ..... 14
4.2.2 Information Pieces Encoding ..... 15
4.2.3 Mutual Specialization by Hops ..... 15
4.2.4 Semantics Aggregation ..... 16
4.3 Decoder ..... 17
4.4 Illustration ..... 17
5 Experiments ..... 19
5.1 Datasets and Preprocessing ..... 19
5.2 Baseline Methods ..... 20
5.3 Implementation Details ..... 20
5.4 Document Retrieval ..... 21
5.5 Perplexity ..... 22
6 Qualitative Analysis ..... 25
6.1 Relation between Hops and Attention Entropy ..... 25
6.2 Compare Number of Queries ..... 25
6.3 Compare Number of Hops ..... 26
6.4 Case Study: Co-occurring Term Weighting ..... 26
6.5 Case Study: Multi-queries Weighting ..... 29
6.6 Relation between Term Weights and TF-IDF Weights ..... 29
6.7 Relation between Term Weights and Supervised Information ..... 31
6.8 Word-word Link vs. Doc-word Link ..... 33
7 Conclusions and Future Works ..... 35
7.1 Conclusions ..... 35
7.2 Future Works ..... 35
Bibliography ..... 36