Author: Arthur Amalvy
Title: Natural Language Processing applied to Interactive Character Relationships Visualization in Novels
Advisors: Tzong-Han Tsai, Frédéric Lassabe
Degree: Master's
Institution: National Central University
Department: Department of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Year of Publication: 2020
Graduation Academic Year: 108 (2019–2020)
Language: English
Pages: 69
Keywords (Chinese, translated): Natural Language Processing, Digital Humanities, Character Knowledge Graphs
Keywords (English): Natural Language Processing, Digital Humanities, Character Networks, Automatic Quote Attribution
Abstract (translated from the Chinese):

Literary works have had a profound influence on human culture and have therefore been studied extensively throughout history. In these works, the relationships between characters often form the core of the story. The study of character relationships can be treated as the study of their network structure; that is, character relationships can be represented as a graph.

Given the importance of dialogue between characters, a special kind of network can be extracted from dialogue content alone: the conversational network, generated purely from the conversations between characters. Such networks can be analyzed with tools from graph theory and computer science to uncover meaning behind traditional literature.

This work develops a new automatic extraction method for dynamically weighted conversational networks. We first propose a general approach for this type of network and apply it to literary novels. Based on this application, we propose a new automatic quote attribution method. Finally, we develop a simple network visualization tool that enables deeper analysis of the extracted networks.
Abstract (English):

Due to their importance in human culture, literary works have been studied extensively throughout history. In those works, characters and their relationships often play a central role. The study of the structure of those relationships is the study of character networks: a special kind of graph used to represent these structures.

Due to the importance of dialogue between characters, one can extract a specialised kind of network, the conversational network, built using only the dialogues between characters. Using tools from graph theory and other fields of computer science, those networks can be studied to reveal insights unattainable through traditional literary analysis.

This work is dedicated to the automatic extraction of dynamic signed conversational networks. We propose a general method for extracting such networks that can be applied to any type of work, then demonstrate it on novels in particular, which leads us to propose a new technique for automatic utterance attribution. Lastly, we present a simple piece of software for visualizing the extracted networks in order to analyze them.
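The pipeline summarized above (attribute each utterance to a speaker and an addressee, score its sentiment, then accumulate signed edge weights over a sliding window of interactions) can be sketched in a few lines. The sketch below is illustrative only, not the thesis's implementation: the `(speaker, addressee, sentiment)` interaction format and the `window`/`step` parameters are assumptions made for the example.

```python
from collections import defaultdict

def network_snapshot(interactions):
    """Aggregate signed edge weights for one window of interactions.

    Each interaction is (speaker, addressee, sentiment), where sentiment
    is a polarity score in [-1, 1].
    """
    weights = defaultdict(float)
    for speaker, addressee, sentiment in interactions:
        edge = tuple(sorted((speaker, addressee)))  # undirected edge
        weights[edge] += sentiment                  # signed accumulation
    return dict(weights)

def dynamic_network(interactions, window=3, step=1):
    """One network snapshot per sliding-window position over the story."""
    return [
        network_snapshot(interactions[i:i + window])
        for i in range(0, max(1, len(interactions) - window + 1), step)
    ]
```

Each snapshot is a plain `{(character, character): weight}` mapping: a negative weight within a window marks an antagonistic relationship at that point in the story, which is what makes the network "signed", and the sequence of snapshots over successive windows is what makes it "dynamic".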
Dedication iv

Acknowledgments v

Contents vi

List of Figures ix

List of Tables xi

1 Introduction 1
1.1 Character Networks 1
1.2 Natural Language Processing 3

2 Natural Language Processing Background 4
2.1 Natural Language Processing in a Nutshell 4
2.2 Deep Learning and Natural Language Processing 5
2.3 Recurrent Neural Networks 6
2.4 Attention Mechanisms 8
2.5 Transformers 9
2.6 Transfer Learning 12

3 Conversational Networks Extraction Framework 14
3.1 Definitions 15
3.1.1 Relationship Polarity Hypothesis 15
3.1.2 Interactions 15
3.1.3 Relationships 16
3.1.3.1 General Intuition 16
3.1.3.2 Sliding Window 17
3.2 Example Application: Screenplays 17
3.2.1 Screenplays 19
3.2.2 Interactions 19
3.2.3 Addressee Identification 19
3.2.4 Sentiment Analysis 20

4 Conversational Networks Extraction Applied to Novels 21
4.1 Interactions 21
4.2 Speaker Attribution 22
4.2.1 Previous Works 22
4.2.2 Dataset 23
4.2.3 Model 24
4.2.3.1 Quote Representation 25
4.2.3.2 Candidate Speaker Representation 26
4.2.3.3 Scoring of a Quote / Candidate Speaker Pair 26
4.2.4 Experiments 27
4.2.4.1 SpanBERT 27
4.2.5 Results 27
4.2.6 Attention Visualization 29
4.3 Addressee Attribution 30
4.3.1 Previous Works 32
4.3.2 Model 32
4.3.3 Experiments 33
4.3.4 Results 33
4.4 Sentiment Analysis 35
4.5 Extracted Networks 35

5 Interactive Visualization 39
5.1 Standardized Input Format 39
5.2 Force-based Graph 40
5.3 Analysis Tools 40
5.3.1 Rendering Loop 40
5.3.2 Filters 41
5.3.2.1 Importance Filters 41
5.3.2.2 Custom Filter 42
5.3.3 Lenses 42
5.3.3.1 Spatial Clustering 42
5.3.3.2 Centrality Lenses 43

6 Conclusion and Future Work 45

Bibliography 47

A Visualization Software Input Format 53
[1] Noam Chomsky. Syntactic Structures. Mouton Publishers, The Hague, Paris, 1957.
[2] Noam Chomsky. Aspects of the Theory of Syntax. MIT Press, 1965.
[3] Alex Woloch. The One vs. the Many: Minor Characters and the Space of the Protagonist in the Novel. 2003.
[4] Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Sofia, Bulgaria, August 2013. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P13-1129.
[5] Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), April 2017.
[6] Apoorv Agarwal, Sriramkumar Balasubramanian, Jiehan Zheng, and Sarthak Dash. Parsing screenplays for extracting social networks from movies. In CLfL@EACL, 2014.
[7] Arthur Amalvy. Visualisation de relations entre personnages à l’aide de techniques de traitement du langage. Technical Report, University of Technology of Belfort-Montbéliard, 2019.
[8] Mathieu Bastian, Sebastien Heymann, and Mathieu Jacomy. Gephi: open source software for exploring and manipulating networks, 2009. URL http://www.aaai.org/ocs/index.php/ICWSM/09/paper/view/154.
[9] Anthony Bonato, David Ryan D’Angelo, Ethan R. Elenberg, David F. Gleich, and Yangyang Hou. Mining and modeling character networks. ArXiv, abs/1608.00646, 2016.
[10] Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. ArXiv, abs/1409.1259, 2014.
[11] C.J. Hutto and Eric Gilbert. VADER: A parsimonious rule-based model for sentiment analysis of social media text. 2014.
[12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019.
[13] Adam Ek, Mats Wirén, Robert Östling, Kristina Nilsson Björkenstam, Gintare Grigonyte, and Sofia Gustafson-Capková. Identifying speakers and addressees in dialogues extracted from literary fiction. In LREC, 2018.
[14] Jeffrey L. Elman. Finding structure in time. Cognitive Science, 14(2):179–211, 1990. doi: 10.1207/s15516709cog1402_1.
[15] David Elson and Kathleen McKeown. Automatic attribution of quoted speech in literary narrative. January 2010.
[16] Association for Computational Linguistics, editor. Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, 2012.
[17] Santo Fortunato. Community detection in graphs. ArXiv, abs/0906.0612, 2009.
[18] Sebastian Gil, Laney Kuenzel, and Caroline Suen. Extraction and analysis from plays and movies. Technical report, Stanford University, 2011.
[19] Kevin R. Glass and Shaun Bangay. A naïve, salience-based method for speaker identification in fiction books. 2007.
[20] Hua He, Denilson Barbosa, and Grzegorz Kondrak. Identification of speakers in novels. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) [4], pages 1312–1320. URL https://www.aclweb.org/anthology/P13-1129.
[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
[22] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9:1735–80, 12 1997. doi: 10.1162/neco.1997.9.8.1735.
[23] Sarthak Jain and Byron C. Wallace. Attention is not explanation. ArXiv, abs/1902.10186, 2019.
[24] Zhengbao Jiang, Wei Xu, Jun Araki, and Graham Neubig. Generalizing natural language analysis through span-relation representations. ArXiv, abs/1911.03822, 2019.
[25] Sethunya Joseph, Kutlwano Sedimo, Freeson Kaniwa, Hlomani Hlomani, and Keletso Letsholo. Natural language processing: A review. Natural Language Processing: A Review, 6:207–210, 03 2016.
[26] Mandar Joshi, Omer Levy, Daniel S. Weld, and Luke Zettlemoyer. BERT for coreference resolution: Baselines and analysis. In EMNLP/IJCNLP, 2019.
[27] Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77, 2020.
[28] Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning, ICML'15, pages 2342–2350. JMLR.org, 2015.
[29] Vincent Labatut and Xavier Bost. Extraction and analysis of fictional character networks: A survey. ACM Computing Surveys, 2019.
[30] Elizabeth D. Liddy. Natural language processing. 2001.
[31] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692, 2019.
[32] Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60, 2014. URL http://www.aclweb.org/anthology/P/P14/P14-5010.
[33] Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781, 2013.
[34] George A. Miller. WordNet: A lexical database for English. Communications of the ACM, 1995.
[35] Franco Moretti. Network theory, plot analysis. New Left Review, 2011.
[36] Grace Muzny, Michael Fang, Angel X. Chang, and Dan Jurafsky. A two-stage sieve approach for quote attribution. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics [5], pages 460–470.
[37] Timothy O'Keefe, Silvia Pareti, James R. Curran, Irena Koprinska, and Matthew Honnibal. A sequence labelling approach to quote attribution. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning [16], pages 790–799.
[38] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In ICML, 2013.
[39] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. ArXiv, abs/1802.05365, 2018.
[40] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
[41] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
[42] Yannick Rochat. Character Networks and Centrality. PhD thesis, Université de Lausanne, 2014.
[43] Alexander Rush. The annotated transformer. pages 52–60, 01 2018. doi: 10.18653/v1/W18-2509.
[44] Stuart P. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–136, 1982.
[45] Wilson L. Taylor. "cloze procedure": a new tool for measuring readability. Journalism & Mass Communication Quarterly, 30:415–433, 1953.
[46] Hardik Vala, Stefan Dimitrov, David Jurgens, Andrew Piper, and Derek Ruths. Annotating characters in literary corpora: A scheme, the CHARLES tool, and an annotated novel. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 184–189, Portorož, Slovenia, May 2016. European Language Resources Association (ELRA). URL https://www.aclweb.org/anthology/L16-1028.
[47] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. ArXiv, abs/1706.03762, 2017.
[48] Jesse Vig. A multiscale visualization of attention in the transformer model. In ACL, 2019.
[49] Joseph Weizenbaum. ELIZA: a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9:36–45, 1966.
[50] Sarah Wiegreffe and Yuval Pinter. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11–20, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1002. URL https://www.aclweb.org/anthology/D19-1002.
[51] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, March 1994.
[52] Yannick Rochat and Mathieu Triclot. Les réseaux de personnages de science-fiction : échantillons de lectures intermédiaires. ReS Futurae, 2017.
[53] Chak Yan Yeung and John Lee. Identifying speakers and listeners of quoted speech in literary works. In IJCNLP, 2017.
[54] Tom Young, Devamanyu Hazarika, Soujanya Poria, and Erik Cambria. Recent trends in deep learning based natural language processing. IEEE Computational Intelligence Magazine, 13:55–75, 2018.