National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Student: 吳肇中 (Chao-Chung Wu)
Title: An Attention Based Neural Network Model for Unsupervised Lyrics Rewriting (注意力模型類神經網路在無監督式學習下的自動歌詞改編生成)
Advisor: 林守德 (Shou-De Lin)
Committee: 林軒田 (Hsuan-Tien Lin), 鄭卜壬 (Pu-Jen Cheng), 陳縕儂 (Yun-Nung Chen), 李宏毅 (Hung-Yi Lee)
Oral defense date: 2018-07-17
Degree: Master's
Institution: National Taiwan University
Department: Graduate Institute of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Thesis type: Academic thesis
Year of publication: 2018
Academic year of graduation: 106 (AY 2017/2018)
Language: English
Pages: 34
Keywords: natural language generation, machine learning, creative writing
Cited by: 1 / Views: 197 / Downloads: 0
The main goal of this work is to rewrite the next line of a song's original lyrics in an unsupervised fashion, using a multi-encoder model architecture combined with language-model training, while preserving the formatting constraints of the output. The study has two parts. The first examines the quality of the rewritten lyrics: under both automatic evaluation and human annotation, the rewrites are comparable to, and sometimes better than, the original lyrics in central theme, rhyme, and singability, and the automatic evaluation also shows high rhyming accuracy. The second part, built on the model from the first, observes how the model learns to rewrite with the correct rhyme, part of speech (POS), and sentiment. We examine the transfer results when the sentiment and rhyme rewriting conditions are changed, the attention distribution as training epochs progress, and the rhyme and POS accuracy of the top 10 entries of the normalized (softmax) distribution at each prediction step.
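The top-10 softmax accuracy described above can be sketched as follows. This is an illustrative assumption, not the thesis's code: the function names (`softmax`, `top_k_hit`, `top_k_accuracy`) and the plain-list logits are invented for the example.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_hit(logits, target_index, k=10):
    # True if the target token is among the k highest-probability entries.
    probs = softmax(logits)
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return target_index in ranked[:k]

def top_k_accuracy(batch_logits, targets, k=10):
    # Fraction of prediction steps whose gold token falls in the softmax top k.
    hits = sum(top_k_hit(l, t, k) for l, t in zip(batch_logits, targets))
    return hits / len(targets)
```

For example, with logits `[3.0, 1.0, 2.0, 0.0]` the ranking by probability is indices 0, 2, 1, 3, so target index 0 is a top-1 hit while target index 3 misses even at k=3.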
Creative writing has become a standard task for showcasing the power of artificial intelligence. This work tackles a challenging task in this area: lyrics rewriting. The task poses several unique challenges. First, the outputs must be not only semantically correlated with the original lyrics but also consistent with them in segmentation structure and rhyme, since the rewritten lyrics must be performed by the artist to the same music. Second, no parallel corpus of rewritten lyrics is available for supervised training. We propose a deep neural network based model for this task and use both general evaluation metrics such as ROUGE and a human study to evaluate its effectiveness.
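The abstract names ROUGE among the automatic metrics. As a minimal sketch under assumptions (unigram recall only; the thesis may use other ROUGE variants and a different tokenization), ROUGE-1 recall can be computed like this:

```python
from collections import Counter

def rouge_1_recall(reference_tokens, candidate_tokens):
    # ROUGE-1 recall: fraction of reference unigrams also found in the
    # candidate, with clipped (multiset) counting so repeated tokens in the
    # candidate cannot be credited more times than they occur in the reference.
    ref_counts = Counter(reference_tokens)
    cand_counts = Counter(candidate_tokens)
    if not ref_counts:
        return 0.0
    overlap = sum(min(count, cand_counts[tok]) for tok, count in ref_counts.items())
    return overlap / sum(ref_counts.values())
```

For instance, `rouge_1_recall("the cat sat".split(), "the cat ran".split())` covers 2 of the 3 reference unigrams, giving 2/3.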
Acknowledgements ................................. ii
Abstract (Chinese) ............................... iii
Abstract ......................................... iv
Contents ......................................... v
List of Figures .................................. vii
List of Tables ................................... ix
Chapter 1  Introduction .......................... 1
Chapter 2  Design Rationale ...................... 4
Chapter 3  Training Phase ........................ 7
  3.1  Model Structure ........................... 8
  3.2  Parameter Learning ........................ 10
Chapter 4  Generation and Smoothing .............. 12
Chapter 5  Experiment ............................ 14
  5.1  Feature Extraction ........................ 14
  5.2  Competitors ............................... 15
  5.3  Automatic Evaluation ...................... 15
  5.4  Human Study ............................... 16
Chapter 6  Discussion ............................ 20
  6.1  How to Learn a Specific Rhyme and POS? .... 20
  6.2  Learning to Rhyme from the Start .......... 22
  6.3  How to Learn to Rhyme at the Last POS? .... 23
  6.4  Multi-encoder ............................. 24
  6.5  How Is One Feature Learned by Our Model? .. 25
Chapter 7  Related Work .......................... 28
Chapter 8  Conclusion and Future Work ............ 30
References ....................................... 31
[1] Peter Potash, Alexey Romanov, and Anna Rumshisky. Ghostwriter: Using an LSTM for automatic rap lyric generation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1919–1924. Association for Computational Linguistics, 2015.
[2] Dekai Wu, Karteek Addanki, Markus Saers, and Meriem Beloucif. Learning to freestyle: Hip hop challenge-response induction via transduction rule segmentation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 102–112, Seattle, Washington, USA, October 2013. Association for Computational Linguistics.
[3] Kento Watanabe, Yuichiroh Matsubayashi, Kentaro Inui, and Masataka Goto. Modeling structural topic transitions for automatic lyrics generation. In Proceedings of the 28th Pacific Asia Conference on Language, Information, and Computation, pages 422–431, Phuket, Thailand, December 2014. Department of Linguistics, Chulalongkorn University.
[4] Ananth Ramakrishnan A., Sankar Kuppan, and Sobha Lalitha Devi. Automatic generation of Tamil lyrics for melodies. In Proceedings of the Workshop on Computational Approaches to Linguistic Creativity, pages 40–46, Boulder, Colorado, June 2009. Association for Computational Linguistics.
[5] Jack Hopkins and Douwe Kiela. Automatically generating rhythmic verse with neural networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 168–178. Association for Computational Linguistics, 2017.
[6] Hugo Gonçalo Oliveira. PoeTryMe: a versatile platform for poetry generation. Computational Creativity, Concept Invention, and General Intelligence, 1:21, 2012.
[7] Hugo Gonçalo Oliveira. Tra-la-Lyrics 2.0: Automatic generation of song lyrics on a semantic domain. Journal of Artificial General Intelligence, 6:87–110, December 2015.
[8] Jiang He, Long Jiang, and Zhou Ming. Generating Chinese couplets using a statistical MT approach. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, COLING '08, pages 377–384, Stroudsburg, PA, USA, 2008. Association for Computational Linguistics.
[9] Marjan Ghazvininejad, Xing Shi, Jay Priyadarshi, and Kevin Knight. Hafez: an interactive poetry generation system. In Proceedings of ACL 2017, System Demonstrations, pages 43–48. Association for Computational Linguistics, 2017.
[10] Xingxing Zhang and Mirella Lapata. Chinese poetry generation with recurrent neural networks, pages 670–680. Association for Computational Linguistics, October 2014.
[11] Aaditya Prakash, Sadid A. Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. Neural paraphrase generation with stacked residual LSTM networks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2923–2934, Osaka, Japan, December 2016. The COLING 2016 Organizing Committee.
[12] Ziqiang Cao, Chuwei Luo, Wenjie Li, and Sujian Li. Joint copying and restricted generation for paraphrase. CoRR, abs/1611.09235, 2016.
[13] Erik Cambria, Soujanya Poria, Devamanyu Hazarika, and Kenneth Kwok. SenticNet 5: discovering conceptual primitives for sentiment analysis by means of context embeddings. In AAAI, 2018.
[14] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.
[15] Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task sequence to sequence learning. CoRR, abs/1511.06114, 2015.
[16] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. ArXiv e-prints, December 2014.
[17] Kyunghyun Cho, Bart van Merriënboer, Çağlar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734, Doha, Qatar, October 2014. Association for Computational Linguistics.
[18] Tomas Mikolov, Martin Karafiát, Lukás Burget, Jan Cernocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, 2010.
[19] Chris Hokamp and Qun Liu. Lexically constrained decoding for sequence generation using grid beam search. arXiv preprint arXiv:1704.07138, 2017.
[20] Franz Josef Och and Hermann Ney. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417–449, 2004.
[21] Alex Graves. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711, 2012.
[22] Nicolas Boulanger-Lewandowski, Yoshua Bengio, and Pascal Vincent. Audio chord recognition with recurrent neural networks. In ISMIR, pages 335–340. Citeseer, 2013.
[23] Xing Wu, Zhikang Du, Mingyu Zhong, Shuji Dai, and Yazhou Liu. Chinese lyrics generation using long short-term memory neural network. In Salem Benferhat, Karim Tabia, and Moonis Ali, editors, Advances in Artificial Intelligence: From Theory to Practice, pages 419–427, Cham, 2017. Springer International Publishing.
[24] Eric Malmi, Pyry Takala, Hannu Toivonen, Tapani Raiko, and Aristides Gionis. DopeLearning: A computational approach to rap lyrics generation. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 195–204, New York, NY, USA, 2016. ACM.
[25] Ananth Ramakrishnan A and Sobha Lalitha Devi. An alternate approach towards meaningful lyric generation in Tamil. In Proceedings of the NAACL HLT 2010 Second Workshop on Computational Approaches to Linguistic Creativity, CALC '10, pages 31–39, Stroudsburg, PA, USA, 2010. Association for Computational Linguistics.
[26] Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. Paraphrase generation with deep reinforcement learning. CoRR, abs/1711.00279, 2017.
[27] Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. A deep generative framework for paraphrase generation. CoRR, abs/1709.05074, 2017.