臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

詳目顯示 (Detailed Record)

Author (研究生): 郭易倫 (GUO, YI-LUN)
Title (論文名稱): 基於生成對抗網路之音樂創作系統 (Music Creation Based on Generative Adversarial Network)
Advisor (指導教授): 朱元三 (CHU, YUAN-SUN)
Committee (口試委員): 朱元三 (CHU, YUAN-SUN), 劉宗憲 (LIU, TSUNG-HSIEN), 黃穎聰 (HWANG, YIN-TSUNG), 許明華 (SHEU, MING-HUA)
Oral Defense Date (口試日期): 2021-01-18
Degree (學位類別): Master's (碩士)
Institution (校院名稱): 國立中正大學 (National Chung Cheng University)
Department (系所名稱): 電機工程研究所 (Graduate Institute of Electrical Engineering)
Discipline (學門): Engineering (工程學門)
Field (學類): Electrical and Information Engineering (電資工程學類)
Thesis Type (論文種類): Academic thesis (學術論文)
Publication Year (論文出版年): 2021
Graduation Academic Year (畢業學年度): 109
Language (語文別): Chinese (中文)
Pages (論文頁數): 66
Chinese Keywords (中文關鍵詞): 機器學習, 深度學習, 生成對抗網路, 音樂創作, 音樂風格
English Keywords (外文關鍵詞): Machine Learning, Deep Learning, Generative Adversarial Network, Music Creation, Music Style
Usage Statistics (相關次數):
  • Cited by (被引用): 1
  • Views (點閱): 290
  • Ratings (評分): none
  • Downloads (下載): 1
  • Bookmarked (書目收藏): 0
Abstract (translated from the Chinese 摘要):
In recent years, mobile devices and online media have developed rapidly, and the demand for music, such as personal music and video soundtracks, has grown accordingly. Music has become an indispensable element of daily life, but creating it typically carries high costs and high barriers to entry, such as time, personnel, and knowledge of music theory. A finished work may also raise intellectual-property issues. This thesis therefore proposes a music creation system that lets music lovers produce music quickly, at low cost and with a low barrier to entry. The generated music is free of intellectual-property concerns, exhibits distinguishable styles, and is acceptable to listeners.
The music creation system in this thesis comprises three parts: data pre-processing, a generative adversarial network model, and data post-processing. The pre-processing stage converts audio files into notation images, which are stored as the training dataset. During training, style information is injected so that the network ultimately generates notation images of the corresponding style. The post-processing stage converts these style-specific notation images back into audio files. In the evaluation stage, subjects rated the system's output from 1 to 5 on two indicators: style relevance and acceptability. The results show scores above 3 on both indicators, indicating that the generated music carries stylistic features and is acceptable to listeners.

In recent years, mobile devices and Internet media have developed rapidly, so the demand for music, such as personal music and film soundtracks, is also increasing. Music has become an indispensable element of daily life, but the process of creating music often involves high costs and high barriers to entry, such as time, personnel, and knowledge of music theory. A finished work may also face intellectual-property issues. This thesis therefore proposes a music creation system that enables music lovers to produce music quickly, at low cost and with a low barrier to entry. The produced music raises no intellectual-property concerns, differs between styles, and is acceptable to listeners.
The music creation system in this thesis consists of three parts: data pre-processing, a generative adversarial network model, and data post-processing. The pre-processing stage converts audio files into notation images and saves them as the training dataset. During training, style information is added so that the model produces notation images corresponding to each style. The post-processing stage converts these style-specific notation images back into audio files. In the evaluation stage, subjects were asked to score the music produced by the system from 1 to 5 on two indicators: style relevance and acceptability. The scoring results show 3 points or more on both indicators, meaning that the music produced by the system exhibits stylistic features and is acceptable.
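The style conditioning described above (injecting style information during training) follows the conditional-GAN idea of [24]: the generator receives a style label alongside the noise vector, so a single network can produce notation images for several musical styles. A minimal sketch of the conditioning step, with illustrative dimensions and a one-layer stand-in for the real generator (all names and sizes here are assumptions, not taken from the thesis):

```python
import numpy as np

NOISE_DIM = 100       # latent noise dimension (illustrative)
NUM_STYLES = 4        # number of target music styles (illustrative)
IMG_PIXELS = 28 * 28  # flattened notation-image size (illustrative)

rng = np.random.default_rng(0)

# A single weight matrix stands in for the real deep generator network.
W = rng.standard_normal((NOISE_DIM + NUM_STYLES, IMG_PIXELS)) * 0.01

def generate(style_id: int) -> np.ndarray:
    """Sample one fake notation image conditioned on a style label."""
    z = rng.standard_normal(NOISE_DIM)   # random noise vector
    y = np.eye(NUM_STYLES)[style_id]     # one-hot style label
    h = np.concatenate([z, y]) @ W       # condition by concatenation
    return np.tanh(h)                    # pixel values squashed to [-1, 1]

fake = generate(style_id=2)
print(fake.shape)  # (784,)
```

In the full system, both the generator and the discriminator would receive the style label, so the discriminator learns to reject images whose content does not match the claimed style.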

Acknowledgements (誌謝) iii
Abstract in Chinese (摘要) iv
Abstract iv
Table of Contents vi
List of Figures ix
List of Tables xi
Chapter 1  Introduction 1
1.1 Motivation 1
1.2 Objectives 1
1.3 Thesis Organization 2
Chapter 2  Background 3
2.1 Machine Learning 3
2.1.1 Perceptrons 3
2.1.2 Activation Functions 6
2.1.3 Loss Functions 8
2.1.4 Gradient Descent 9
2.2 Convolutional Neural Networks (CNN) 10
2.2.1 Convolution Layer 10
2.2.2 Pooling Layer 12
2.2.3 Fully-Connected Layer 13
2.3 Generative Adversarial Network (GAN) 13
2.3.1 GAN Concept 13
2.3.2 GAN Algorithm 14
2.4 Basic Elements of Music 15
2.4.1 Pitch 15
2.4.2 Timbre 16
2.4.3 Notes 16
2.4.4 Rhythm 16
2.4.5 Harmony and Melody 18
2.5 Musical Instrument Digital Interface (MIDI) 18
Chapter 3  Related Work 20
3.1 Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks [20] 20
3.1.1 Rules for Stabilizing DCGAN 20
3.1.2 Network Architecture and Training Details 21
3.1.3 Experimental Results 22
3.2 Conditional Generative Adversarial Nets [24] 23
3.2.1 Network Architecture 23
3.2.2 Training Details and Experimental Results 24
3.3 Wasserstein GAN [21] 25
3.4 Music Generation with Deep Learning [22] 27
3.5 Music Generation Using Generative Adversarial Networks [23] 28
Chapter 4  System Implementation and Analysis 29
4.1 System Architecture 29
4.2 Data Collection 30
4.3 Data Pre-processing 30
4.4 Generative Adversarial Network 32
4.4.1 Vanishing Gradient 33
4.4.2 Mode Collapse 33
4.4.3 Network Architecture 35
4.5 Data Post-processing 38
Chapter 5  Experimental Results 40
5.1 Experimental Environment 40
5.2 Training Parameters 41
5.3 Loss 41
5.4 Survey of Experimental Results 42
5.4.1 Evaluation Method 42
5.4.2 Evaluation Results 43
5.4.3 Evaluation Summary 51
Chapter 6  Conclusion and Future Work 52
6.1 Conclusion 52
6.2 Future Work 52
References 53
[1] "Introduction to Neural Networks," Nov 6, 2018. https://medium.com/datadriveninvestor/introduction-to-neural-networks-a0fe9ec0a947
[2] 黃柏源, "基於卷積神經網路之調變分類技術研究" (Modulation Classification Based on Convolutional Neural Networks), Graduate Institute of Communication Engineering, National Central University, 2018.
[3] Hsuan-Tien Lin, "Machine Learning Foundations," 2019. https://www.csie.ntu.edu.tw/~htlin/course/mlfound19fall/
[4] "Deep Learning via Multilayer Perceptron Classifier." https://laptrinhx.com/deep-learning-via-multilayer-perceptron-classifier-2095018276/
[5] "How Does Convolutional Neural Network Work," December 2, 2019. https://mc.ai/how-does-convolutional-neural-network-work/
[6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative Adversarial Nets," NIPS, 2014.
[7] V. Dumoulin and F. Visin, "A Guide to Convolution Arithmetic for Deep Learning," MILA, Université de Montréal, January 12, 2018.
[8] "Deep Convolutional Generative Adversarial Networks (DCGANs)." https://medium.com/datadriveninvestor/deep-convolutional-generative-adversarial-networks-dcgans-3176238b5a3d
[9] Jeremy Jordan, "Setting the Learning Rate of Your Neural Network." https://www.jeremyjordan.me/nn-learning-rate/
[10] Encyclopaedia Britannica Online. https://www.britannica.com/
[11] Neil Nguyen, "How to Read Sheet Music in 1 Day." https://skoolopedia.com/blog/how-to-read-piano-sheet-music-in-1-day/
[12] Western Michigan University, "The Elements of Music" (online). https://wmich.edu/mus-gened/mus170/RockElements.pdf
[13] "Music, Movement & Drama for Kids." http://musicmovementdrama4kids.blogspot.com/2011/01/we-are-learning-about-music-notes-tempo.html
[14] 賴建宇, 基礎樂理(一) (Basic Music Theory, Part 1). http://www.midi.twmail.net/musicclass01.htm
[15] Doug McKenzie Jazz Piano. https://bushgrafts.com/
[16] Music21. http://web.mit.edu/music21/
[17] L. Hiller and L. Isaacson, "Experimental Music: Composition with an Electronic Computer," McGraw-Hill, 1959.
[18] Melodyne 5. https://www.celemony.com/en/melodyne/new-in-melodyne-5
[19] Hung-Yi Lee, YouTube channel. https://www.youtube.com/channel/UC2ggjtuuWvxrHHHiaDH1dlQ
[20] A. Radford, L. Metz, and S. Chintala, "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks," ICLR, 2016.
[21] M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein GAN," ICML, 2017.
[22] V. Kalingeri and S. Grandhe, "Music Generation with Deep Learning," arXiv preprint arXiv:1612.04928, 2016.
[23] D. de Almeida and M. Pinho, "Music Generation Using Generative Adversarial Networks," Técnico Lisboa, Universidade de Lisboa, 2018.
[24] M. Mirza and S. Osindero, "Conditional Generative Adversarial Nets," arXiv preprint arXiv:1411.1784, 2014.
[25] F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser, and J. Xiao, "LSUN: Construction of a Large-Scale Image Dataset Using Deep Learning with Humans in the Loop," arXiv preprint arXiv:1506.03365, 2015.
[26] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, "Improved Techniques for Training GANs," NIPS, 2016.
[27] "Wasserstein GAN and Kantorovich-Rubinstein Duality." https://vincentherrmann.github.io/blog/wasserstein/

電子全文 Electronic full text (Internet release date: 2026-01-28)