Author: 許丞愷
Author (English): Hsu, Cheng-Kai
Title: 基於不同曲風訓練資料之音樂情緒分類與演繹系統比較及應用於聲景情緒辨識與混音分析
Title (English): Comparison of Music Emotion Classification and Interpretation System Based on Different Genre of Training Data and Applied to Soundscape Emotion Recognition and Mixing Audio Analysis
Advisor: 鄭泗東
Advisor (English): Cheng, Stone
Committee members: 黃志方、鄭文雅、鄭泗東
Committee members (English): Huang, Chih-Fang; Jang, Wen-Yea; Cheng, Stone
Degree: Master's
Institution: 國立交通大學 (National Chiao Tung University)
Department: Master's Program of Sound and Music Innovative Technologies, College of Engineering
Discipline: Engineering
Field: Other Engineering
Thesis type: Academic thesis
Year of publication: 2017
Graduation academic year: 106 (2017-2018)
Language: Chinese
Number of pages: 150
Keywords (Chinese): 音樂情緒辨識、高斯混合模型、聲景
Keywords (English): Music Emotion Recognition; Gaussian Mixture Model (GMM); Soundscape
This study combines a categorical emotion taxonomy with a two-dimensional emotion plane as its emotion recognition model and, together with machine learning techniques and music signal processing, builds a real-time music emotion trajectory tracking system that classifies the emotional components evoked by music signals and visualizes the trajectory of emotional change during a performance on the emotion plane. The system is also used to analyze the human emotional responses evoked by soundscapes, to design mixed audio tracks, and to analyze the emotion trajectories after mixing. For the experiments, two training sets were collected: 192 classical music excerpts and 192 popular music excerpts, each pre-labeled with one of the emotions "Pleasant", "Solemn", "Agitated", or "Exuberant". Volume, onset density, mode, dissonance, and timbre were extracted to represent each excerpt, and the correlation between these audio features and emotion recognition was computed. After an emotion score calculation procedure, a Gaussian mixture model (GMM) was used as the classifier to demarcate the boundaries of the four emotion categories, yielding a graphical emotion recognition interface that tracks changes in music-evoked human emotion. The experimental results confirm that different training data lead to different boundaries on the two emotion recognition planes. A soundscape is the auditory environment of the places where people carry out their daily activities, and it affects both emotional state and quality of life. Focusing on the commercial purposes of various venues and the need to create an auditory atmosphere, this study provides a basis for environmental sound design grounded in emotion recognition and psychoacoustics. By applying the music emotion recognition system to soundscape emotion analysis, it evaluates how the emotion trajectory of a soundscape recording changes after it is mixed with music signals, simulating how playing background music in a real venue can exploit the emotional characteristics of music to help people change their emotional state and mood, and in turn influence their commercial behavior and decision-making.
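The abstract above names five audio descriptors (volume, onset density, mode, dissonance, and timbre) that characterize each training excerpt. As an illustration only, and not the thesis's actual implementation, the Python sketch below shows one way such per-clip features could be computed with librosa; the key-profile values, the spectral-flatness stand-in for dissonance, and the function name are assumptions.

import numpy as np
import librosa

# Krumhansl-Kessler key profiles, used here only as a rough major/minor template.
MAJOR_PROFILE = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                          2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR_PROFILE = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                          2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

def extract_features(path, sr=22050):
    """Return a rough [volume, onset density, mode, dissonance, timbre] vector."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    duration = len(y) / sr

    # Volume: mean RMS energy of the clip.
    volume = float(np.mean(librosa.feature.rms(y=y)))

    # Onset density: detected musical events per second.
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units='time')
    onset_density = len(onsets) / duration

    # Mode: correlate the averaged chroma vector (a pitch class profile) with the
    # major/minor templates over all 12 rotations; positive values lean major.
    chroma = np.mean(librosa.feature.chroma_stft(y=y, sr=sr), axis=1)
    major = max(np.corrcoef(np.roll(MAJOR_PROFILE, k), chroma)[0, 1] for k in range(12))
    minor = max(np.corrcoef(np.roll(MINOR_PROFILE, k), chroma)[0, 1] for k in range(12))
    mode = major - minor

    # Dissonance: crude proxy only -- spectral flatness (noisier spectra score higher).
    dissonance = float(np.mean(librosa.feature.spectral_flatness(y=y)))

    # Timbre: mean spectral centroid as a simple brightness descriptor.
    timbre = float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr)))

    return np.array([volume, onset_density, mode, dissonance, timbre])

In the thesis itself the features are computed with the algorithms of Sections 4.4.1-4.4.5 and then pass through the emotion score calculation of Section 4.5; this sketch only illustrates the kind of per-clip vector that stage would consume.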
This study presents an approach to analyzing the emotional content inherent in polyphonic music signals and applies it to soundscape emotion analysis. The proposed real-time music emotion trajectory tracking systems are built from machine learning techniques, music signal processing, and an emotion recognition model that integrates a two-dimensional emotion plane with a categorical taxonomy. Two sets of training data are collected, one consisting of popular music and the other of Western classical music, each containing 192 emotion-predefined music clips. Volume, onset density, mode, dissonance, and timbre are extracted to characterize each music excerpt. After an emotion score calculation process, a Gaussian mixture model (GMM) is used to demarcate the boundaries between four emotion states. A graphical interface with a mood locus on the emotion plane is established to trace changes in music-evoked human emotions. Experimental results verify that different sets of training data lead to different boundaries in the two emotion recognition models. A soundscape is the auditory environment of human daily activities, and it affects people's emotional states and quality of life. This study proposes an approach to environmental sound design based on emotion recognition and psychoacoustics, focusing especially on the needs of various venues for commercial purposes or for creating an auditory atmosphere. The soundscape study is conducted by evaluating the emotion locus variation of selected urban soundscape recordings blended with music signals. The simulation of playing background music in an authentic venue exploits the emotional characteristics of music to help people alter their emotional state and state of mind, and in turn affect human behavior and decision-making.
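The classification step described in both abstracts, in which a Gaussian mixture model demarcates the boundaries between the four emotion states, could be prototyped roughly as below. This is a minimal sketch under assumed data shapes: it fits one scikit-learn GaussianMixture per labeled class and assigns a new excerpt to the class with the highest log-likelihood. The component count, the random stand-in training arrays, and the function names are illustrative, not the author's configuration.

import numpy as np
from sklearn.mixture import GaussianMixture

# The four predefined emotion labels named in the abstract.
EMOTIONS = ["Pleasant", "Solemn", "Agitated", "Exuberant"]

def train_emotion_gmms(features_by_class, n_components=2, seed=0):
    """Fit one GMM per emotion label; features_by_class maps label -> (n, d) array."""
    models = {}
    for label in EMOTIONS:
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type='full', random_state=seed)
        gmm.fit(features_by_class[label])
        models[label] = gmm
    return models

def classify(models, x):
    """Return the label whose GMM gives the feature vector x the highest log-likelihood."""
    scores = {label: gmm.score_samples(x.reshape(1, -1))[0]
              for label, gmm in models.items()}
    return max(scores, key=scores.get)

# Illustrative usage with random stand-in data (five features per clip, as in the
# earlier sketch); real training data would be the 192 labeled excerpts per genre.
rng = np.random.default_rng(0)
train = {label: rng.normal(loc=i, scale=1.0, size=(48, 5))
         for i, label in enumerate(EMOTIONS)}
models = train_emotion_gmms(train)
print(classify(models, rng.normal(loc=1.0, size=5)))

Fitting one mixture per class and taking the maximum-likelihood class is one standard way of turning GMMs into a classifier; sliding such a classifier over consecutive analysis windows of a clip would yield the kind of emotion trajectory on the emotion plane that the thesis visualizes.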
Abstract (Chinese)-I
Abstract (English)-II
Acknowledgements-III
Table of Contents-V
List of Tables-VIII
List of Figures-IX
1. Introduction-1
1.1 Research Motivation-1
1.2 Related Research on Music Information-3
1.2.1 Wavelet Transform, Audio Compression, and Instrument Recognition-3
1.2.2 Hidden Markov Models (HMM) and Chord Recognition-5
1.2.3 Soundscape-7
1.2.4 Related Research on Music Emotion Recognition-9
1.3 Literature Review-11
1.4 Research Procedure-15
2. Music and Emotion-17
2.1 Emotion Models-17
2.1.1 Categorical Models-17
2.1.2 Dimensional Models-21
2.1.3 Prototype Models-23
2.2 Musical Features-26
2.2.1 Timbre-26
2.2.2 Volume-27
2.2.3 Tempo-28
2.2.4 Harmony-29
2.2.5 Mode-30
2.2.6 Summary of Musical Features Used in Related Research-30
2.3 Mapping Between Musical Features and Emotion Models-31
2.3.1 Study by Patrik N. Juslin and Renee Timmers-31
2.3.2 Study by Cyril Laurier et al.-32
2.3.3 Study by Emery Schubert-35
3. Theory of Music Signal Analysis-39
3.1 Short-Time Fourier Transform (STFT)-39
3.2 Gaussian Mixture Model (GMM)-42
3.3 Pitch Class Profile (PCP)-46
4. Research Methods-48
4.1 System Architecture-48
4.2 Training Data-49
4.3 Preprocessing-50
4.4 Music Signal Feature Extraction Algorithms-50
4.4.1 Timbre Analysis-50
4.4.2 Volume Calculation-53
4.4.3 Onset Density-55
4.4.4 Dissonance-58
4.4.5 Mode Detection-59
4.5 Emotion Score Calculation-62
4.6 Analysis of Training Results-65
4.7 Music Emotion Recognition System-73
4.7.1 Demonstration of the Classical Music Emotion Trajectory Tracking System-73
4.7.2 Performance Evaluation of the Classical Music Emotion Trajectory Tracking System-78
4.7.3 Demonstration of the Popular Music Emotion Trajectory Tracking System-88
4.7.4 Performance Evaluation of the Popular Music Emotion Trajectory Tracking System-92
4.8 Emotion Analysis of Human Speech-99
4.8.1 Emotion Analysis of Broadcasts by North Korean News Anchor Ri Chun-hee (李春姬)-99
4.8.2 Emotion Analysis of Broadcasts by Taiwanese News Anchor 葉佳蓉-102
4.8.3 Emotion Analysis of Broadcasts by Taiwanese Sports Anchor 陳宏宜-105
5. Emotion Analysis of Soundscape Fields-108
5.1 Research Methods and Objectives-108
5.2 Experimental Equipment and Software-110
5.2.1 Hardware-110
5.2.2 Audio Editing Software-113
5.3 Soundscape Recording and Emotion Analysis-114
5.3.1 Emotion Analysis of a Restaurant Soundscape-114
5.3.2 Emotion Analysis of a Retail Store Soundscape-123
6. Conclusion and Future Work-134
7. References-136
8. Appendices-141
Appendix A – Details of Music Feature Extraction in Reference [7]-141
Appendix B – Details of Music Feature Extraction in Reference [8]-142
Appendix C – Details of the Classical Music Training Data-143
[1] A. Grossmann, J. Morlet, “Decomposition of Hardy Functions into Square Integrable Wavelets of Constant Shape”, SIAM Journal on Mathematical Analysis, Vol. 15, No. 4, pp. 723-736, 1984.
[2] 黃百祥、李建興, “Musical Instrument Recognition Using Wavelet Transform Features” (in Chinese), Journal of Information Technology and Applications, Vol. 8, No. 1, pp. 1-9, 2014.
[3] M. Müller, Fundamentals of Music Processing: Audio, Analysis, Algorithms, Applications, Springer Publishing Company, 2015.
[4] M. Southworth, “The Sonic Environment of Cities”, Environment and Behavior, Vol. 1, Issue. 1, pp. 49-70, June 1969.
[5] R. M. Schafer, The Tuning of the World, Alfred A. Knopf, June 1977.
[6] R. M. Schafer, The Soundscape: Our Sonic Environment and the Tuning of the World, Inner Traditions/Bear & Co, 1993.
[7] Y. H. Yang, H. H. Chen, Music Emotion Recognition, CRC Press, 2011.
[8] L. Lu, et al., “Automatic Mood Detection and Tracking of Music Audio Signals”, IEEE Transactions on Audio, Speech, and Language Processing, Vol. 14, No. 1, pp. 5-18, January 2006.
[9] Y. H. Yang, et al., “A Regression Approach to Music Emotion Recognition”, IEEE Transactions on Audio, Speech, and Language Processing, Vol. 16, No. 2, pp. 448-457, February 2008.
[10] B. Schuller, et al., “Automatic recognition of emotion evoked by general sound events”, Proceedings of 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2012), pp. 341-344, Kyoto Japan, March 25-30, 2012.
[11] J. C. Wang, et al., “Modeling the Affective Content of Music with a Gaussian Mixture Model”, IEEE Transactions on Affective Computing, Vol. 6, No. 1, pp. 56-68, January-March 2015.
[12] K. Yoon, et al., “Music Recommendation System Using Emotion Triggering Low-level Features”, IEEE Transactions on Consumer Electronics, Vol. 58, No. 2, pp. 612-618, May 2012.
[13] J. Y. Fan, et al., “Automatic Soundscape Affect Recognition Using A Dimensional Approach”, Journal of the Audio Engineering Society, Vol. 64, No. 9, September 2016.
[14] P. Gomez, B. Danuser, “Relationships Between Musical Structure and Psychophysiological Measures of Emotion”, Emotion, Vol. 7, No. 2, pp. 377-387, 2007.
[15] P. N. Juslin, J. A. Sloboda (Eds.), Handbook of Music and Emotion: Theory, Research, Applications, Oxford University Press, 2010.
[16] 郭柏祥, “A Study on the Relationship Between Product Form and Emotion: The Case of Electric Kettles” (in Chinese), Master's thesis, National Yunlin University of Science and Technology, 2006.
[17] A. Ortony, T. J. Turner, “What's basic about basic emotions?”, Psychological Review, Vol. 97, No. 3, pp. 315-331, July 1990.
[18] K. Hevner, “The Affective Character of the Major and Minor Modes in Music”, The American Journal of Psychology, Vol. 47, No. 1, pp. 103-118, January 1935.
[19] K. Hevner, “Experimental studies of the elements of expression in music”, The American Journal of Psychology, Vol. 48, No. 2, pp. 246-268, April 1936.
[20] P. R. Farnsworth, “A Study of the Hevner Adjective List”, The Journal of Aesthetics and Art Criticism, Vol. 13, No. 1, pp. 97-103, September 1954.
[21] J. A. Russell, “A Circumplex Model of Affect”, Journal of Personality and Social Psychology, Vol. 39, No. 6, pp. 1161-1178, January 1980.
[22] D. Watson, A. Tellegen, “Toward a Consensual Structure of Mood”, Psychological Bulletin, Vol. 98, No. 2, pp. 219-235, September 1985.
[23] R. E. Thayer, The Biopsychology of Mood and Arousal, Oxford University Press, 1989.
[24] P. Shaver, et al., “Emotion Knowledge: Further Exploration of a Prototype Approach”, Journal of Personality and Social Psychology, Vol. 52, No. 6, pp. 1061-1086, 1987.
[25] M. Zentner, et al., “Emotions Evoked by the Sound of Music: Characterization, Classification and Measurement”, Emotion, Vol. 8, No. 4, pp. 494-521, August 2008.
[26] C. Laurier, et al., “Exploring Relationships between Audio Features and Emotion in Music”, Proceedings of the 7th Triennial Conference of European Society for the Cognitive Sciences of Music (ESCOM 2009), pp. 260-264, Jyväskylä Finland, August 12-16, 2009.
[27] E. Schubert, “Measurement and Time Series Analysis of Emotion in Music”, Ph.D. dissertation, University of New South Wales, 1999.
[28] 傅俊傑, “Automatic Music Emotion Tracking System with a Time-Varying Emotion Trajectory Interface” (in Chinese), Master's thesis, National Chiao Tung University, 2010.
[29] C. L. Krumhansl, “A Perceptual Analysis of Mozart's Piano Sonata K. 282: Segmentation, Tension, and Musical Ideas”, Music Perception, Vol. 13, No. 3, pp. 401-432, 1996.
[30] J. A. Sloboda, A. C. Lehmann, “Tracking Performance Correlates of Changes in Perceived Intensity of Emotion During Different Interpretations of a Chopin Piano Prelude”, Music Perception, Vol. 19, No. 1, pp. 87-120, 2001.
[31] E. Schubert, W. Dunsmuir, “Regression modelling continuous data in music psychology”, Music, Mind, and Science, pp. 298-352, 1999.
[32] E. Schubert, “Modeling Perceived Emotion with Continuous Musical Features”, Music Perception, Vol. 21, No. 4, pp. 561-585, June 2004.
[33] Official Charts, “Queen’s Bohemian Rhapsody voted the Nation’s Favourite Number 1 Single”, http://www.officialcharts.com/chart-news/queen-s-bohemian-rhapsody-voted-the-nation-s-favourite-number-1-single__2258/, July 2012.
[34] N. D. Cook, Tone of Voice and Mind: The Connections Between Intonation, Emotion, Cognition, and Consciousness, Vol. 47, John Benjamins Publishing, 2002.
[35] D. Grandjean, et al., “Intonation as an interface between language and affect”, Progress in Brain Research, Vol. 156, pp. 235-247, 2006.
[36] G. C. Bruner II, “Music, Mood, and Marketing”, Journal of Marketing, Vol. 54, No. 4, pp. 94-104, October 1990.
[37] R. E. Milliman, “The Influence of Background Music on the Behavior of Restaurant Patrons”, Journal of Consumer Research, Vol. 13, No. 2, pp. 286-289, September 1986.
[38] C. Caldwell, S. A. Hibbert, “The Influence of Music Tempo and Musical Preference on Restaurant Patrons’ Behavior”, Psychology & Marketing, Vol. 19, No. 11, pp. 895-917, November 2002.
[39] A. C. North, D. J. Hargreaves, “The Effect of Music on Atmosphere and Purchase Intentions in a Cafeteria”, Journal of Applied Social Psychology, Vol. 28, Issue. 24, pp. 2254-2273, December 1998.
[40] R. E. Milliman, “Using Background Music to Affect the Behavior of Supermarket Shoppers”, Journal of Marketing, Vol. 46, No. 3, pp. 86-91, 1982.
[41] R. Yalch, E. Spangenberg, “Effects of Store Music on Shopping Behavior”, Journal of Consumer Marketing, Vol. 7, Iss.2, pp. 55-63, 1990.
[42] R. Yalch, E. Spangenberg, “The Effects of Music in a Retail Setting on Real and Perceived Shopping Times”, Journal of Business Research, Vol. 49, Iss. 2, pp. 139-147, August 2000.
[43] L. Dubé, S. Morin, “Background music pleasure and store evaluation: Intensity effects and psychological mechanisms”, Journal of Business Research, Vol. 54, Iss.2, pp. 107-113, November 2001.
[44] E. R. Spangenberg, et al., “It's beginning to smell (and sound) a lot like Christmas: the interactive effects of ambient scent and music in a retail setting”, Journal of Business Research, Vol. 58, Iss. 11, pp. 1583-1589, November 2005.
[45] M. Morrison, “In-store music and aroma influences on shopper behavior and satisfaction”, Journal of Business Research, Vol. 64, Iss.6, pp. 558-564, June 2011.