Author: 陳雨棠 (Yu-tang Chen)
Thesis title: 透過語意特徵與自動產生瀏覽紀錄學習之音樂存取 (Music Retrieval by Learning from Automated Logs with Semantic Features)
Advisor: 洪宗貝 (Tzung-Pei Hong)
Degree: Master's
Institution: National Sun Yat-sen University
Department: Department of Computer Science and Engineering
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Document type: Academic thesis
Year of publication: 2017
Graduating academic year: 105
Language: English
Number of pages: 70
Chinese keywords: 聲音特徵, 音樂內涵, 語意特徵, 音樂存取, 瀏覽路徑
English keywords: acoustic features, music content, semantic features, music retrieval, navigation paths
Metrics:
  • Cited by: 0
  • Views: 89
  • Downloads: 7
  • Bookmarked: 0
Abstract (translated from Chinese): With the rapid advance of technology and society's shift into the digital era, music has become one of the indispensable forms of multimedia in daily life, and music retrieval has therefore received growing attention in recent years. However, because of the semantic gap, accurately retrieving the music a user wants is a difficult challenge. In this thesis, we propose an effective method to address this problem. For effectiveness, instead of traditional low-level features, we use semantic features to improve the accuracy of music retrieval. For efficiency, we propose a navigation-path-based music retrieval system to speed up music access. The main techniques in this thesis are therefore: (1) a technique for generating semantic music features, and (2) a self-feedback learning technique based on those semantic features. We first build the model through offline processing: acoustic features are extracted from the music database, these features are transformed into semantic features with a support vector machine, and, through the proposed automated learning mechanism, indices for near-optimal navigation paths are obtained after multiple rounds of automated feedback. For online retrieval, a music query is converted into semantic features by the same steps as in offline processing, and a depth search along the navigation paths then returns the most relevant music to the user. The proposed method is compared with eleven related methods; the experimental results show that it is not only more accurate and effective than they are, but also faster.
Along with the rapid development of new technology in the modern digital era, music has become an indispensable medium in our lives, and much attention has been paid to music retrieval. However, it is not easy to achieve high-performance music retrieval because of the semantic gap. This thesis presents an effective and efficient method to partially solve this problem. In terms of effectiveness, semantic features are designed to increase the precision of retrieval. In terms of efficiency, a novel method called Music Retrieval by Automated Navigation Paths with Semantic Features is proposed to improve retrieval speed. The major techniques proposed in this thesis are as follows: (1) the generation of semantic features; and (2) an automated learning technique based on the proposed semantic features. Offline preprocessing is first conducted to build the model. In this process, audio features are extracted from the music data and then transformed into semantic features using an SVM classifier. Next, through the proposed learning mechanism, efficient indices for approximately optimal navigation paths are obtained from multiple rounds of automated feedback. For online retrieval, the semantic features of a query are extracted in the same way as in the offline steps; the navigation paths are then used in a depth-first search to find the most relevant pieces of music for the user. The proposed approach is also compared with eleven previous approaches, and the experimental results reveal that it achieves both higher retrieval quality and faster speed than the others.
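The offline/online pipeline the abstract describes can be sketched roughly as follows. Everything here is illustrative: the data is random, scikit-learn's `SVC` stands in for the LIBSVM classifier cited in [33], and the neighbour-graph "navigation paths" are built from simple semantic distances rather than from the thesis's automated feedback logs.

```python
# Hypothetical sketch of the two-stage pipeline from the abstract.
# Data, labels, and the navigation graph are all toy assumptions.
import numpy as np
from sklearn.svm import SVC

# --- Offline stage: map low-level acoustic features to semantic features ---
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 8))           # 8 acoustic features per clip
y_train = (X_train[:, 0] > 0).astype(int)    # toy "genre" label
clf = SVC(probability=True).fit(X_train, y_train)

def semantic_features(x):
    """Class-probability vector used as the semantic representation."""
    return clf.predict_proba(x.reshape(1, -1))[0]

# Navigation-path index: each song links to its two most similar neighbours,
# standing in for paths learned from automated relevance-feedback logs.
library = rng.normal(size=(10, 8))
sem = np.array([semantic_features(x) for x in library])
nav = {i: list(np.argsort(np.linalg.norm(sem - sem[i], axis=1))[1:3])
       for i in range(len(library))}

# --- Online stage: depth-first search along the navigation paths ---
def retrieve(query, start, k=3):
    q = semantic_features(query)
    seen, stack, found = set(), [start], []
    while stack and len(found) < k:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        found.append((node, float(np.linalg.norm(sem[node] - q))))
        stack.extend(nav[node])              # follow the learned paths
    return sorted(found, key=lambda t: t[1])

results = retrieve(rng.normal(size=8), start=0)
print(results)
```

In the thesis the navigation paths come from multiple rounds of automated feedback, so the graph would encode relevance relations rather than raw distances; the control flow of the online stage, however, follows this DFS shape.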
Contents
論文審定書 i
誌謝 ii
摘要 iii
Abstract iv
Contents v
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation 5
1.3 Contribution 6
1.4 Preview of Our Proposed Method 8
1.5 Thesis Organization 10
Chapter 2 Related Works 11
2.1 Relevance Feedback 11
2.1.1 Query Re-Weighting 12
2.1.2 Query Point Movement 13
2.1.3 Query Expansion 14
2.2 Music Retrieval 15
Chapter 3 The Proposed Music Retrieval 20
3.1 Overview of the Proposed Music Retrieval 21
3.2 Offline Preprocessing Stage 24
3.2.1 Transformation of High-Level Semantic Features 24
3.2.2 Establishment of Navigation-Path-Based Learning Module 28
3.3 Online Retrieval Stage 34
Chapter 4 Experiments 38
4.1 Experimental Environment 38
4.2 Experimental Settings 40
4.2.1 Parameter Settings 40
4.2.2 Evaluations of Our Proposed Methods 45
4.3 Experimental Results 46
4.3.1 Experimental Methods 46
4.3.2 Evaluations of Compared Methods 48
4.3.3 Evaluations for Different Numbers of Top Returned 50
4.3.4 Precisions of Genres 52
4.4 Experimental Discussions 54
Chapter 5 Conclusions and Future Works 56
5.1 Conclusions 56
5.2 Future Works 57
References 59
References
[1] D. Byrd, "Problems of Music Information Retrieval in the Real World", Information Processing and Management: an International Journal, Vol. 38, No. 2, pp. 249-272, 2002.
[2] P. Cano, E. Batlle, T. Kalker, and J. Haitsma, "A Review of Audio Fingerprinting", Journal of VLSI Signal Processing, Vol. 41, No. 3, pp. 271–284, 2005.
[3] M. A. Casey, R. Veltkamp, M. Goto, M. Leman, C. Rhodes, and M. Slaney, "Content-Based Music Information Retrieval: Current Directions and Future Challenges", Proceedings of the IEEE, Vol. 96, No. 4, pp. 668-696, 2008.
[4] M. Casey and M. Slaney, "Song Intersection by Approximate Nearest Neighbor Search", Proceedings of the 7th International Conference on Music Information Retrieval (ISMIR), pp.144-149, 2006, Canada.
[5] J. S. Downie, "Music Information Retrieval", Annual Review of Information Science and Technology, Vol. 37, No. 1, pp. 295-340, 2003.
[6] H. Fujihara and M. Goto, "A Music Information Retrieval System Based on Singing Voice Timbre", Proceedings of the 8th International Conference on Music Information Retrieval (ISMIR), pp. 467-470, 2007, Austria.
[7] A. Ghias, J. Logan, D. Chamberlin, and B. C. Smith, "Query by Humming: Musical Information Retrieval in an Audio Database", Proceedings of the 3rd ACM International Conference on Multimedia, pp. 231-236, 1995, USA.
[8] P. Grosche, M. Müller, and J. Serrà, "Audio Content-Based Music Retrieval", Multimodal Music Processing, Vol. 3, Ch. 9, pp.157-174, 2012.
[9] T. Hayashi, N. Ishii, and M. Yamaguchi, "Fast Music Information Retrieval with Indirect Matching", Proceedings of the 22nd European Signal Processing Conference (EUSIPCO), pp.1567–1571, 2014, Portugal.
[10] T. Hayashi, N. Ishii, M. Ishimori, and K. Abe, "Stability Improvement of Indirect Matching for Music Information Retrieval", Proceedings of the IEEE International Symposium on Multimedia (ISM), pp. 229-232, 2015, USA.
[11] K. Hoashi, H. Ishizaki, K. Matsumoto, and F. Sugaya, "Content-Based Music Retrieval Using Query Integration for Users with Diverse Preferences", Proceedings of the 8th International Conference on Music Information Retrieval (ISMIR), pp. 463-466, 2007, Austria.
[12] K. Hoashi, K. Matsumoto, and N. Inoue, "Personalization of User Profiles for Content-Based Music Retrieval Based on Relevance Feedback", Proceedings of the 11th ACM International Conference on Multimedia, pp. 110-119, 2003, USA.
[13] H. Hoos, K. Renz, and M. Gorg, "GUIDO/MIR - An Experimental Musical Information Retrieval System Based on GUIDO Music Notation", Proceedings of the 2nd Annual International Symposium on Music Information Retrieval (ISMIR), pp. 41-50, 2001, USA.
[14] J. S. R. Jang, H. R. Lee, and J. C. Chen, "Super MBox: An Efficient/Effective Content-Based Music Retrieval System", Proceedings of the 9th ACM International Conference on Multimedia, pp. 636-637, 2001, Canada.
[15] N. Kosugi, Y. Nishihara, S. Kon'ya, M. Yamamuro, and K. Kushima, "Music Retrieval by Humming - Using Similarity Retrieval over High Dimensional Feature Vector Space", Proceedings of the IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, pp. 404-407, 1999.
[16] N. Kosugi, Y. Nishihara, T. Sakata, M. Yamamuro, and K. Kushima, "A Practical Query-by-Humming System for a Large Music Database", Proceedings of the 8th ACM International Conference on Multimedia, pp. 333-342, 2000, USA.
[17] P. Knees, T. Pohle, M. Schedl, and G. Widmer, "Combining Audio-Based Similarity with Web-Based Data to Accelerate Automatic Music Playlist Generation", Proceedings of the 8th ACM SIGMM International Workshop on Multimedia Information Retrieval, pp.147-157, 2006, USA.
[18] M. Levy and M. Sandler, "Signal-Based Music Searching and Browsing", Proceedings of the International Conference on Consumer Electronics, 2007, USA.
[19] R. Miotto and N. Orio, "A Probabilistic Approach to Merge Context and Content Information for Music Retrieval", Proceedings of the 11th International Society for Music Information Retrieval Conference (ISMIR), pp. 15-20, 2010, Netherlands.
[20] M. Panteli, E. Benetos, and S. Dixon, "Learning a Feature Space for Similarity in World Music", Proceedings of the 17th International Society for Music Information Retrieval Conference (ISMIR), pp.538-544, 2016, USA.
[21] K. Porkaew, K. Chakrabarti, and S. Mehrotra, "Query Refinement for Multimedia Similarity Retrieval in MARS", Proceedings of the 7th ACM Multimedia Conference, pp. 235-238, 1999, USA.
[22] J. J. Rocchio, "Relevance Feedback in Information Retrieval", The SMART Retrieval System – Experiment in Automatic Document Processing, pp. 313–323, Prentice-Hall, 1971.
[23] Y. Rui, T. Huang, and S. Mehrotra, "Content-Based Image Retrieval with Relevance Feedback in MARS", Proceedings of the IEEE International Conference on Image Processing, pp. 815-818, 1997, USA.
[24] Y. Rui, T. Huang, M. Ortega, and S. Mehrotra, "Relevance Feedback: A Power Tool for Interactive Content-Based Image Retrieval", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 8, No. 5, pp. 644–655, 1998.
[25] J. H. Su, T. P. Hong, and Y. T. Chen, "Fast Music Retrieval with Advanced Acoustic Features", Proceedings of the IEEE International Conference on Consumer Electronics, pp. 359–360, 2017, Taiwan.
[26] J. H. Su, C. C. Hsu, and J. J. C. Ying, "High-Performance Content-Based Image Retrieval Using DFS Strategy", Proceedings of the 2013 International Conference on Granular Computing, pp. 270-275, 2013, China.
[27] J. H. Su, W. J. Huang, P. S. Yu, and V. S. Tseng, "Efficient Relevance Feedback for Content-Based Image Retrieval by Mining User Navigation Patterns", IEEE Transactions on Knowledge and Data Engineering, Vol. 23, No. 3, pp. 360-372, 2011.
[28] J. H. Su, C. Y. Wang, T. W. Chiu, and J. J. C. Ying, "Semantic Content-Based Music Retrieval Using Audio and Fuzzy-Music-Sense Features", Proceedings of the 2014 International Conference on Granular Computing, pp. 259-264, 2014, Japan.
[29] R. Typke, F. Wiering, and R. Veltkamp, "A Survey of Music Information Retrieval Systems", Proceedings of the 6th International Conference on Music Information Retrieval (ISMIR), pp. 153-160, 2005, UK.
[30] J. C. Wang, H. S. Lee, H. M. Wang, and S. K. Jeng, "Learning the Similarity of Audio Music in Bag-of-Frames Representation from Tagged Music Data", Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR), pp. 85-90, 2011, USA.
[31] Y. H. Yang, Y. C. Lin, H. T. Cheng, and H. Chen, "Mr. Emo: Music Retrieval in the Emotion Plane", Proceedings of the 16th ACM International Conference on Multimedia, pp. 1003-1004, 2008, Canada.
[32] K. Yoshii, M. Goto, K. Komatani, T. Ogata, and H. G. Okuno, "An Efficient Hybrid Music Recommender System Using an Incrementally Trainable Probabilistic Generative Model", IEEE Transactions on Audio, Speech, and Language Processing, Vol. 16, No. 2, pp. 435-447, 2008.
[33] LIBSVM library, http://www.csie.ntu.edu.tw/~cjlin/libsvm