Author: 蔡旻剛
Author (English): Min-Kang Tsai
Title: 基於非均勻尺度-頻率圖之環境聲音辨識
Title (English): Non-uniform Scale-Frequency Map for Environmental Sound Recognition
Advisor: 王家慶
Advisor (English): Jia-Ching Wang
Degree: Master's
Institution: 國立中央大學 (National Central University)
Department: 資訊工程研究所 (Computer Science and Information Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2011
Graduation academic year: 99 (ROC calendar)
Language: English
Number of pages: 65
Keywords (Chinese): 匹配追蹤、非均勻尺度-頻率圖、環境聲音辨識、加伯函數、參數擷取
Keywords (English): Gabor function, non-uniform scale-frequency map, matching pursuit, feature extraction, environmental sound classification
Usage statistics:
  • Cited by: 0
  • Views: 175
  • Downloads: 0
  • Bookmarked: 0
This thesis proposes a novel feature extraction technique for environmental sound recognition, called the non-uniform scale-frequency map. For each frame, the matching pursuit algorithm is used to select important atoms from a Gabor dictionary. Ignoring phase and position information, the scale and frequency of the selected atoms are used to construct a scale-frequency map. Principal component analysis and linear discriminant analysis are then applied to the scale-frequency map to produce the final feature vector.
For environmental sound recognition, we employ a segment-level multiclass support vector machine (SVM). In the experiments, we use a 17-class sound database; the results show that the proposed method achieves an accuracy of 86.47%, and a comparison with other time-frequency features demonstrates the clear superiority of the proposed feature.
In addition, we propose a novel feature extraction technique for speech emotion recognition, called the SFM descriptor. For each frame, we likewise use the matching pursuit algorithm to select atoms and construct a scale-frequency map, and then extract descriptor features from each map. The proposed SFM descriptor is combined with the non-uniform SFM and MFCC features and fed into the classifier. For speech emotion recognition, we employ an utterance-level multiclass SVM. In the experiments, we use a 7-class emotional speech database, and the recognition rate reaches 73.96%.
In this study, we present a novel feature extraction technique called the non-uniform scale-frequency map for environmental sound recognition. For each audio frame, we use the matching pursuit algorithm to select important atoms from a Gabor dictionary. Ignoring phase and position information, we extract the scale and frequency of the selected atoms to construct a scale-frequency map. Principal component analysis (PCA) and linear discriminant analysis (LDA) are then applied to the scale-frequency map, generating a 16-dimensional feature vector. In the recognition phase, a segment-level multiclass support vector machine (SVM) is used. Experiments are carried out on a 17-class sound database, and the results show that the proposed approach achieves an 86.47% accuracy rate. A performance comparison with other time-frequency features demonstrates the superiority of the proposed feature. In addition, we present a novel feature extraction technique called the SFM descriptor for emotional speech. For each frame, we again use the matching pursuit algorithm to select atoms and construct a scale-frequency map, and then extract descriptor features from each map. The proposed SFM descriptor is combined with the non-uniform SFM and MFCC features and fed into a multiclass SVM. In the recognition phase, a file-level multiclass SVM is used. Experiments are carried out on a 7-class emotional speech database, and the recognition rate reaches 73.96%.
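To make the pipeline described in the abstract concrete, the following is a minimal sketch, not the thesis implementation: per-frame matching pursuit over a Gabor dictionary, a scale-frequency map accumulated from the selected atoms' scales and frequency bands, PCA followed by LDA, and a multiclass SVM. The frame length, scale set, non-uniform band edges, number of pursuit iterations, and the use of scikit-learn are all illustrative assumptions, and the sketch classifies individual frames rather than segments as the thesis does.

```python
# Minimal sketch of the non-uniform scale-frequency map pipeline (assumed parameters).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC


def gabor_dictionary(frame_len=256, scales=(2, 4, 8, 16, 32, 64, 128), n_freqs=32):
    """Real Gabor atoms g(t) = exp(-pi*((t-u)/s)^2) * cos(2*pi*f*t), unit norm."""
    t = np.arange(frame_len)
    atoms, params = [], []
    for s in scales:
        for k in range(n_freqs):
            f = k / (2.0 * n_freqs)            # normalized frequency in [0, 0.5)
            g = np.exp(-np.pi * ((t - frame_len / 2) / s) ** 2) * np.cos(2 * np.pi * f * t)
            atoms.append(g / np.linalg.norm(g))
            params.append((s, f))
    return np.array(atoms), params             # (n_atoms, frame_len), [(scale, freq)]


def matching_pursuit(frame, atoms, n_iter=10):
    """Greedy matching pursuit: return the indices of the selected atoms."""
    residual = frame.astype(float).copy()
    chosen = []
    for _ in range(n_iter):
        corr = atoms @ residual                 # inner products with every atom
        idx = int(np.argmax(np.abs(corr)))
        chosen.append(idx)
        residual -= corr[idx] * atoms[idx]      # remove the selected atom's contribution
    return chosen


def scale_frequency_map(frame, atoms, params, scales, band_edges):
    """Accumulate the selected atoms into a (n_scales x n_bands) map; phase and
    position are ignored, as in the abstract.  band_edges define the non-uniform
    frequency bands (here narrower at low frequencies, an assumed layout)."""
    sf_map = np.zeros((len(scales), len(band_edges) - 1))
    for idx in matching_pursuit(frame, atoms):
        s, f = params[idx]
        sf_map[scales.index(s), np.searchsorted(band_edges, f, side="right") - 1] += 1.0
    return sf_map.ravel()


# ---- illustrative end-to-end usage on random stand-in data ----
scales = [2, 4, 8, 16, 32, 64, 128]
band_edges = [0.0, 0.02, 0.05, 0.1, 0.2, 0.35, 0.51]   # assumed non-uniform bands
atoms, params = gabor_dictionary(scales=tuple(scales))

rng = np.random.default_rng(0)
frames = rng.standard_normal((300, 256))       # stand-in for framed audio
labels = np.arange(300) % 17                   # stand-in labels for 17 classes

X = np.array([scale_frequency_map(fr, atoms, params, scales, band_edges) for fr in frames])

# PCA followed by LDA, reduced to a 16-dimensional feature as stated in the abstract.
X_pca = PCA(n_components=32).fit_transform(X)
X_lda = LinearDiscriminantAnalysis(n_components=16).fit_transform(X_pca, labels)

# Multiclass SVM (the thesis classifies at the segment level; frames are used here).
clf = SVC(kernel="rbf").fit(X_lda, labels)
print("training accuracy:", clf.score(X_lda, labels))
```

In this sketch the bands are narrower at low frequencies, which is what makes the map non-uniform rather than uniformly binned; the thesis's actual band layout, dictionary grid, and segment-level aggregation may differ.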
Chapter 1 Preface - 1 -
1-1 Introduction - 1 -
1-2 Motivation - 3 -
1-3 Method Construct - 4 -
Chapter 2 Literature Review - 6 -
2-1 Audio Feature Extraction Approach - 6 -
2-2 Time Domain Feature - 7 -
2-3 Frequency Domain Feature - 9 -
2-4 Mel-Frequency Cepstral Coefficients - 10 -
2-5 Time-Frequency Feature - 14 -
Chapter 3 Support Vector Machine - 16 -
3-1 Separable Case - 16 -
3-2 Non-Separable Case - 19 -
3-3 Non-Linear Case - 21 -
3-4 Multiple Classification - 22 -
Chapter 4 Proposed Method - 25 -
4-1 Dimension Reduction Method - 25 -
4-2 Matching Pursuit Algorithm - 27 -
4-3 Gabor Dictionary - 29 -
4-4 Non-uniform Scale Frequency Map - 31 -
Chapter 5 SFM Descriptor for Emotional Speech - 37 -
5-1 Emotional feature - 37 -
5-2 SFM Descriptor - 39 -
Chapter 6 Experiment Results - 42 -
6-1 Environmental Sound Database and Emotional Sound Database - 42 -
6-2 Comparison of Uniform Band and Non-uniform Band - 42 -
6-3 The Purpose of Applying PCA and LDA - 46 -
6-4 The Recognition Result of Non-uniform SFM at Different SNR Levels - 50 -
6-5 Emotional Speech Recognition - 51 -
Chapter 7 Conclusion - 54 -
References - 55 -