臺灣博碩士論文加值系統 (Taiwan National Digital Library of Theses and Dissertations)

詳目顯示 (Detailed Record)

我願授權國圖
: 
twitterline
Researcher: 藍世緯 (Shih-Wei Lan)
Thesis title (Chinese): 深度小腦模型控制器應用於自適應回聲消除
Thesis title (English): Adaptive Echo Cancellation Using Deep Cerebellar Model Articulation Controller
Advisor: 李仲溪 (Jungh-Si Lee)
Oral examination committee: 曹昱 (Yu Tsao), 賴穎暉 (Ying-Hui Lai)
Oral defense date: 2017-06-27
Degree: Master's
Institution: 元智大學 (Yuan Ze University)
Department: Department of Electrical Engineering (電機工程學系)
Discipline: Engineering
Field of study: Electrical and Information Engineering
Thesis type: Academic thesis
Publication year: 2017
Graduating academic year: 105 (ROC calendar; 2016-2017)
Language: Chinese
Pages: 53
Keywords (Chinese): 類神經網路 (neural network), 深度學習 (deep learning), 小腦模型控制器 (cerebellar model articulation controller), 語者辨識 (speaker recognition), 回聲消除 (echo cancellation)
Keywords (English): Neural network, Deep learning, Cerebellar Model Articulation Controller, Speech Recognition, Echo Cancellation
Usage statistics:
  • Cited by: 0
  • Views: 393
  • Downloads: 0
  • Saved to personal reading lists: 0
摘要 (Abstract)

Machine learning has seen major breakthroughs in recent years: both Watson, developed by IBM, and AlphaGo, developed by Google, are built on deep neural networks. Within the neural-network field, the cerebellar model articulation controller (CMAC) has been widely applied to problems such as inverted-pendulum control, nonlinear channel equalization, and robot control. Its good generalization and fast learning are sufficient for the basic applications of existing neural networks, but it has difficulty learning more complex nonlinear tasks. Moreover, the CMAC was originally designed for simple control applications, so high-dimensional inputs such as those in speaker recognition cannot be handled directly by an ordinary CMAC, and the model therefore needs to be improved.
This thesis proposes the deep cerebellar model articulation controller (DCMAC) and a multi-input multi-output (MIMO) DCMAC combined with a Softmax function. A DCMAC is formed by stacking CMAC structures, and an improved back-propagation training algorithm is used to learn the new DCMAC parameters. Owing to its deep structure, the DCMAC achieves a lower generalization error than an ordinary CMAC, and the experimental results also show that the DCMAC has stronger modeling capability than the CMAC in signal processing.
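To make the single-layer building block concrete, the following is a minimal, illustrative sketch (not the implementation used in the thesis) of a CMAC-style approximator with Gaussian receptive fields, the kind of basis functions whose initial means and variances are tabulated in Table 4.7-4. The class and parameter names are hypothetical, and the address/quantization machinery of a classical CMAC is omitted.

```python
import numpy as np

class GaussianCMAC:
    """Single-layer CMAC-style approximator with Gaussian receptive fields.

    A simplified, RBF-like stand-in: each "field" holds one Gaussian per input
    dimension, and the output is a weighted sum of the field activations.
    """

    def __init__(self, n_inputs, n_fields, seed=0):
        rng = np.random.default_rng(seed)
        # Initial means/variances would follow Table 4.7-4; values here are arbitrary.
        self.means = rng.uniform(-1.0, 1.0, size=(n_fields, n_inputs))
        self.sigmas = np.full((n_fields, n_inputs), 0.5)
        self.weights = rng.normal(0.0, 0.1, size=n_fields)

    def activations(self, x):
        # Field activation = product of the per-dimension Gaussian responses.
        g = np.exp(-((x - self.means) ** 2) / (2.0 * self.sigmas ** 2))
        return np.prod(g, axis=1)

    def forward(self, x):
        # Output = weighted sum of the activated fields.
        return float(self.activations(x) @ self.weights)

# Example: a scalar output for a 2-dimensional input.
cmac = GaussianCMAC(n_inputs=2, n_fields=16)
y = cmac.forward(np.array([0.3, -0.7]))
```

Stacking several such layers, with the outputs of one layer feeding the inputs of the next and all layers trained jointly by back-propagation, is the general idea behind the DCMAC described above.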
ABSTRACT


Machine learning has seen major breakthroughs in recent years: both Watson, developed by IBM, and Google's AlphaGo are built on deep neural networks. The cerebellar model articulation controller (CMAC) has been widely used in neural-network applications such as inverted-pendulum control, nonlinear channel equalization, and robot control. Its strong generalization and fast learning are sufficient for basic neural-network applications, but it has difficulty learning highly complex nonlinear tasks. Furthermore, the CMAC was originally designed for simple control applications, so high-dimensional inputs such as those in speech recognition cannot be processed by an ordinary CMAC, which therefore needs to be improved.
This thesis proposes the deep cerebellar model articulation controller (DCMAC) for echo cancellation and a MIMO-DCMAC with a Softmax function for speech recognition. We stack conventional single-layer CMAC models into multiple layers to form a DCMAC model and modify the back-propagation algorithm to update the DCMAC parameters. Owing to its deep structure, the DCMAC achieves a lower generalization error than an ordinary CMAC. The experimental results also show that the DCMAC models signals more effectively than the CMAC.
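As context for the AEC experiments of Chapter 4, the sketch below shows a standard acoustic-echo-cancellation loop: an adaptive model driven by the far-end signal estimates the echo picked up by the microphone, and the residual after subtraction is the cancelled output. A plain normalized-LMS FIR filter is used here purely as a stand-in for the CMAC/DCMAC models (and the APA/APSA baselines of Tables 4.7-1 and 4.7-2) that the thesis actually compares; the function name and parameter values are illustrative.

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, taps=64, mu=0.5, eps=1e-6):
    """Sample-by-sample AEC loop using a normalized-LMS FIR filter as the
    echo-path model (a stand-in for the thesis's CMAC/DCMAC models)."""
    w = np.zeros(taps)                    # adaptive estimate of the echo path
    residual = np.zeros(len(mic))         # echo-cancelled (error) signal
    for n in range(taps, len(mic)):
        x = far_end[n - taps:n][::-1]     # most recent far-end samples, newest first
        y_hat = w @ x                     # estimated echo at the microphone
        e = mic[n] - y_hat                # microphone signal minus echo estimate
        w += mu * e * x / (x @ x + eps)   # normalized LMS weight update
        residual[n] = e
    return residual, w

# Example with synthetic data: the "room" is a short decaying random impulse response.
rng = np.random.default_rng(1)
far = rng.standard_normal(8000)
room = rng.standard_normal(64) * np.exp(-np.arange(64) / 10.0)
mic = np.convolve(far, room)[:8000] + 0.01 * rng.standard_normal(8000)
residual, w_hat = nlms_echo_canceller(far, mic)
```

Replacing the linear filter with a nonlinear model such as the CMAC or DCMAC changes only the echo-path estimator and its update rule; the far-end / microphone / residual signal flow mirrors the AEC system architecture outlined in Section 4.3.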
目錄 (Table of Contents)

Title page .......... i
Oral examination committee approval .......... ii
Authorization form .......... iii
Abstract (Chinese) .......... vi
Abstract (English) .......... vii
Acknowledgements .......... viii
Table of contents .......... ix
List of tables .......... xi
List of figures .......... xii
Chapter 1  Introduction .......... 1
1.1  Research background .......... 1
1.2  Thesis organization .......... 5
Chapter 2  Research Overview and Discussion .......... 6
2.1  Composition of the DCMAC .......... 6
2.2  CMAC architecture .......... 7
2.3  CMAC parameter-learning algorithm .......... 11
Chapter 3  The DCMAC in Depth .......... 13
3.1  Introduction .......... 13
3.2  DCMAC architecture and the Softmax function .......... 14
3.3  DCMAC parameter learning and the modified back-propagation algorithm .......... 16
Chapter 4  Experimental Setup and Results .......... 19
4.1  Introduction .......... 19
4.2  Adaptive filter systems .......... 20
4.2.1  Types of adaptive filters .......... 21
4.2.2  Applications of adaptive filters .......... 22
4.3  AEC system architecture .......... 25
4.4  AEC system settings .......... 26
4.5  Speech recognition system flow .......... 30
4.6  Speech recognition system settings .......... 31
4.7  Algorithm review and parameter settings .......... 33
4.8  Experimental results .......... 39
4.9  Experimental conclusions .......... 47
Chapter 5  Conclusion .......... 48
References .......... 49






表目錄 (List of Tables)

Table 4.4-1  Fixed settings of the simulated environment .......... 27
Table 4.4-2  Varied settings of the simulated environment .......... 27
Table 4.7-1  Affine projection algorithm (APA) .......... 35
Table 4.7-2  Affine projection sign algorithm (APSA) .......... 37
Table 4.7-3  Initial value settings of the algorithms .......... 37
Table 4.7-4  Initial means and variances of the Gaussian functions in the CMAC and DCMAC .......... 38
Table 4.8-1  Convergence results of the AEC experiments .......... 43
Table 4.8-2  Speaker-recognition convergence values and accuracy statistics .......... 45

圖目錄 (List of Figures)

Figure 2.1-1  DCMAC architecture .......... 6
Figure 2.2-1  CMAC architecture .......... 10
Figure 2.2-2  Schematic of a two-dimensional CMAC .......... 10
Figure 4.2-1  An application architecture of adaptive filtering .......... 20
Figure 4.2-2  Linear adaptive filter .......... 21
Figure 4.2-3  Noise-cancellation system architecture .......... 23
Figure 4.2-4  Signal-prediction system architecture .......... 24
Figure 4.2-5  Architecture for recovering an unknown system .......... 24
Figure 4.3-1  AEC system architecture .......... 25
Figure 4.4-1  Noise source signal at 40 dB SNR .......... 27
Figure 4.4-2  Room impulse response, Set (A) .......... 28
Figure 4.4-3  Room impulse response, Set (B) .......... 28
Figure 4.4-4  Room impulse response, Set (C) .......... 29
Figure 4.4-5  Room impulse response, Set (D) .......... 29
Figure 4.5-1  Speaker-recognition flowchart .......... 30
Figure 4.6-1  MIMO-DCMAC combined with the Softmax function .......... 31
Figure 4.8-1  Convergence curves for room response Set (A) .......... 40
Figure 4.8-2  Convergence curves for room response Set (B) .......... 41
Figure 4.8-3  Convergence curves for room response Set (C) .......... 41
Figure 4.8-4  Convergence curves for room response Set (D) .......... 42
Figure 4.8-5  Overlay of the CMAC-recovered signal and the signal of interest after 200 iterations .......... 43
Figure 4.8-6  Overlay of the DCMAC-recovered signal and the signal of interest after 200 iterations .......... 43
Figure 4.8-7  Convergence curve of CMAC_Softmax .......... 45
Figure 4.8-8  Convergence curve of DCMAC(2)_Softmax .......... 45
Figure 4.8-9  Convergence curve of DCMAC(3)_Softmax .......... 46
Figure 4.8-10  Convergence curve of DCMAC(4)_Softmax .......... 46
Electronic full text: access is restricted to the campus network and IP range of the researcher's home institution; a link to the graduating school's own thesis page is provided.