
National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 陳耀生
Author (English): Yao-Sheng Chen
Title: 小腦模型學習速度之研究
Title (English): A Study on the Learning Speed of CMAC
Advisor: 涂世雄
Advisor (English): Shih-Hsiung Twu
Degree: Master's
Institution: 中原大學 (Chung Yuan Christian University)
Department: Graduate Institute of Electrical Engineering
Discipline: Engineering
Field of Study: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2003
Graduation Academic Year: 91 (ROC calendar)
Language: English
Number of Pages: 68
Keywords (Chinese): 小腦模型、加速法、學習速度
Keywords (English): CMAC, acceleration method, learning speed
Usage counts:
  • Cited by: 3
  • Views: 221
  • Rating:
  • Downloads: 27
  • Bookmarks: 0
In this thesis, we propose a residual correction method to improve the learning speed of the cerebellar model articulation controller (CMAC); we call it the acceleration method. Its learning structure is similar to that of the conventional CMAC, but the output is divided into two parts, one for conventional CMAC learning and one for residual correction learning, and the two parts use different learning rates.
First, we replace the rectangular membership functions of conventional CMAC learning with triangular membership functions. The learning result then fits the target function very closely, which shows that triangular membership functions improve the quality of CMAC learning. However, they also slow the error convergence and make it fluctuate. To remedy this, we apply the acceleration method to the CMAC with triangular membership functions (TCMAC).
The learning error of the accelerated TCMAC converges faster than that of the conventional TCMAC, and the acceleration method remains effective no matter which parameters we vary, including the number of training samples (N) and the generalization parameter (Ne), and even under different target functions.
Compared with the conventional TCMAC, the accelerated TCMAC converges faster and with less fluctuation, and it shortens the learning time needed to reach a desired accuracy.
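The abstract above describes the two-part scheme only in prose. Written out under our own notation and assumptions (this page does not give the update equations), the output and its updates might take the following form, with w the conventional weight table, v the residual-correction table, A(x) the set of Ne hypercubes addressed by input x, y_d the target, and α, β the two learning rates:

\[
\hat{y}(x) = \underbrace{\sum_{k \in A(x)} w_k}_{y_c(x)\ \text{(conventional)}} \; + \; \underbrace{\sum_{k \in A(x)} v_k}_{y_r(x)\ \text{(residual correction)}}
\]
\[
w_k \leftarrow w_k + \frac{\alpha}{N_e}\bigl(y_d(x) - y_c(x)\bigr), \qquad
v_k \leftarrow v_k + \frac{\beta}{N_e}\bigl(y_d(x) - \hat{y}(x)\bigr), \qquad k \in A(x)
\]

Under this reading, the second table is trained on the residual error that the conventional part leaves behind, which is the "residual correction" idea the abstract names.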
In this thesis, we propose the residual correction method to improve the learning speed of CMAC. We call this method the acceleration method. The output of our scheme is divided into two parts: conventional learning and residual correction learning. With this new learning scheme, both the learning results and the convergence rate are clearly improved.
First, the acceleration method for improving the learning speed of CMAC is proposed; it borrows the concept of residual correction from numerical analysis. Then, based on the acceleration method, a new learning structure different from the traditional CMAC learning structure is designed; the new structure requires two outputs to obtain fine learning results. Next, the influence of several important CMAC parameters, including the number of training samples, the memory size, the learning rate, and the membership function, is discussed. Finally, with the proposed method, the corresponding learning structure, and these parameter variations, simulation results of illustrative examples are given to demonstrate the excellent performance of the proposed method.
It is believed that the research in this thesis will be helpful for applications of CMAC.
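To make the structure concrete, below is a minimal, self-contained Python sketch of a one-dimensional CMAC with triangular membership functions plus a residual-correction table. Only the overall shape (two weight tables, two learning rates, triangular activations) comes from the abstracts; every concrete detail, including the class and method names, the cell layout, the normalization, and the value beta=0.3, is a hypothetical illustration, not the thesis's implementation.

import numpy as np

class AcceleratedTCMAC:
    """Toy 1-D CMAC with triangular membership functions and a
    residual-correction table; a sketch of the abstract's scheme,
    with all implementation details assumed."""

    def __init__(self, n_cells=100, ne=8, alpha=0.7, beta=0.3):
        self.n_cells = n_cells        # memory size (hypothetical value)
        self.ne = ne                  # generalization parameter Ne
        self.alpha = alpha            # learning rate, conventional part
        self.beta = beta              # learning rate, residual part (assumed)
        self.w = np.zeros(n_cells)    # conventional CMAC weight table
        self.v = np.zeros(n_cells)    # residual-correction weight table

    def _active(self, x):
        # The ne cells addressed by x in [0, 1], with triangular activations.
        base = int(x * (self.n_cells - self.ne))
        idx = np.arange(base, base + self.ne)
        centers = (idx + 0.5) / self.n_cells
        mu = np.maximum(0.0, 1.0 - np.abs(x - centers) * self.n_cells / self.ne)
        return idx, mu / mu.sum()     # normalized activations

    def predict(self, x):
        idx, mu = self._active(x)
        return (self.w[idx] + self.v[idx]) @ mu

    def train_step(self, x, target):
        idx, mu = self._active(x)
        y_c = self.w[idx] @ mu                           # conventional output
        y = y_c + self.v[idx] @ mu                       # total output
        self.w[idx] += self.alpha * (target - y_c) * mu  # learn the target
        self.v[idx] += self.beta * (target - y) * mu     # learn the residual
        return target - y

# Example: learn y = sin(2*pi*x) from N = 80 random samples.
rng = np.random.default_rng(0)
net = AcceleratedTCMAC()
xs = rng.uniform(0.0, 1.0, size=80)
for epoch in range(200):
    errs = [net.train_step(x, np.sin(2 * np.pi * x)) for x in xs]
print("final RMS error:", float(np.sqrt(np.mean(np.square(errs)))))

The values alpha=0.7, Ne=8, and N=80 mirror settings that appear in the figure list below; beta and everything else are our own choices for the sketch.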
Contents

English Abstract……………………………………………………..…I
List of Figures…………………………………………………………II
List of Tables…………………………………………………………..V


Chapter 1 Introduction
1.1 CMAC……………………………………………………………..1
1.2 Study of CMAC………………………………………………2
1.3 Organization of This Thesis……………………………………… 4

Chapter 2 Cerebellar Model Articulation Controller (CMAC)
2.1 CMAC theory……………………………………………………5
2.2 Structure of CMAC………………………………………………..7
2.3 Mathematics for CMAC…………………………………………..18
2.4 Learning of CMAC………………………………………………..20
2.5 Learning procedures of CMAC……………………………………21

Chapter 3 CMAC Acceleration Method
3.1 Acceleration method of CMAC……………………………………23
3.2 The learning structure of CMAC acceleration method…………….25
3.3 How learning parameters in CMAC affect the learning result…..30
3.4 The learning result of CMAC acceleration method………………..50

Chapter 4 Conclusion and Future Research…………………………64

References …………………………………………………

List of Figures

Figure 2.1 The structure of CMAC……………………………………7
Figure 2.2 One-dimensional CMAC structure with Ne=3…………….9
Figure 2.3 One-dimensional CMAC. …………………………………12
Figure 2.4 Two-dimensional CMAC. …………………………………15
Figure 2.5 Hypercubes of input state (5,5) mapping
to practical memory………………………………………..19
Figure 2.6 The CMAC learning procedure…………………………….22
Figure 3.1 Learning structure of acceleration method
of CMAC model……………………………………………26
Figure 3.2 The acceleration method of CMAC learning procedure……29
Figure 3.3 (a) The learning result of N=40 with Ne=4, α=0.7……….. 32
Figure 3.3 (b) The convergent rate of N=40 with Ne=4, α=0.7…… 32
Figure 3.4 (a) The learning result of N=60 with Ne=4, α=0.7………...33
Figure 3.4 (b) The convergent rate of N=60 with Ne=4, α=0.7….33
Figure 3.5 (a) The learning result of N=80 with Ne=4, α=0.7………...34
Figure 3.5 (b) The convergent rate of N=80 with Ne=4, α=0.7…….34
Figure 3.6 (a) Comparison of convergent rates for N= 80, 60, 40 with Ne=4, α=0.7……………………………………………35
Figure 3.6 (b) Partial magnification of Figure 3.6 (a). …………………35
Figure 3.7 (a) The learning result of Ne=2 with N=80, α=0.7………...38
Figure 3.7 (b) The convergent rate of Ne=2 with N=80, α=0.7…….38
Figure 3.8 (a) The learning result of Ne=4 with N=80, α=0.7………...39
Figure 3.8 (b) The convergent rate of Ne=4 with N=80, α=0.7……39
Figure 3.9 (a) The learning result of Ne=6 with N=80, α=0.7……….40
Figure 3.9 (b) The convergent rate of Ne=6 with N=80, α=0.7……40
Figure 3.10 (a) The learning result of Ne=8 with N=80, α=0.7………41
Figure 3.10 (b) The convergent rate of Ne=8 with N=80, α=0.7…..41
Figure 3.11 (a) Comparison of convergent rates for Ne= 2, 4, 6 , 8 with N=80, α=0.7…………………………………………42
Figure 3.11(b) Partial magnification of Figure 3.11 (a)……………….42
Figure 3.12 (a) Comparison of convergent rates for
α=0.2, 0.4, 0.6, 0.8, with N=40, Ne=6………………..43
Figure 3.12 (b) Partial magnification of Figure 3.12 (a). …………….44
Figure 3.13 (a) The learning result of learning rate α>2……………45
Figure 3.13 (b) The dispersed result of learning rate α>2…………..45
Figure 3.14 The learning result by using the rectangle membership function with N=80, Ne=6 and α=0.7…………………….47
Figure 3.15 (a) The learning result by using the isosceles triangle membership function with N=80, Ne=6 and α=0.65..47
Figure 3.15 (b) Partial magnification of Figure 3.15 (a)………………48
Figure 3.16 (a) The comparison of convergent rates of TCMAC and CMAC………………………………………………..49
Figure 3.16 (b) Partial magnification of Figure 3.16 (a)………………49
Figure 3.17 (a) Comparison of accelerated and conventional TCMAC with N=40, Ne=8………………………………………51
Figure 3.17 (b) Partial magnification of Figure 3.17 (a)………………51
Figure 3.18 (a) Comparison of accelerated and conventional TCMAC with N=80, Ne=8……………………………………….52
Figure 3.18 (b) Partial magnification of Figure 3.18 (a). ……………..53
Figure 3.19 (a) Comparison of accelerated and conventional TCMAC with N=120, Ne=8……………………………………..54
Figure 3.19 (b) Partial magnification of Figure 3.19 (a)………………54
Figure 3.20 The promoted efficiency by varying N from 40 to 120…..57
Figure 3.21 (a) Comparison of accelerated and conventional TCMAC with N=80, Ne=4……………………………………....58
Figure 3.21 (b) Partial magnification of Figure 3.21(a) ………………58
Figure 3.22 (a) Comparison of accelerated and conventional TCMAC with N=80, Ne=6………………………………………59
Figure 3.22 (b) Partial magnification of Figure 3.22 (a)………………59
Figure 3.23 (a) Comparison of accelerated and conventional TCMAC with N=80, Ne=8………………………………………60
Figure 3.23 (b) Partial magnification of Figure 3.23 (a)………………60
Figure 3.24 The learning result of TCMAC
with y(x) = e^(2x), N=80, Ne=8……………………………62
Figure 3.25 The learning result of the acceleration method
with y(x) = e^(2x), N=80, Ne=8……………………………62
Figure 3.26 (a) The comparison of convergent rates of
the acceleration method and conventional TCMAC……….63
Figure 3.26 (b) Partial magnification of Figure 3.26 (a) …………….64

List of Tables

Table 2.1 Association vectors of one-dimensional CMAC…………13
Table 2.2 Hypercube indices of each layer…………………………….16
Table 3.1 The relationship between the number of training samples
and the promoted efficiency………………………………..56
Table 3.2 The relationship between the generalization parameter and the promoted efficiency………………………………………….61