臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: 朱泰峰
Author (English): TAI-FENG CHU
Title: 赫比式關聯記憶體非線性量化之分析與研究
Title (English): Analysis and Research of Using Non-linear Quantization Strategy in Hebbian-type Associative Memories
Advisor: 蔡清欉
Advisor (English): Ching-Tsorng Tsai
Degree: Master's
Institution: 東海大學 (Tunghai University)
Department: 資訊工程與科學系碩士在職專班 (In-service Master's Program, Department of Computer Science and Information Engineering)
Discipline: Engineering
Field: Electrical and Information Engineering
Document type: Academic thesis
Year of publication: 2006
Graduation academic year: 94 (2005-2006)
Language: Chinese
Pages: 49
Keywords (Chinese): 圖騰、神經元、內部鏈結、赫比式關聯記憶體、線性量化、非線性量化、直接收斂能力
Keywords (English): pattern, neuron, interconnection, Hebbian-type Associative Memories, linear quantization, non-linear quantization, probability of direct convergence
To make Hebbian-type Associative Memories broadly applicable, implementing them with VLSI circuits is currently the most common approach. However, as the number of stored patterns grows, the numbers of neurons and interconnections in a traditional Hebbian-type Associative Memory grow sharply with it, which becomes a bottleneck in VLSI implementation. Two main directions address this problem: the first is to develop higher-order Hebbian-type Associative Memories, and the second is to reduce the number of interconnections inside the memory. Although higher-order Hebbian-type Associative Memories can store more patterns, their interconnection counts still grow rapidly and unavoidably, so reducing the number of interconnections is the fundamental solution.
Using a quantization strategy to reduce the number of interconnections is an efficient approach. Chung and Tsai [13-20] analyzed the quantization of interconnection values in Hebbian-type Associative Memories and found that the quantized memories retain good convergence ability; the strategies they used, however, were two-level, three-level, and linear quantization. An important property of Hebbian-type Associative Memories is that their interconnection values follow a Gaussian distribution. Exploiting this property with a non-linear quantization strategy should further improve the performance of Hebbian-type Associative Memories.
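For reference, the interconnection weights of a first-order Hebbian-type associative memory are commonly formed with the standard outer-product rule sketched below, assuming m stored bipolar patterns; the notation is illustrative and not taken from the thesis:

$$ T_{ij} = \sum_{p=1}^{m} x_i^{(p)} x_j^{(p)}, \qquad x_i^{(p)} \in \{-1,+1\}, \quad i \neq j $$

For random patterns, each T_{ij} is a sum of m independent ±1 terms, so by the central limit theorem it is approximately Gaussian with zero mean and variance m when m is large; this is the distribution property that the non-linear quantization strategy is meant to exploit.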
This study first introduces first-order and second-order (quadratic) Hebbian-type Associative Memories, and then presents the probability-of-direct-convergence equations derived for both under linear quantization. The core of the study is the non-linear quantization strategy: the Gaussian probability density function is integrated and the area under it is partitioned, as required, into regions of equal area; the length each region occupies is then obtained, and the upper and lower quantization bounds are determined from the ratio of each region's length to the total length. Substituting these bounds back into the linear-quantization direct-convergence equations yields the probability-of-direct-convergence equations for Hebbian-type Associative Memories under non-linear quantization.
Comparing the experimental results of the linear and non-linear direct-convergence equations shows that, for both first-order and quadratic Hebbian-type Associative Memories, the non-linear quantization strategy converges markedly better than the linear one. Non-linear quantization is therefore more practical than linear quantization for chip implementation of first-order and quadratic Hebbian-type Associative Memories, and the strategy can also be applied to higher-order Hebbian-type Associative Memories.
To make Hebbian-type Associative Memories widely applicable, the most common approach today is to implement them in VLSI. However, as the number of stored patterns increases, the numbers of neurons and interconnections in a traditional Hebbian-type Associative Memory increase rapidly, which becomes a bottleneck in actual VLSI fabrication. There are two directions for solving this problem: one is to develop higher-order Hebbian-type Associative Memories, and the other is to reduce the number of interconnections inside the memory. Although higher-order Hebbian-type Associative Memories can store more patterns, their internal connections still inevitably grow rapidly, so reducing the number of interconnections is the fundamental solution.
Using a quantization strategy to reduce the number of interconnections is quite efficient. Chung and Tsai analyzed the quantization of interconnection values in Hebbian-type Associative Memories and found that the quantized memories retain fairly good convergence ability. The strategies they applied were two-level, three-level, and linear quantization. One important characteristic of Hebbian-type Associative Memories is that their interconnection values follow a Gaussian distribution. Exploiting this characteristic with a non-linear quantization strategy should enhance the performance of Hebbian-type Associative Memories.
In this research, we introduce first-order and quadratic Hebbian-type Associative Memories and derive their probability-of-direct-convergence equations under a linear quantization strategy. The key point of this research is the non-linear quantization strategy: the Gaussian probability density function is integrated and the area under it is divided into equal-area segments; the length that each segment occupies on the x-axis is obtained, and the upper and lower quantization limits are calculated from the proportion of each segment's length to the whole. These values are then substituted back into the original linear-quantization equation of the probability of direct convergence to obtain the corresponding equation for non-linearly quantized Hebbian-type Associative Memories. We can then compare the two strategies by examining the linear and non-linear probability-of-direct-convergence equations.
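As a concrete illustration of the equal-area partition described above, the sketch below splits a Gaussian density into regions of equal probability mass using the inverse CDF and uses the region boundaries as quantization thresholds. It is a minimal sketch, not the thesis's code: the function names, the number of levels, the representative level values, and the use of NumPy/SciPy are all assumptions.

```python
# Minimal sketch of equal-area (equal-probability) non-linear quantization
# of Gaussian-distributed interconnection weights; illustrative only.
import numpy as np
from scipy.stats import norm

def equal_area_thresholds(num_levels, mu=0.0, sigma=1.0):
    """Boundaries that split N(mu, sigma^2) into num_levels regions
    of equal probability mass (returns num_levels - 1 thresholds)."""
    probs = np.arange(1, num_levels) / num_levels      # 1/K, 2/K, ..., (K-1)/K
    return norm.ppf(probs, loc=mu, scale=sigma)        # inverse Gaussian CDF

def quantize(weights, thresholds, levels):
    """Replace each weight by the representative level of its region."""
    idx = np.searchsorted(thresholds, weights)         # region index per weight
    return np.asarray(levels)[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 3.0, size=10_000)        # Gaussian-like Hebbian weights
    t = equal_area_thresholds(5, mu=0.0, sigma=3.0)    # 4 thresholds for 5 levels
    q = quantize(weights, t, levels=[-2, -1, 0, 1, 2])
    print("thresholds:", t)
    print("counts per level:", np.unique(q, return_counts=True)[1])  # roughly equal
```

Because every region carries the same probability mass, the thresholds cluster where the Gaussian-distributed weights concentrate, so the quantizer spends its levels where most weights actually fall; this matches the intuition behind the improved convergence the thesis reports for the non-linear strategy.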
Comparing the experimental results of the linear and non-linear quantization equations of the probability of direct convergence, we can clearly observe that the probability of convergence under the non-linear quantization strategy is far superior to that under the linear strategy. Therefore, for chip fabrication, the non-linear quantization strategy offers more practical merit for Hebbian-type Associative Memories.
Abstract (Chinese) i
Abstract (English) iii
Table of Contents v
List of Figures vi
Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 Research Objectives 2
1.3 Thesis Outline 3
Chapter 2 Non-linear Quantization Strategy for First-Order Hebbian-type Associative Memories 5
2.1 First-Order Hebbian-type Associative Memories 8
2.2 Linear Quantization of First-Order Hebbian-type Associative Memories 9
2.3 Non-linear Quantization Strategy for First-Order Hebbian-type Associative Memories 15
2.4 Analysis of Non-linear Quantization Simulation Results 19
Chapter 3 Non-linear Quantization Strategy for Quadratic Hebbian-type Associative Memories 22
3.1 Quadratic Hebbian-type Associative Memories 23
3.2 Linear Quantization of Quadratic Hebbian-type Associative Memories 24
3.3 Non-linear Quantization Strategy for Quadratic Hebbian-type Associative Memories 31
3.4 Analysis of Non-linear Quantization Simulation Results 33
Chapter 4 Analysis and Discussion 36
4.1 Analysis and Discussion of Quantization Methods for First-Order Hebbian-type Associative Memories 36
4.2 Analysis and Discussion of Quantization Methods for Quadratic Hebbian-type Associative Memories 40
Chapter 5 Conclusions 44
References 46
[1] J.J. Hopfield and D.W. Tank, “Neural computation of decision in optimization problems,” Biol. Cybern., vol. 52, pp.141-151, 1985.

[2] D.W. Tank and J.J. Hopfield, “Simple optimization networks: A/D converter and a linear programming circuit,” IEEE Trans. Circuit Syst., vol. CAS-33, pp.533-541, 1986.

[3] C.T. Tsai, Y.N. Sun, and P.C. Chung, “Minimizing the Energy of Active Contour Model Using a Neural Network,” IEE Proceedings-E, Vol. 140, No.6, pp.297-303, Nov. 1993.

[4] C.T. Tsai and Y.N. Sun, “Endocardial Boundary Detection Using a Neural Network,” Pattern Recognition, Vol. 26, No.7, pp.1057-1068, 1993.

[5] D. Psaltis, C. H. Park, and J. Hong, “Higher Order Associative Memories and Their Optical Implementations,” Neural Networks, vol. 1, pp. 149-163, 1988.

[6] L. Personnaz, I. Guyon and G. Dreyfus, “Higher-Order Neural Networks: Information Storage without Errors,’’ Europhys. Lett., vol. 4, pp.863-867, Oct. 1987.

[7] F. J. Pineda, “Generalization of Back Propagation to Recurrent and Higher Order Neural Networks,” Neural Inform. Processing Syst.: Amer. Inst. Phys, Denver, CO, 1987.

[8] K.A. Boahen and P.O. Pouliquen, “A Heteroassociative Memory Using Current-Mode MOS Analog VLSI Circuits,” IEEE Trans. Circuit & Systems, vol. 36, No.5, pp.747-755, May 1989.

[9] M.K. Habib and H.Akel, “A Digital Neuron-Type Processor and Its VLSI Design,” IEEE Trans. Circuit & System, vol. 36, No.5, pp.739-746, May 1989.

[10] M. Verleysen, B. Sirletti, “A high-Storage Capacity Content-Addressable Memory and Its Learning Algorithm,” IEEE Trans. Circuit & System, vol. 36, No.5, pp.762-766, May 1989.

[11] H. Sompolinsky, “The theory of neural networks: The Hebb rule and beyond,” in Heidelberg Colloquium on Glassy Dynamics, edited by J.L. van Hemmen and I. Morgenstern, Springer-Verlag, June 1986.

[12] D.J. Amit, “Modeling Brain Function: The World of Attractor Neural Networks,” Cambridge University Press, 1989.

[13] P.C. Chung, C.T. Tsai, and Y.N. Sun, “Characteristics of Hebbian-Type Associative Memories with Quantized Interconnections,” IEEE Trans. Circuits & Systems, vol. 41, No.2, pp.168-171, Feb 1994.

[14] C.T. Tsai and P.C. Chung, “Performance Characteristics of Quantized Hebbian Neural Network,” pp. G6-G15, Taiwan, 1993.

[15] C.T. Tsai, P.C. Chung, and Y.N. Sun, “Characteristics of Quantized Hebbian-Type Associative Memories,” NCCS, 1993.

[16] E.L. Chen, P.C. Chung, and C.T. Tsai, “Using a Competitive Hopfield Neural Network for Polygonal Approximation,” 1993 International Symposium on Artificial Neural Networks, pp. G27-G36, Taiwan, 1993.

[17] P.C. Chung and C.T. Tsai, “Quadratic Hebbian-Type Associative Memories Having Interconnection Faults,” Neural Network Conference on Neural Networks, San Francisco, CA, pp.1366-1370, March 1993.

[18] P.C. Chung, C.T. Tsai, and Y.N. Sun, “Linear Quantization of Hebbian-Type Associative Memories in Interconnection Implementation,” IEEE International Conference on Neural Networks, pp.1092-1097, Orlando, 1994.

[19] P.C. Chung, E.L. Chen, and C.T. Tsai, “Pattern Recognition Using a Hierarchical Neural Network,” IEEE International Conference on Neural Networks, pp.3104-3109, Orlando, 1994.

[20] P.C. Chung, C.T. Tsai, E.L. Chen, and Y.N. Sun, “Polygonal Approximation Using a Competitive Hopfield Neural Network,” Pattern Recognition, vol. 27, No. 11, pp.1505-1512, 1994.

[21] A. Heittmann, and U. Ruckert, “Mixed Mode VLSI Implementation of a Neural Associative Memory,” Analog Integrated Circuits and Signal Processing, vol. 30, pp.159–172, 2002.

[22] Y. Peng, Z. Zong, and E. McCleney, “Relaxing Backpropagation Networks as Associative Memories,” Proc. Int’l Conf. Neural Networks, pp.1777-1782, Perth, Australia, 1995.

[23] R.J. McEliece and E.C. Posner, “The capacity of Hopfield associative memory,” IEEE Trans. Information Theory, pp.461-482, July 1987.

[24] A. Kuh and B. W. Dickinson, “Information capacity of associative memories,” IEEE Trans. Info. Theory, vol. 35, pp.59-68, January 1988.

[25] C.M. Newman, “Memory capacity in neural network models: rigorous lower bounds,” Neural Networks, vol. 1, pp.223-238, 1988.

[26] Li Ping and W.K. Leung, “Decoding low density parity check codes with finite quantization bits,” IEEE Communications Letters, Vol.4, Issue 2, pp.62-64, 2000.

[27] C. L. Giles and T. Maxwell, “Learning, Invariance, and Generalization in High-Order Neural Networks,” Appl. Opt., vol. 26, pp.4972-4978, Dec. 1987.

[28] H. H. Chen, Y. C. Lee, G. Z. Sun, and H. Y. Lee, “High Order Correlation Model for Associative Memory,” Amer. Inst. Phys., pp. 86-92, 1986.

[29] P.C. Chung , “Reliability issue and quantization effects in optical and electronic network implementation of Hebbian-Type Associative Memories,” Neural Network System Techniques and Applications, Algorithms and Architectures, 1997.

[30] C.T. Tsai, C.H. Ko, Q.Y. Zhan, and P.C. Chung, “Linear Quantization of Quadratic Hebbian-Type Associative Memories,” Shu-eh Journal, vol. 20, No.2, 1997.

[31] P.C. Chung and Y.N. Sun, “Hebbian-Type Associative Memories in hardware implementations,” submitted to IEEE Trans. on Neural Networks, 2000.

[32] J. H. Wang, “Principal Interconnection in Higher Order Hebbian-Type Associative Memories,” IEEE Trans. on Knowledge and Data Engineering, Vol. 10, No.2, pp.342-345, 1998.

[33] N. Davey, R. Frank, S. Hunt, R. Adams, and L. Calcraft, “High Capacity Associative Memory Models – Binary and Bipolar Representation,” Proceedings of the Eighth IASTED International Conference on Artificial Intelligence and Soft Computing, pp.392-397, Marbella, Spain, 2004.