Author: 周維德
Author (English): Wei-Te Chou
Title (Chinese): 自發性比例式記憶體細胞非線性網路之設計
Title (English): The Design of the Autonomous Ratio Memory Cellular Nonlinear Network for Pattern Learning and Recognition
Advisor: 吳重雨
Advisor (English): C. Y. Wu
Degree: Master
Institution: National Chiao Tung University
Department: Department of Electronics Engineering
Discipline: Engineering
Academic Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2007
Graduation Academic Year: 96 (ROC calendar)
Language: English
Number of Pages: 61
Keywords (Chinese): 比例式記憶, 細胞非線性網路, 自發性
Keywords (English): Ratio-Memory, CNN, Autonomous
Abstract (Chinese):
In the field of pattern recognition, associative memory is a widely used approach: it can restore a noise-corrupted pattern to a clean, noise-free one, and the ratio-memory cellular nonlinear network has been shown to serve as one implementation of associative memory. The main challenge currently facing the ratio-memory cellular nonlinear network is how to raise its recognition rate in high-noise environments.
This thesis presents the analysis and design of the Autonomous Ratio-Memory Cellular Nonlinear Network (ARMCNN) architecture and its applications to associative memory and pattern recognition. "Autonomous" means that, during the recognition phase, the noisy input signal is stored in each cell as an initial voltage rather than applied as a fixed input voltage. In addition, the design obtains the required ratio weights without an elapsed operation. In the pattern-learning phase, the earlier Ratio-Memory Cellular Neural Network (RMCNN) generated ratio weights by comparing each absolute weight with the mean of the four neighboring absolute weights of the cell: weights larger than the mean were retained, and the rest were discarded. In the proposed ARMCNN, only the largest of the four neighboring absolute weights is retained. Simulation results show that the ARMCNN achieves a higher recognition rate than the RMCNN.
In addition to simulating the ARMCNN architecture and its associative-memory and pattern-recognition applications in Matlab and C, a 9x9 ARMCNN was designed in the TSMC 0.35um 2P4M Mixed-Signal process, fabricated, and measured. Under the same process, the chip area is 0.28 times that of the previous design, the RMCNN without elapsed operation, shrinking from 4.56mm x 3.90mm to 2.24mm x 2.24mm.
In measurement, all three learned patterns (一, 二, 四) were recognized successfully. Some imperfections observed during recognition are discussed in the thesis, and a redesigned circuit is verified in Hspice simulation to correct these defects.
Abstract (English):
Associative memory has attracted significant attention in the field of pattern recognition and recovery. The cellular nonlinear network with ratio memory (RMCNN) has been proven to be usable as a kind of associative memory. However, the existing RMCNN still has imperfections that require further improvement. For example, its pattern recognition rate drops quickly as the noise level rises. Moreover, the die area of the existing chip is large (4.56mm x 3.90mm), which makes it more susceptible to process variation. Chip area reduction and optimization are therefore necessary.
A new type of CNN associative memory, called the Autonomous Ratio-Memory Cellular Nonlinear Network (ARMCNN), is proposed and analyzed. As in the previous design, no elapsed operation is needed to perform weight enhancement. During the recognition period, the noisy input patterns are loaded into the cells as initial state voltages; compared with constantly injecting the noisy input patterns, this yields a better recognition rate in simulation.
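To make the autonomous recognition scheme concrete, the behavioural idea can be sketched in C (the thesis uses Matlab and C for behavioural simulation). The sketch below is only an illustrative discrete-time emulation under stated assumptions, not the thesis's circuit model: the 9x9 grid follows the implemented chip, but the Euler step, iteration count, piecewise-linear output function, and the four-neighbour ratio-weight array w are assumptions chosen for illustration.

/* Illustrative sketch only (not the thesis circuit): a discrete-time
 * emulation of the autonomous recognition idea. The noisy pattern is
 * written into the cell states once, instead of being applied as a
 * constant input, and the array then relaxes under its ratio weights. */
#include <math.h>

#define N 9          /* 9x9 cell array, as in the implemented chip */
#define STEPS 200    /* assumed number of relaxation steps         */
#define DT 0.05      /* assumed Euler time step                    */

/* standard CNN piecewise-linear output function */
static double f(double x) { return 0.5 * (fabs(x + 1.0) - fabs(x - 1.0)); }

/* w[i][j][k]: ratio weight from the k-th 4-neighbour (N, E, S, W) of cell
 * (i, j); zero where the neighbour is missing or the weight was pruned.  */
void recognize(double x[N][N], const double w[N][N][4],
               const double noisy[N][N])
{
    static const int di[4] = {-1, 0, 1, 0}, dj[4] = {0, 1, 0, -1};

    /* autonomous initialization: the noisy pattern becomes the initial state */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            x[i][j] = noisy[i][j];

    for (int s = 0; s < STEPS; s++) {
        double nx[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                double fb = 0.0;
                for (int k = 0; k < 4; k++) {
                    int ni = i + di[k], nj = j + dj[k];
                    if (ni >= 0 && ni < N && nj >= 0 && nj < N)
                        fb += w[i][j][k] * f(x[ni][nj]);
                }
                /* no constant input term: the pattern enters only via x(0) */
                nx[i][j] = x[i][j] + DT * (-x[i][j] + fb);
            }
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                x[i][j] = nx[i][j];
    }
}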
During the pattern-learning period, the ratio weights were originally generated by comparing the four neighboring absolute weights of a cell with their mean value; absolute weights larger than the mean were retained. In the ARMCNN, only the strongest absolute weights remain (possibly more than one). Furthermore, the proposed ARMCNN inherits features of the RMCNN such as the feature-enhancement effect and operation without an elapsed operation (EO): the ratio weights are generated directly after the patterns are learned.
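The difference between the two pruning rules can be illustrated with a small C sketch. This is only an interpretation of the description above, not the thesis's weight-generation circuit; the per-cell layout of four neighbouring absolute weights, the tie handling, and the normalization of the survivors into ratio weights are assumptions.

/* Illustrative comparison of the two pruning rules for one cell, given its
 * four neighbouring absolute weights a[0..3]; survivors are normalized into
 * ratio weights r[0..3]. Layout and normalization are assumptions. */
#include <math.h>

/* RMCNN-style rule: keep weights whose magnitude is at least the mean
 * magnitude of the four neighbouring weights. */
void ratio_weights_mean(const double a[4], double r[4])
{
    double mean = 0.0, sum = 0.0;
    for (int k = 0; k < 4; k++) mean += fabs(a[k]) / 4.0;
    for (int k = 0; k < 4; k++) {
        r[k] = (fabs(a[k]) >= mean) ? a[k] : 0.0;
        sum += fabs(r[k]);
    }
    for (int k = 0; k < 4; k++) r[k] = (sum > 0.0) ? r[k] / sum : 0.0;
}

/* ARMCNN-style rule: keep only the strongest weight(s); ties may leave
 * more than one survivor, as noted in the abstract. */
void ratio_weights_max(const double a[4], double r[4])
{
    double maxmag = 0.0, sum = 0.0;
    for (int k = 0; k < 4; k++)
        if (fabs(a[k]) > maxmag) maxmag = fabs(a[k]);
    for (int k = 0; k < 4; k++) {
        r[k] = (maxmag > 0.0 && fabs(a[k]) >= maxmag) ? a[k] : 0.0;
        sum += fabs(r[k]);
    }
    for (int k = 0; k < 4; k++) r[k] = (sum > 0.0) ? r[k] / sum : 0.0;
}

Keeping only the strongest connection makes the surviving ratio weights less sensitive to weak, noise-induced correlations, which is consistent with the higher recognition rate reported in the simulations above.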
In this thesis, the ARMCNN circuit without EO is designed, and a 9x9 ARMCNN is implemented in the TSMC 0.35um 2P4M mixed-signal process. Compared with the previous chip, the RMCNN without elapsed operation, the die area shrinks from 4.56mm x 3.90mm to 2.24mm x 2.24mm, only 0.28 times the size in the same process, which greatly reduces the impact of process variation. The measured chip successfully recognizes all three learned patterns. However, some imperfections in pattern recovery remain and are discussed later in this thesis; the circuit is redesigned to correct them.
CONTENT ..................v
TABLE CAPTIONS ...........v
FIGURE CAPTIONS...........vii
CHAPTER 1.................1
INTRODUCTION..............1
1.1 Background of Cellular Nonlinear Network...........1
1.2 Review of Ratio Memory Cellular Nonlinear Network..2
1.3 Research Motivation and Thesis Organization.......4
CHAPTER 2.......................6
ARCHITECTURE AND CIRCUIT IMPLEMENTATION.....6
2.1 Operational Principle and Architecture ...........8
2.2 Circuit Implementation............................20
2.2.1 V-I Converter ............20
2.2.2 Comparator ...............24
2.2.3 Digital Components .......25
2.2.4 Output Stage and Input Pattern Interface ....32
2.2.5 Cell for Global Maximum Absolute Weight Determination........34
CHAPTER 3..............................36
SIMULATION RESULT......................36
3.1 Behavior Simulation Result ........36
3.2 Hspice Simulation Result ..........37
CHAPTER 4..............................44
EXPERIMENTAL RESULTS ..................44
4.1 Layout Description ................44
4.2 Experimental Environment Setup ....46
4.3 Experimental Result................48
4.4 Cause of the Imperfections in the Experimental Results...51
CHAPTER 5..............................55
CONCLUSION AND FUTURE WORK.............55
5.1 Conclusion .......................55
5.2 Future Works .....................56
REFERENCES .........................57
[1] L. O. Chua and L. Yang, “Cellular neural networks: theory,” IEEE Trans. Circuits Syst., vol. 35, no. 10, pp. 1257-1272, Oct. 1988.
[2] L. O. Chua and L. Yang, “Cellular neural networks: applications,” IEEE Trans. Circuits Syst., vol. 35, no. 10, pp. 1273-1290, Oct. 1988.
[3] D. Liu and A. N. Michel, “Cellular neural networks for associative memories,” IEEE Trans. Circuits Syst. II, vol. 40, no. 2, pp. 119-121, Feb. 1993.
[4] A. Lukianiuk, “Capacity of cellular neural networks as associative memories,” in Proc. IEEE Int. Workshop on Cellular Neural Networks and their Applications, CNNA, June 1996, pp. 37-40.
[5] M. Brucoli, L. Carnimeo, and G. Grassi, “An approach to the design of space-varying cellular neural networks for associative memories,” in Proc. 37th Midwest Symposium on Circuits and Syst., 1994, vol. 1, pp. 549-552.
[6] H. Kawabata, M. Nanba, and Z. Zhang, “On the associative memories in cellular neural networks,” in Proc. IEEE Int. Conference on Systems, Man, and Cybernetics, Computational Cybernetics and Simulation, 1997, vol. 1, pp. 929-933.
[7] P. Szolgay, I. Szatmari, and K. Laszlo, “A fast fixed point learning method to implement associative memory on CNNs,” IEEE Trans. Circuits and Syst. I, vol. 44, no. 4, pp. 362-366, Apr. 1997.
[8] R. Perfetti and G. Costantini, “Multiplierless digital learning algorithm for cellular neural networks,” IEEE Trans. Circuits Syst. I, vol. 48, no. 5, pp. 630-635, May 2001.
[9] A. Paasio, K. Halonen, and V. Porra, “CMOS implementation of associative memory using cellular neural network having adjustable template coefficients,” in Proc. IEEE Int. Symposium on Circuits and Syst., ISCAS, 1994, vol. 6, pp. 487-490.
[10] C.-H. Cheng and C.-Y. Wu, “The design of cellular neural network with ratio memory for pattern learning and recognition,” in CNNA, 2000, pp. 301-307.
[11] C.-Y. Wu and C.-H. Cheng, “A learnable cellular neural network structure with ratio memory for image processing,” IEEE Trans. Circuits Syst. I, vol. 49, pp. 1713-1723, Dec. 2002.
[12] C.-H. Cheng and C.-Y. Wu, “The design of ratio memory cellular neural network (RMCNN) with self-feedback template weight for pattern learning and recognition,” in CNNA, 2002, pp. 609-615.
[13] Y. Wu and C.-Y. Wu, “The design of CMOS non-self-feedback ratio memory for cellular neural network without elapsed operation for pattern learning and recognition,” in CNNA, 2005, pp. 282-285.
[14] J.-L. Lai and C.-Y. Wu, “A learnable self-feedback ratio-memory cellular nonlinear network (SRMCNN) with B templates for associative memory applications,” in ICECS, 2004, pp. 183-186.
[15] C.-Y. Wu and C.-H. Cheng, “Improvement of pattern learning and recognition ability in ratio-memory cellular neural networks with non-discrete-type hebbian learning algorithm,” in ISCAS, 2002, pp. 629-632.
[16] J.-L. Lai and C.-Y. Wu, “Architectural design and analysis of learnable self-feedback ratio-memory cellular nonlinear network (SRMCNN) for nanoelectronic systems,” IEEE Trans. VLSI Syst., vol. 12, pp. 1182-1191, Nov. 2004.
[17] C.-Y. Wu, C.-Y. Hsieh, S.-H. Chen, B. C.-Y. Hsieh, and C.-R. Chen, “Non-saturated binary image learning and recognition using the ratio memory cellular neural network (RMCNN),” in CNNA, 2002, pp. 624-620.
[18] C.-Y. Wu and J.-F. Lan, “CMOS current-mode neural associative memory design with on-chip learning,” IEEE Trans. Neural Networks, vol. 1, pp. 167-181, Jan. 1996.
[19] J.-F. Lan and C.-Y. Wu, “CMOS current-mode outstar neural networks with long period analog ratio memory,” in Proc. IEEE Int. Symp. Circuits and Systems, vol. 3, 1995, pp. 1676-1679.
[20] J.-F. Lan and C.-Y. Wu, “Analog CMOS current-mode implementation of the feedforward neural network with on-chip learning and storage,” in Proc. 1995 IEEE International Conf. on Neural Networks, vol. 1, 1995, pp. 645-650.
[21] C.-Y. Wu and J.-F. Lan, “A new neural associative memory with learning,” in IJCNN, vol. 1, 1992, pp. 487-492.
[22] J.-F. Lan and C.-Y. Wu, “The multi-chip design of analog CMOS expandable modified Hamming neural network with on-chip learning and storage for pattern classification,” in ISCAS, vol. 1, 1997, pp. 565-568.
[23] D. O. Hebb, The Organization of Behavior: A Neuropsychological Theory. NY: Wiley, 1949.
[24] S. Haykin, Neural Networks, A Comprehensive Foundation. Macmillan College Publishing Company, Inc., 1994, pp. 290-291.
[25] L. O. Chua, “Guest Editorial,” IEEE Trans. Circuits Syst. I, vol. 42, pp. 557-558, Oct. 1995.
[26] J.-F. Lan and C.-Y. Wu, “The Designs and Implementations of the Artificial Neural Networks with Ratio Memories and Their Applications,” Chapter 3, June 1996, pp. 64-67.