National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Detailed Record

Student: Gang-Yaw Kuo (郭功耀)
Thesis title: VLSI Design of Back Propagation Networks with On-Chip Learning (倒傳遞類神經網路之VLSI設計)
Advisors: Jyh-Yeong Chang (張志永), Lan-Rong Dung (董蘭榮)
Degree: Master
Institution: National Chiao Tung University (國立交通大學)
Department: Department of Electrical and Control Engineering
Discipline: Engineering
Academic field: Electrical and Information Engineering
Thesis type: Academic thesis
Year of publication: 2002
Graduation academic year: 90 (2001-2002)
Language: English
Number of pages: 78
Keywords (Chinese): 類神經網路 (neural networks), 超大型積體電路 (VLSI)
Keywords (English): Neural Networks, VLSI
Statistics:
  • Cited by: 1
  • Views: 867
  • Downloads: 262
  • Bookmarked: 2
To endow a neural network with a specific capability, it must be trained repeatedly until every input is mapped correctly to the desired output. When the training data set is large, the learning process often takes a very long time, so many methods of speeding up learning have been widely discussed and studied. If we implement the neural network in VLSI and enable it to communicate and cooperate with a host computer system, the time consumed by learning can be shortened effectively.
In this thesis, we implement a back-propagation network in VLSI. The chip integrates both the learning part and the recalling part, and the numbers of neurons in the input, hidden, and output layers can be adjusted as required. The whole architecture follows the SIMD (single instruction stream, multiple data streams) paradigm: a limited number of processing elements perform the required operations, and the control unit can schedule these elements freely instead of restricting each to one specific task. Finally, we verify our design with the iris classification problem and dot-matrix English letter recognition.
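The SIMD scheduling idea above (a fixed pool of processing elements, time-multiplexed by the control unit over all neurons of a layer) can be sketched in software. This is only an illustrative model under assumptions of my own, not the thesis's hardware design; the names used (`layer_weighted_sums`, `num_pes`) are invented here.

```python
# Illustrative sketch only: a fixed pool of `num_pes` processing elements
# is time-multiplexed over all neurons of a layer, mimicking the SIMD
# scheduling idea from the abstract. Names are invented for this sketch.

def layer_weighted_sums(inputs, weights, num_pes):
    """Compute one layer's weighted sums, at most `num_pes` neurons per step
    (a software stand-in for the chip's parallel multiply-accumulate PEs)."""
    sums = []
    # The "scheduler" hands neurons to the PE pool in chunks of num_pes.
    for start in range(0, len(weights), num_pes):
        for w_row in weights[start:start + num_pes]:
            # Each PE computes one neuron's dot product; written sequentially
            # here, but in hardware a chunk's members would run in parallel.
            sums.append(sum(x * w for x, w in zip(inputs, w_row)))
    return sums

# 3 neurons with 2 inputs, served by a pool of 2 PEs (2 neurons, then 1).
print(layer_weighted_sums([1.0, 0.5], [[0.2, 0.4], [0.1, -0.3], [0.5, 0.5]], 2))
```

Because the PE pool is sized independently of the network, the same pool serves any layer width, which is the reconfigurability the abstract claims for the chip.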

Nowadays, the industry of information appliances and communication products is growing rapidly, and intelligent products will become a key feature in the future. Artificial neural networks have the capabilities to learn and recall and are highly parallel. However, conventional computers do not support the parallel computing and learning capability that are inherent in neural networks. Among the existing parallel architectures, SIMD (Single Instruction stream, Multiple Data streams) is the most suitable for the implementation of BPNs (back-propagation networks); therefore, the proposed architecture is based on SIMD. It uses a limited number of PEs to fulfill all the operations needed for the recalling phase and the learning phase. The architecture is not intended for one specific application: the proposed BPN chip can be reconfigured to any BPN structure by modifying a few parameters. Finally, two real cases are used to verify our design.
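The recalling and learning phases the chip implements follow the standard back-propagation algorithm. As a software-only sketch under assumptions of my own (pure Python, floating point, one hidden layer, invented class and method names; the actual chip is a fixed-point hardware design), a minimal BPN might look like this, trained here on XOR as a stand-in for the thesis's iris and letter-recognition benchmarks:

```python
import math, random

# Minimal illustrative BPN: sigmoid units, one hidden layer, on-line
# weight updates. Layer sizes are constructor parameters, echoing the
# reconfigurability described in the abstract. Not the thesis's design.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class BPN:
    def __init__(self, n_in, n_hid, n_out, lr=0.5, seed=0):
        rnd = random.Random(seed)
        self.lr = lr
        self.w1 = [[rnd.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
        self.w2 = [[rnd.uniform(-1, 1) for _ in range(n_hid + 1)] for _ in range(n_out)]

    def recall(self, x):
        # Recalling phase: forward pass through hidden and output layers.
        xb = x + [1.0]                       # append bias input
        self.h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in self.w1]
        hb = self.h + [1.0]
        self.o = [sigmoid(sum(w * v for w, v in zip(row, hb))) for row in self.w2]
        return self.o

    def learn(self, x, target):
        # Learning phase: propagate the output error back, update weights.
        self.recall(x)
        d_out = [(t - o) * o * (1 - o) for o, t in zip(self.o, target)]
        d_hid = [h * (1 - h) * sum(d * self.w2[k][j] for k, d in enumerate(d_out))
                 for j, h in enumerate(self.h)]
        hb, xb = self.h + [1.0], x + [1.0]
        for k, d in enumerate(d_out):
            for j in range(len(hb)):
                self.w2[k][j] += self.lr * d * hb[j]
        for j, d in enumerate(d_hid):
            for i in range(len(xb)):
                self.w1[j][i] += self.lr * d * xb[i]

# Train on XOR as a stand-in for the thesis's real benchmarks.
net = BPN(2, 4, 1)
data = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
        ([1.0, 0.0], [1.0]), ([1.0, 1.0], [0.0])]
for _ in range(5000):
    for x, t in data:
        net.learn(x, t)
for x, t in data:
    print(x, net.recall(x)[0])
```

On the chip, the two phases map onto the same pool of PEs under the control unit's schedule rather than onto per-layer loops as above.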

Chapter 1. Introduction …………………………………………………………. 1
1.1 Motivation …………………………………………………………………… 1
1.2 Design Flow …………………………………………………………………. 3
1.3 Thesis Outline ……………………………………………………………….. 4
Chapter 2. Back Propagation Networks ……………………………………… 6
2.1 BPN Structure ……………………………………………………………….. 6
2.2 Back-Propagation Learning Algorithm ……………………………………… 8
2.3 A Case Study ……………………………………………………………….. 11
Chapter 3. Analysis of Different Architectures ……………………………….. 17
3.1 Systolic Arrays ……………………………………………………………... 17
3.1.1 Deriving DGs from Given Algorithms …………………………… 17
3.1.2 Mapping DGs onto Array Structures ……………………………… 18
3.2 Data Flow ………………………………………………………………… 21
3.2.1 Marked Petri Net …………………………………………………… 23
3.3 SIMD ……………………………………………………………………….. 28
Chapter 4. The Proposed VLSI Architecture of BPN ………………………… 30
4.1 Specification ………………………………………………………………... 31
4.2 Control Unit ………………………………………………………………… 32
4.2.1 Scheduler …………………………………………………………… 33
4.2.2 TaskID Encoder …………………………………………………….. 34
4.2.3 Broker ………………………………………………………………. 35
4.2.4 Condition Checker ………………………………………………….. 39
4.3 Processing Element ………………………………………………………… 41
4.4 Memory Access Unit ……………………………………………………….. 49
Chapter 5. Simulation and Experiment ………………………………………... 56
5.1 Recognition of English Letters ……………………………………………... 56
5.2 Simulation ………………………………………………………………….. 59
5.3 Results Analysis ……………………………………………………………. 65
5.4 Another Example: Classification of Irises ………………………………….. 68
Chapter 6. Conclusion and Future Work ……………………………………… 71
6.1 Conclusion ………………………………………………………………….. 71
6.2 Future Work ………………………………………………………………... 74
References …………………………………………………………………………. 75

[1] James L. Peterson, Petri Net Theory and the Modeling of Systems, Englewood Cliffs, NJ: Prentice Hall, 1981.
[2] S. Y. Kung, VLSI Array Processors, Englewood Cliffs, NJ: Prentice Hall, 1988.
[3] S. Y. Kung, Digital Neural Networks, Englewood Cliffs, NJ: Prentice Hall, 1993.
[4] S. Haykin, Neural Networks: A Comprehensive Foundation, Upper Saddle River, NJ: Prentice Hall, 1999.
[5] S. Y. Kung and J. N. Hwang, “Parallel architectures for artificial neural nets,” Int. Conf. on Neural Networks, San Diego, California, vol. 2, pp. 166-175, 1988.
[6] Y. J. Jang, C. H. Park, and H. S. Lee, “A programmable digital neural-processor design with dynamically reconfigurable pipeline/parallel architecture,” in Proc. 1998 Int. Conf. on Parallel and Distributed Systems, 1998, pp.18-24.
[7] C. F. Jang and B. J. Sheu, “Design of a digital VLSI neural processor for signal and image processing,” in Proc. Neural Networks for Signal Processing, 1997, pp. 606-615.
[8] S. Shams and K. W. Przytula, “Mapping of neural networks onto programmable parallel machines,” IEEE Trans. on Circuits and Systems, vol. 4, pp. 2613-2617, 1990.
[9] J. J. Shyu, “VLSI design of RBF neural networks,” Master Thesis, National Chiao Tung University, Hsinchu, Taiwan, R. O. C., 1999.
[10] C. H. Kuo, “A systolic array based VLSI design of RBF neural networks,” Master Thesis, National Chiao Tung University, Hsinchu, Taiwan, R. O. C., 2000.
[11] Y. L. Lee, “Study on reconfigurable System-On-Chip architecture based on dataflow computing,” Master Thesis, National Chiao Tung University, Hsinchu, Taiwan, R. O. C., 2001.
[12] C. M. Wu, “Study on reconfigurable scheduling for heterogeneous System-On-Chip architecture,” Master Thesis, National Chiao Tung University, Hsinchu, Taiwan, R. O. C., 2001.
[13] S. Y. Kung and J. N. Hwang, “A unified systolic architecture for artificial neural networks,” Journal of Parallel and Distributed Computing, vol. 6, pp. 358-387, 1989.
[14] C. A. Mead, Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley, 1989.
[15] D. Hammerstrom, “A VLSI architecture for high performance, low cost, on-chip learning,” IJCNN, vol. 2, pp. 537-544, 1990.
[16] U. Ramacher, J. Beichter, N. Brula, and E. Sicheneder, “Architecture and VLSI design of a VLSI neural signal processor,” 1999 IEEE International Symposium on Circuits and Systems, vol. 3, pp. 1976-1978, 1999.
[17] J. L. Hennessy and D. A. Patterson, Computer Architecture: A Quantitative Approach, Second Edition, Morgan Kaufmann, 1996.
[18] K. C. Chang, Digital Systems Design with VHDL and Synthesis, Computer Society, 1999.
[19] C. T. Lin and C. S. George Lee, Neural Fuzzy Systems, NJ: Prentice Hall, 1996.
[20] H. Oh and S. C. Kothari, “Adaptation of the relaxation method for learning in bi-directional associative memory,” IEEE Trans. on Neural Networks, vol. 5, pp. 576-583, 1994.
[21] D. G. Elliott, M. Stumm, et al., “Computational RAM: implementing processors in memory,” IEEE Design & Test of Computers, vol. 16, pp. 32-41, 1997.
[22] A. Amira, A. Bouridane, et al., “A high throughput FPGA implementation of a bit-level matrix product,” IEEE Workshop on Signal Processing Systems, SiPS 2000, pp. 356-364, 2000.
[23] A. Kramer, “Array-based analog computation,” IEEE Micro, vol. 16, pp. 40-49, 1996.
[24] P. Pouliquen, A. G. Andreou, K. Strohbehn, “Winner-takes-all associative memory: a hamming distance vector quantizer,” Journal of Analog Integrated Circuits and Signal Processing, vol. 13, pp. 211-222, 1997.
[25] A. Chiang, “A programmable CCD signal processor,” IEEE Journal of Solid-State Circuits, vol. 25, pp. 1510-1517, 1990.
[26] C. Neugebauer and A. Yariv, “A parallel analog CCD/CMOS neural network IC,” Proc. IEEE Int. Joint Conference on Neural Networks, Seattle, WA, vol. 1, pp. 447-451, 1991.
[27] F. Kub, K. Moon, I. Mack, and F. Long, “Programmable analog vector-matrix multipliers,” IEEE Journal of Solid-State Circuits, vol. 25, pp. 207-214, 1990.
[28] A. G. Andreou, K. A. Boahen, and P. O. Pouliquen, “Current-mode subthreshold MOS circuits for analog VLSI neural networks,” IEEE Trans. on Neural Networks, vol. 2, pp. 205-213, 1991.
[29] J. C. Gealow and C. G. Sodini, “A pixel-parallel image processor using logic pitch-matched to dynamic memory,” IEEE J. Solid-State Circuits, vol. 34, pp. 831-839, 1999.
[30] H. Watanabe, W. D. Dettloff, and K. E. Yount, “A VLSI fuzzy logic controller with reconfigurable and cascadable architecture,” IEEE J. Solid-State Circuits, vol. 25, pp. 376-382, 1990.
[31] K. Nakamura et al., “A 12-bit resolution 200 KFLIPS fuzzy inference processor,” IEICE Trans. Electronics, vol. 10, pp. 1102-1111, 1993.
[32] A. Hiraiwa, M. Fujita, S. Kurosu, S. Arisawa, and M. Inoue, “Implementation of ANN on RISC processor array,” in Proc. Int. Conf. on Application Specific Array Processors, 1990, pp. 677-688.
[33] J. N. Hwang and S. Y. Kung, “A systolic neural network architecture for hidden Markov models,” IEEE Trans. on Acoustics, Speech and Signal Processing, vol. 37, pp. 1967-1979, 1989.
[34] Z. G. Xie, “Chip implementation of a processor for multiple neural networks models,” Master Thesis, National Taiwan University, Taipei, Taiwan, R. O. C., June 1995.
