Author: 蔡俊成 (Jun-Cheng Tsai)
Title: 泛用型環狀高效類神經硬體設計
Title (English): Implementation of High Performance Hardware Based Toroidal Neural Network
Advisor: 蔡孟伸
Committee Members: 陳昭榮, 張志永
Oral Defense Date: 2007-07-12
Degree: Master's
Institution: 國立臺北科技大學 (National Taipei University of Technology)
Department: Graduate Institute of Automation Technology (自動化科技研究所)
Discipline: Engineering
Field: Mechanical Engineering
Document Type: Academic thesis
Year of Publication: 2007
Academic Year: 95 (2006-2007)
Language: Chinese
Pages: 74
Keywords (Chinese): 倒傳遞類神經, 可程式化邏輯閘陣列, 環狀串列
Keywords (English): Back-Propagation Neural Network, FPGA, Toroidal Serial
Cited by: 5
Views: 68
Downloads: 0
Neural networks play an important role in many of today's artificial-intelligence applications. In most applications, they are implemented in software on general-purpose computers. Although this offers flexibility, the computation is very time-consuming, and learning can only be performed off-line, which restricts the range of applications. Neural networks require a large number of mathematical operations; systems implemented in software therefore run well only on high-speed computers and cannot be deployed on low-end embedded systems. With advances in technology, researchers have attempted to implement neural networks in hardware to improve speed. Some hardware designs target only a specific network architecture and fixed parameters, which lengthens development time and limits portability; others consume large numbers of logic elements and large chip area, raising cost. The goal of this thesis is to develop a high-performance, general-purpose neural network system that uses a toroidal serial multiple-data-bus architecture to perform back-propagation computations, providing complete recall and learning functionality. Users can adjust the number of processing units in the array to match different back-propagation topologies without redesigning the whole system. This development is expected to extend neural network applications to low-end embedded systems and enable a new generation of applications. Compared with previous neural network hardware architectures, the proposed design uses fewer logic elements while remaining flexible and achieving better execution performance.
Neural networks play an important role in many artificial intelligence application domains. In most applications, neural networks are implemented in software. Although a software implementation provides flexibility, its speed is limited by the sequential machine architecture, and the learning procedure is usually carried out off-line. A large number of mathematical operations is needed when the learning task of a neural network is performed, so software implementations work well only on high-speed computers; their performance is not adequate on embedded systems. With the development of modern technologies, researchers have attempted to realize neural networks in hardware to improve performance. Designs utilizing special architectures and parameters were proposed in the past to provide higher performance. This thesis proposes a high-efficiency, generic neural network hardware architecture. The architecture uses a toroidal serial multiple data stream to process back-propagation neural network operations and provides full recall and learning capabilities. Users can adjust the number of processing units in the system based on the requirements of the application. Since the proposed system is implemented in hardware, it can be integrated into embedded systems. Experimental results show that the system achieves higher performance using fewer logic elements while maintaining flexibility.
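The recall (forward) and learning (back-propagation) phases described in the abstract can be illustrated with a minimal software sketch. The following is a plain-Python example of a three-layer back-propagation network with on-line weight updates; it is illustrative only and does not reflect the thesis's hardware design, its toroidal serial architecture, or its fixed-point number system — all class and variable names are the author's of this sketch, not the thesis's.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class BPNetwork:
    """Minimal fully connected three-layer back-propagation network."""
    def __init__(self, n_in, n_hid, n_out, lr=0.5):
        self.lr = lr
        # each row holds one unit's weights plus a bias weight
        # (the extra input fixed at 1.0)
        self.w_hid = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                      for _ in range(n_hid)]
        self.w_out = [[random.uniform(-0.5, 0.5) for _ in range(n_hid + 1)]
                      for _ in range(n_out)]

    def recall(self, x):
        """Forward computation (the 'recall' phase)."""
        self.x = list(x) + [1.0]
        self.h = [sigmoid(sum(w * v for w, v in zip(row, self.x)))
                  for row in self.w_hid] + [1.0]
        self.y = [sigmoid(sum(w * v for w, v in zip(row, self.h)))
                  for row in self.w_out]
        return self.y

    def learn(self, x, target):
        """One on-line learning step: forward pass, then weight update."""
        y = self.recall(x)
        # output-layer deltas: (t - y) * y * (1 - y)
        d_out = [(t - o) * o * (1.0 - o) for t, o in zip(target, y)]
        # hidden-layer deltas, back-propagated through the output weights
        d_hid = [self.h[j] * (1.0 - self.h[j]) *
                 sum(d_out[k] * self.w_out[k][j] for k in range(len(d_out)))
                 for j in range(len(self.h) - 1)]
        for k, row in enumerate(self.w_out):
            for j in range(len(row)):
                row[j] += self.lr * d_out[k] * self.h[j]
        for j, row in enumerate(self.w_hid):
            for i in range(len(row)):
                row[i] += self.lr * d_hid[j] * self.x[i]
        return sum((t - o) ** 2 for t, o in zip(target, y))

# small demonstration: learn the XOR mapping
net = BPNetwork(2, 3, 1)
samples = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
before = sum(net.learn(x, t) for x, t in samples)
for _ in range(3000):
    err = sum(net.learn(x, t) for x, t in samples)
print("summed squared error before/after:", before, err)
```

In the hardware architecture the thesis proposes, the per-unit multiply-accumulate work inside `recall` and `learn` is what is distributed across the ring of processing units, which is why the unit count can be scaled to the network topology without redesigning the system.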
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iv
Table of Contents v
List of Tables viii
List of Figures ix
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Literature Review 2
1.2.1 Multiple Instruction, Multiple Data Stream 2
1.2.2 Single Instruction, Multiple Data Stream 3
1.2.3 Two-Dimensional Systolic Array Architecture 4
1.2.4 One-Dimensional Systolic Array 5
1.3 Thesis Organization 9
Chapter 2 Neural Networks 10
2.1 Introduction 10
2.2 Neurons 11
2.3 Processing Elements 12
2.3.1 Summation Function 13
2.3.2 Activity Function 13
2.3.3 Activation Function 14
2.4 Classification of Neural Networks 15
2.5 Back-Propagation Neural Network Architecture 17
2.5.1 Forward Computation 19
2.5.2 Backward Computation 20
Chapter 3 System Architecture 23
3.1 Origin 23
3.2 Development Platform 23
3.2.1 NIOS Embedded Processor 24
3.2.2 Avalon Bus Interface 25
3.3 Hardware Architecture and Principles 28
3.3.1 Toroidal Architecture 29
3.3.2 Processing Elements 31
3.3.3 Activation Function Simplification 32
3.3.4 Hardware Sharing 33
3.3.5 Learning Architecture 34
3.3.6 Stack and Queue Control 36
3.3.7 Control Unit 36
3.3.8 Random Number Generator 37
3.3.9 Number System 38
3.4 Procedure 39
3.4.1 Overall Flow 39
3.4.2 Weight Initialization 40
3.4.3 Forward Computation 41
3.4.4 Backward Computation 42
3.4.4.1 Calculation 42
3.4.4.2 Calculation 44
3.4.4.3 Weight Update Calculation 45
3.5 Activation Function Approximation 46
3.5.1 Related Work 46
3.5.1.1 Similar-Shaped Functions 46
3.5.1.2 Logic-Operation Approximation 47
3.5.1.3 Lookup Table Method 49
3.5.1.4 CORDIC 50
3.5.1.5 Taylor Expansion 51
3.5.1.6 Interpolation 52
3.5.2 Optimized Lookup-Table Interpolation 54
3.5.2.1 Finding Optimal Breakpoints by the Maximum-Distance Secant Method 54
Chapter 4 System Verification and Performance Analysis 58
4.1 Sine Function Curve Fitting 58
4.1.1 Parameters 59
4.1.2 Experimental Results and Performance Analysis 59
4.2 Classification Problem 62
4.2.1 Parameters 63
4.2.2 Experimental Results and Performance Analysis 64
4.3 Overcurrent Protection Curve 64
4.3.1 Parameters 65
4.3.2 Experimental Results and Performance Analysis 66
Chapter 5 Conclusions and Future Work 68
5.1 Conclusions 68
5.2 Contributions 69
5.3 Future Work and Suggestions 69
References 70
About the Author 74
[1]J.A.B. Fortes and B.W. Wah, "Systolic Arrays: from concepts to implementation," IEEE Computer, vol. 20, Issue 7, 1987, pp. 12-17.
[2]Accurate Automation Corporation, AAC Neural Network MIMD Processor, Technical Data Sheet, Chattanooga, TN, 1995.
[3]R. Saeks, K. Priddy, K. Schnieder and S. Stowell, "On the Design of an MIMD Neural Network Processor," Proceedings of World Congress on Neural Networks '95, San Diego, California, pp. 590-595, June 1995.
[4]郭功勳, VLSI Design of Back-Propagation Neural Networks (倒傳遞類神經網路之VLSI設計), Master's thesis, Institute of Electrical and Control Engineering, National Chiao Tung University, Hsinchu, 2001.
[5]S. Shams and K.W. Przytula, "Mapping of Neural Network onto Programmable Parallel Machines," IEEE International Symposium on Circuits and Systems, New Orleans, USA, 1990, vol. 4, pp. 2613-2617.
[6]S. Jones, K. Sammut, C. Nielsen and J. Staunstrup, "Toroidal Neural Network: Architecture and Processor Granularity Issues," VLSI design of Neural Networks, 1990, pp. 229-255.
[7]D. Naylor and S. Jones, VHDL: A Logic Synthesis Approach, Cambridge: Chapman & Hall, 1997, pp. 271-303.
[8]D. Hammerstrom, "A Digital VLSI Architecture for Real-World Applications," An introduction to neural and electronic networks, 1995, pp. 343.
[9]S. Y. Kung, "VLSI Array processors," IEEE ASSP Magazine, vol. 2 , Issue 3, Part 1, 1985, pp. 4-22.
[10]Y. Fujimoto, "An enhanced parallel planar lattice architecture for large scale neural network simulations," 1990 IJCNN International Joint Conference on Neural Networks, 17-21 Jun 1990, vol. 2, pp. 581-586.
[11]I.Z. Mihu, R. Brad and M. Breazu, "Specifications and FPGA implementation of a systolic Hopfield-type associative memory," Proceedings of IJCNN '01, International Joint Conference on Neural Networks, 2001, vol. 1, pp. 228-233.
[12]I.Z. Mihu and H.V. Caprita, "Architectural improvements and FPGA implementation of a multimodel neuroprocessor," Proceedings of the 9th International Conference on Neural Information Processing, 18-22 Nov. 2002, vol. 4, pp. 1749-1753.
[13]E.R. Khan and N. Ling, "Systolic architectures for artificial neural nets," 1991 IEEE International Joint Conference on Neural Networks, 18-21 Nov. 1991, vol. 1, pp. 620-627.
[14]S. Mahapatra, "Mapping of neural network models onto systolic arrays," Parallel and Distributed Computing, vol. 60, Issue 6, 2000, pp. 677-689.
[15]S.Y. Kung and J.N. Hwang, "Digital VLSI architectures for neural networks," IEEE International Symposium on Circuits and Systems, 8-11 May 1989, vol. 1, pp. 445-448.
[16]S.Y. Kung and J.N. Hwang, "A unifying algorithm/architecture for artificial neural networks," International Conference on Acoustics, Speech, and Signal Processing, ICASSP-89, 23-26 May 1989, vol. 4, pp. 2505-2508.
[17]D. Hammerstrom, "A VLSI architecture for high-performance, low-cost, on-chip learning," Proceedings of International Joint Conference on Neural Networks, 1990, pp. 537-544.
[18]D. Hammerstrom, Digital VLSI for Neural Networks, The Handbook of Brain Theory and Neural Networks, Second Edition, Michael Arbib, MIT Press, 2003.
[19]K. Wojtek Przytula, "Parallel digital implementations of neural networks," Proceedings of the International Conference on Application Specific Array Processors, 2-4 Sep. 1991, pp. 162-176.
[20]T.D. Chiueh and H.T. Chang, "One Dimensional Systolic Array Architecture For Neural Network," United States Patent, 1998, No.579134
[21]W. S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in neurons activity," Bull. Math. Biophys., 1943, vol. 5, pp. 115–133.
[22]F. Rosenblatt, "The Perceptron: A perceiving and recognizing automaton, Project PARA," Report 85-460-1, Project PARA, Cornell Aeronautical Laboratory, Ithaca, New York, 1957.
[23]M. Minsky and S. Papert, Perceptrons: An Introduction to Computational Geometry, Cambridge, Mass., MIT Press, 1969.
[24]J.J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences, 1982, vol. 79, pp. 2554-2558.
[25]林昇甫、洪成安, Introduction to Neural Networks and Pattern Recognition (神經網路入門與圖樣辨識), 2nd ed., Chuan Hwa Book Co., Taipei, 1996.
[26]J.M. Twomey, A.E. Smith and M.S. Redfern, "A Predictive Model for Slip Resistance Using Artificial Neural Networks," IIE Transactions, 1995, vol. 27, pp. 374–381.
[27]陳慶全, Implementation and Applications of an Improved Toroidal Neural Network Architecture (改良式環狀類神經網路架構之實現與應用), Master's thesis, Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei, 2006.
[28]S. Haykin, Neural Networks: A Comprehensive Foundation 2nd Ed., New Jersey: Prentice-Hall, 1999, pp. 174.
[29]P.H. Bardell, W.H. McAnney and J. Savir, Built-In Test for VLSI: Pseudo-Random Techniques, New York: John Wiley & Sons, 1987.
[30]J.L. Holt and T.E. Baker, "Back propagation simulations using limited precision calculations," IJCNN-91-Seattle International Joint Conference on Neural Networks, Seattle, WA, USA, 8-14 Jul 1991, vol. 2, pp. 121-126.
[31]H.K. Kwan, "Simple sigmoid-like activation function suitable for digital hardware implementation," Electronics Letters, 6 Jul 1992, vol. 28, Issue 15, pp. 1379-1380.
[32]M.T. Tommiska, "Efficient digital implementation of the sigmoid function for reprogrammable logic," Proceedings of IEE Computers and Digital Techniques, 17 Nov. 2003, vol. 150, Issue: 6, pp. 403-411.
[33]P. D. Reynolds, Algorithm Implementation in FPGAs Demonstrated Through Neural Network Inversion on the SRC-6e, Master's Thesis, Baylor University, Waco, Texas, 2005.
[34]H. Hahn, D. Timmermann, B.J. Hosticka and B. Rix, "A unified and division-free CORDIC argument reduction method with unlimited convergence domain including inverse hyperbolic functions," IEEE Transactions on Computers, 1994, vol. 43, Issue 11, pp 1339-1344.