
National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Detailed Record

Author: 謝承恩 (HSIEH, CHEN-EN)
Title: Hardware Implementation of Deep Belief Network with Stochastic Computing (基於隨機計算之深度信念網路硬體實作)
Advisor: 朱紹儀 (CHU, SHAO-I)
Committee: 朱紹儀 (CHU, SHAO-I), 蕭勝夫 (HSIAO, SHEN-FU), 黃有榕 (HUANG, YU-JUNG), 連志原 (LIEN, CHIH-YUAN)
Defense date: 2019-08-06
Degree: Master's
Institution: National Kaohsiung University of Science and Technology (國立高雄科技大學)
Department: Department of Electronic Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2019
Academic year of graduation: 107
Language: Chinese
Pages: 54
Keywords (Chinese): 隨機計算, 完整隨機計算, 新型完整隨機計算, 深度信念網路, 雙曲正切函數, 指數函數, S型函數
Keywords (English): Stochastic Computing, Integral Stochastic Computing, New Integral Stochastic Computing, deep neural network, hyperbolic tangent, exponentiation, sigmoid
Usage statistics:
  • Cited by: 0
  • Views: 148
  • Downloads: 0
  • Bookmarks: 0
Stochastic computing (SC) is a low-hardware-cost computing paradigm in which a variety of functions can be realized with simple logic gates. Conventional SC, however, is limited to representing values between 0 and 1; integral stochastic computing (ISC), proposed later, effectively extends the representable range to [-m, m], allowing SC to be applied in many more domains. Building on this, last year we proposed a new integral stochastic computing (NISC) scheme that improves on ISC by simplifying the decision conditions and reducing the number of finite-state-machine states, thereby lowering the overall hardware cost. The deep belief network (DBN) is a classic and reliable neural network that can be applied to classification and generative tasks.
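As a concrete illustration of the gate-level arithmetic described above, the sketch below is a behavioral Python model (my own function names, not the thesis's hardware): a bipolar value x in [-1, 1] is encoded as a bit stream with P(bit = 1) = (x + 1)/2, and multiplication then costs a single XNOR gate per bit.

```python
import random

def to_stream(x, n, rng):
    """Encode a bipolar value x in [-1, 1] as an n-bit stochastic
    stream with P(bit = 1) = (x + 1) / 2."""
    p = (x + 1) / 2
    return [1 if rng.random() < p else 0 for _ in range(n)]

def from_stream(bits):
    """Decode a bipolar stream: x = 2 * P(1) - 1."""
    return 2 * sum(bits) / len(bits) - 1

def xnor_mul(a_bits, b_bits):
    """Bipolar multiplication costs one XNOR gate per bit pair."""
    return [1 - (a ^ b) for a, b in zip(a_bits, b_bits)]

rng = random.Random(0)
n = 1 << 14                      # longer streams -> lower variance
a, b = 0.5, -0.6
prod = from_stream(xnor_mul(to_stream(a, n, rng), to_stream(b, n, rng)))
# prod is a noisy estimate of a * b = -0.3
```

The XNOR works because P(out = 1) = p_a p_b + (1 - p_a)(1 - p_b), which decodes to exactly x_a · x_b; the price of the tiny gate count is the stream length needed to suppress the estimator's variance.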
In this thesis, we propose a DBN based on the new integral stochastic computing system (NISC-DBN) to improve on the DBN based on the integral stochastic computing system (ISC-DBN). Following other international publications, the 784-100-200-10 network architecture is adopted as the basis for implementation, and simulation data are provided for both SC-DBN and ISC-DBN. The weights and biases are uniformly extracted from pre-trained data for the MNIST handwritten-digit classification task. Simulation error between the designs is reported as the mean-square error (MSE), and the ISC-DBN and NISC-DBN neurons are all synthesized in the TSMC 40 nm process. The neuron synthesis data show that the proposed NISC-DBN achieves better simulation results and lower latency than ISC-DBN while increasing the hardware cost by only 1.6%, successfully improving on the existing ISC-DBN.
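The FSM-based activation functions and the mean-square-error comparison used above can be illustrated with the classic stochastic tanh of Brown and Card (a saturating up/down counter), the family of circuits that the ISC and NISC designs generalize. This is a behavioral sketch under my own naming, not the synthesized circuit from the thesis.

```python
import math
import random

def stanh(bits, k):
    """Behavioral model of the classic FSM-based stochastic tanh
    (Brown-Card Stanh): a saturating k-state up/down counter whose
    output bit is 1 while the state sits in the upper half.  For a
    bipolar input x it approximates tanh(k * x / 2)."""
    state = k // 2
    out = []
    for b in bits:
        state = min(k - 1, state + 1) if b else max(0, state - 1)
        out.append(1 if state >= k // 2 else 0)
    return out

def bipolar_stream(x, n, rng):
    p = (x + 1) / 2
    return [1 if rng.random() < p else 0 for _ in range(n)]

def decode(bits):
    return 2 * sum(bits) / len(bits) - 1

rng = random.Random(2)
n, k = 1 << 15, 8
xs = [i / 10 for i in range(-8, 9)]
est = [decode(stanh(bipolar_stream(x, n, rng), k)) for x in xs]
ref = [math.tanh(k * x / 2) for x in xs]
mse = sum((e - r) ** 2 for e, r in zip(est, ref)) / len(xs)
```

The MSE over the grid of test inputs is the same figure of merit used to compare the SC, ISC, and NISC designs: it folds both the FSM's approximation error and the stream's sampling noise into one number.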

The deep belief network (DBN) is a classic and representative neural network designed to solve classification problems. Stochastic computing (SC) is a highly efficient and attractive low-cost hardware paradigm in which computation can be implemented with simple logic gates. The range of conventional SC in the bipolar format is limited to the interval [-1, 1], while integral stochastic computing (ISC) expands the range to [-m, m], where m is the number of input streams. New integral stochastic computing (NISC) has recently been introduced to reduce the hardware cost of ISC by reducing the number of states in the finite-state machine (FSM). In this thesis, we propose a novel NISC-DBN architecture to reduce the hardware cost of the conventional ISC-DBN framework. The four-layer DBN structure 784-100-200-10 is considered. Simulation results reveal that NISC-DBN outperforms ISC-DBN in terms of mean-square error (MSE). The classification accuracy of NISC-DBN is also superior to that of ISC-DBN on the Modified National Institute of Standards and Technology (MNIST) dataset. The proposed NISC-DBN increases the hardware cost by only 1.6% over ISC-DBN when implementing the stochastic neuron of the first layer.
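The integral stochastic encoding summarized above admits a simple behavioral model: a value in [-m, m] is split across m bipolar streams, and the integral stream carries their per-cycle sum as an integer symbol in {0, …, m}. The sketch below uses my own function names and is an illustration of the encoding only, not the stream-generator hardware.

```python
import random

def isc_encode(x, m, n, rng):
    """Integral stochastic encoding: a value x in [-m, m] is split
    across m independent bipolar streams, each carrying x / m.  The
    integral stream is the per-cycle sum of the m bits, i.e. an
    integer symbol in {0, ..., m}."""
    p = (x / m + 1) / 2
    streams = [[1 if rng.random() < p else 0 for _ in range(n)]
               for _ in range(m)]
    return [sum(col) for col in zip(*streams)]

def isc_decode(symbols, m):
    """Invert the bipolar mapping: E[symbol] = (x + m) / 2."""
    return 2 * sum(symbols) / len(symbols) - m

rng = random.Random(1)
m, n = 4, 1 << 14
x = 2.5                          # representable because |x| <= m
x_hat = isc_decode(isc_encode(x, m, n, rng), m)
# x_hat is a noisy estimate of x
```

Widening the range this way is what lets a stochastic neuron accumulate many weighted inputs without saturating at ±1, at the cost of log2(m + 1)-bit symbols instead of single bits.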

Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Introduction
1.1 Survey of Stochastic Computing Research
1.2 Motivation and Goals for a Stochastic-Computing Deep Belief Network
1.3 Contributions
Implementation and Discussion of Stochastic Computing Literature
2.1 Introduction to Stochastic Computing and Its Elements
2.2 Conventional Stochastic Computing Elements with Finite State Machines
2.3 Integral Stochastic Computing Elements with Finite State Machines
2.5 The A-SCAU Element Combining Stochastic Computing with Binary Circuits
Implementation and Discussion of Deep Belief Networks Based on Stochastic Computing
3.1 Deep Belief Network Implemented with Integral Stochastic Computing
3.2 Deep Belief Network with the Stochastic-Binary A-SCAU Element
Deep Belief Network Based on New Integral Stochastic Computing
4.1 New Integral Stochastic Computing Elements with Finite State Machines
4.2 Derivation of the New Integral Stochastic Computing Elements
4.3 Deep Belief Network Based on New Integral Stochastic Computing
Hardware Simulation Results and Synthesis Data
5.1 Hyperbolic Tangent tanh(x)
5.1.1 Simulation Results
5.1.2 Synthesis Data
5.2 Exponential Function exp(x)
5.2.1 Simulation Results
5.2.2 Synthesis Data
5.3 Sigmoid Function sigmoid(x)
5.3.1 Simulation Results
5.3.2 Synthesis Data
5.3.3 Simulation Data for the Sigmoid of the First-Layer Neurons
5.3.4 Simulation Data for Different Bit-Stream Lengths
Conclusions and Future Work
References

[1] B. R. Gaines, “Stochastic computing systems,” in Advances in Information Systems
Science. Boston, MA, USA: Springer, 1969, pp. 37–172.
[2] S. S. Tehrani, S. Mannor, and W. J. Gross, “Fully parallel stochastic LDPC
decoders,” IEEE Trans. Signal Process., vol. 56, no. 11, pp. 5692–5703, Nov. 2008.
[3] Y.-L. Ueng, C.-Y. Wang, and M.-R. Li, “An efficient combined bit-flipping and
stochastic LDPC decoder using improved probability tracers,” IEEE Trans. Signal
Process., vol. 65, no. 20, pp. 5368–5380, Oct. 2017.
[4] P. Li and D. J. Lilja, “Using stochastic computing to implement digital image
processing algorithms,” in Proc. IEEE 29th Int. Conf. Comput. Design (ICCD), Oct.
2011, pp. 154–161.
[5] P. Li, D. J. Lilja, W. Qian, K. Bazargan, and M. D. Riedel, “Computation on
stochastic bit streams digital image processing case studies,” IEEE Trans. Very
Large Scale Integr. (VLSI) Syst., vol. 22, no. 3, pp. 449–462, Mar. 2014.
[6] Y. Liu and K. K. Parhi, “Architectures for recursive digital filters using stochastic
computing,” IEEE Trans. Signal Process., vol. 64, no. 14, pp. 3705–3718, Jul. 2016.
[7] N. Onizawa, D. Katagiri, K. Matsumiya, W. J. Gross, and T. Hanyu, “Gabor filter
based on stochastic computation,” IEEE Signal Process. Lett., vol. 22, no. 9, pp.
1224–1228, Sep. 2015.
[8] D. Cireşan, U. Meier, and J. Schmidhuber, “Multi-column deep neural networks
for image classification,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit.
(CVPR), 2012, pp. 3642–3649.
[9] G. E. Dahl, D. Yu, L. Deng, and A. Acero, “Context-dependent pretrained deep
neural networks for large-vocabulary speech recognition,” IEEE Trans. Audio,
Speech, Lang. Process., vol. 20, no. 1, pp. 30–42, Jan. 2012.
[10] C. Szegedy, A. Toshev, and D. Erhan, “Deep neural networks for object detection,”
in Proc. Adv. Neural Inf. Process. Syst., 2013, pp. 2553–2561.
[11] B. D. Brown and H. C. Card, “Stochastic neural computation. I: Computational
elements,” IEEE Trans. Comput., vol. 50, no. 9, pp. 891–905, Sep. 2001.
[12] B. D. Brown and H. C. Card, “Stochastic neural computation. II. Soft competitive
learning,” IEEE Trans. Comput., vol. 50, no. 9, pp. 906–920, Sep. 2001.
[13] J. P. Hayes, “Introduction to stochastic computing and its challenges,” in Proc.
DAC, Jun. 2015, pp. 1–3.
[14] A. Alaghi and J. P. Hayes, “Dimension reduction in statistical simulation of digital
circuits,” in Proc. Symp. Theory Modeling Simulation, DEVS Integr. M&S Symp.,
2015, pp. 1–8.
[15] A. Ardakani, F. Leduc-Primeau, N. Onizawa, T. Hanyu, and W. J. Gross, “VLSI
implementation of deep neural network using integral stochastic computing,”
IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 25, no. 10, pp. 2688 –
2699, Oct. 2017.
[16] S. Chu, C. Hsieh, and Y. Huang, “Design of FSM-based function with reduced
number of states in integral stochastic computing,” IEEE Trans. Very Large
Scale Integr. (VLSI) Syst., vol. 27, no. 6, pp. 1475–1479, Jun. 2019.
[17] P. Li, D. J. Lilja, W. Qian, M. D. Riedel, and K. Bazargan, “Logical computation
on stochastic bit streams with linear finite-state machines,” IEEE Trans. Comput.,
vol. 63, no. 6, pp. 1474–1486, Jun. 2014.
[18] Z. Li, A. Ren, J. Li, Q. Qiu, Y. Wang, and B. Yuan, “DSCNN: Hardware-oriented
optimization for stochastic computing based deep convolutional neural networks,”
in Proc. IEEE Int. Conf. Comput. Design (ICCD), 2016, pp. 678–681.
[19] J. L. Rosselló, V. Canals, and A. Morro, “Probabilistic-based neural network
implementation,” in Proc. Int. Joint Conf. Neural Netw. (IJCNN), 2012, pp. 1–7.
[20] Y. Ji, F. Ran, C. Ma, and D. J. Lilja, “A hardware implementation of a radial basis
function neural network using stochastic logic,” in Proc. Design, Autom. Test Eur.
Conf. (DATE), 2015, pp. 880–883.
[21] Y. Liu, “Digital signal processing and machine learning system design using
stochastic logic,” Ph.D. dissertation, Univ. of Minnesota, 2017. [Online].
Available: http://hdl.handle.net/11299/190534
[22] A. Alaghi and J. Hayes, “Exploiting correlation in stochastic circuit design,” in
Proc. IEEE 31st Int. Conf. Comput. Design, 2013, pp. 39–46.
[23] Y. Liu, Y. Wang, F. Lombardi, and J. Han, “An energy-efficient online-learning
stochastic computational deep belief network,” IEEE Trans. Emerg. Sel. Topics
Circuits Syst., vol. 8, no. 3, pp. 454–465, Sep. 2018.
[24] R. K. Budhwani, R. Ragavan, and O. Sentieys, “Taking advantage of correlation
in stochastic computing,” in Proc. IEEE Int. Symp. Circuits Syst. (ISCAS),
May 2017, pp. 1–4.

Electronic full text (publicly available online from 2024-08-28)