Author: 郭豐源
Author (English): KUO, FENG-YANG
Title: 人工智慧之快速類神經網路演算法及FPGA即時辨識
Title (English): Real-Time Object Detector Based on Fast Algorithm Deep Convolutional Neural Networks on an FPGA
Advisor: 夏世昌
Advisor (English): HSIA, SHIH-CHANG
Committee members: 夏世昌, 王斯弘, 陳朝烈, 黃國興
Committee members (English): HSIA, SHIH-CHANG; WANG, SZ-HUNG; CHEN, CHAO-LIEH; HUANG, GUO-SHING
Oral defense date: 2020-07-30
Degree: Master's
Institution: 國立雲林科技大學 (National Yunlin University of Science and Technology)
Department: 電子工程系 (Department of Electronic Engineering)
Discipline: Engineering
Academic field: Electrical and Computer Engineering
Thesis type: Academic thesis
Publication year: 2020
Graduation academic year: 108 (ROC calendar)
Language: Chinese
Number of pages: 104
Keywords (Chinese): 神經網絡, 串聯, 卷積, 特徵圖
Keywords (English): Neural network, SumNet, convolution, concatenation, feature map, Xilinx

Table of contents:
Abstract (in Chinese) i
Abstract (in English) ii
Acknowledgements iii
Table of Contents iv
List of Tables vii
List of Figures viii
Chapter 1: Introduction 1
1.1 Preface 1
1.2 Research Motivation and Objectives 2
1.3 Thesis Organization 3
Chapter 2: Related Work and Discussion 4
2.1 Evolution of Network Architectures 4
2.2 Related Work on VGGNet 4
2.3 Related Work on ResNet 6
2.4 Related Work on DenseNet 8
2.5 Related Work on Datasets 9
Chapter 3: Fast Convolution Algorithm 12
3.1 Algorithm Architecture 12
3.2 Convolutional Neural Networks 13
3.2.1 Neuron Operations 13
3.2.2 Convolution Operations 14
3.2.3 Activation Functions 15
3.2.4 Pooling Layers 17
3.2.5 Fully Connected Layers 19
3.3 Fast Convolutional Neural Network Architecture 20
3.3.1 Fast Convolutional Neural Network 21
3.4 Analysis of Experimental Results 22
Chapter 4: Deep Neural Network Research Method Based on TensorFlow 27
4.1 Research Tools 27
4.1.1 Research Tools 27
4.2 Experimental Design 28
4.3 Data Processing 28
4.4 Model Construction 33
4.4.1 SUM-Block 33
4.4.2 Sum 34
4.4.3 Conv-BN-Activation 36
4.4.4 Concatenation 38
4.4.5 SUM-VGG16 Architecture 39
4.5 Experimental Results 44
4.5.1 Replacing the FC Layer with an Average Pooling Layer 44
4.5.2 Heat Map Analysis 47
4.5.3 SUM-Block Weight Analysis 52
4.5.4 CIFAR-10 Experimental Results 54
4.5.5 CIFAR-100 Experimental Results 58
4.5.6 Tiny-ImageNet Experimental Results 60
4.5.7 Comparison of Network Models 61
Chapter 5: FPGA Hardware Architecture Design 64
5.1 Research Tools 64
5.2 Hardware/Software Architecture Flowchart 65
5.3 Hardware Architecture Design 67
5.4 DPU Architecture Design 69
5.5 ZCU104 IP Testing 72
5.6 Hybrid DPU Executable 73
5.6.1 Quantization 74
5.6.2 Quantized Data Analysis 76
5.6.3 DPU Synthesis 77
5.7 Static Testing 78
5.8 Dynamic Recognition 83
Chapter 6: Conclusion 88
References 89


References:
[1] Wikipedia, "Artificial intelligence."
[2] World Economic Forum (WEF), 2016.
[3] 大和有話說, "AI人工智慧:3大浪潮+3大技術+3大應用" (AI: Three Waves, Three Technologies, Three Applications), 2018.
[4] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
[5] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. 25th Int. Conf. Neural Information Processing Systems (NIPS'12), Dec. 2012, pp. 1097-1105.
[6] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in Proc. ICLR, 2015.
[7] C. Szegedy et al., "Going deeper with convolutions," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Boston, MA, 2015, pp. 1-9.
[8] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 770-778.
[9] L. Fei-Fei, "ImageNet Large Scale Visual Recognition Challenge," 2010. [Online]. Available: http://image-net.org.
[10] "CNN Architectures — LeNet, AlexNet, VGG, GoogLeNet and ResNet," mc.ai, 2018. [Online]. Available: https://mc.ai/cnn-architectures-lenet-alexnet-vgg-googlenet-and-resnet/.
[11] JT, "DeepLearning." [Online]. Available: https://medium.com/@danjtchen.
[12] M. Lin, Q. Chen, and S. Yan, "Network in Network," CoRR, 2013.
[13] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017, pp. 2261-2269.
[14] A. Krizhevsky, "The CIFAR-10 and CIFAR-100 datasets." [Online]. Available: https://www.cs.toronto.edu/~kriz/cifar.html.
[15] "Tiny ImageNet Visual Recognition Challenge," Stanford University, 2015. [Online]. Available: https://tiny-imagenet.herokuapp.com/.
[16] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in Proc. 32nd Int. Conf. Machine Learning (ICML), July 2015, pp. 448-456.
[17] P. Rémy, "keract." [Online]. Available: https://github.com/philipperemy/keract.
[18] Xilinx. [Online]. Available: https://www.xilinx.com/.
[19] Xilinx, "ZCU104 Evaluation Board User Guide (UG1267)," 2018, p. 9.
[20] Xilinx, "Zynq DPU v3.1 IP Product Guide," 2019.
[21] B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam, and D. Kalenichenko, "Quantization and training of neural networks for efficient integer-arithmetic-only inference," in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, 2018, pp. 2704-2713.
[22] Xilinx, "Vitis AI User Guide (UG1414)," 2019. [Online]. Available: https://www.xilinx.com/support/documentation/sw_manuals/vitis_ai/1_0/ug1414-vitis-ai.pdf.
[23] Y. Wang, J. Xu, Y. Han, H. Li, and X. Li, "Automatic generation of FPGA-based learning accelerators for the neural network family," 2017.
[24] C. Zhang, Z. Fang, P. Zhou, P. Pan, and J. Cong, "Caffeine: Towards uniformed representation and acceleration for deep convolutional neural networks," in Proc. ICCAD, 2016.
[25] S. I. Venieris and C.-S. Bouganis, "Latency-driven design for FPGA-based convolutional neural networks," in Proc. IEEE 27th Int. Conf. Field Programmable Logic and Applications (FPL), Sep. 2017, pp. 1-8.
[26] J. Mairal, "End-to-end kernel learning with supervised convolutional kernel networks," CoRR, vol. abs/1605.06265, pp. 1-16, Dec. 2016.
[27] A. Coates and A. Y. Ng, "The importance of encoding versus training with sparse coding and vector quantization," in Proc. ICML, Jul. 2011.
[28] T. H. Chan, K. Jia, S. Gao, J. Lu, Z. Zeng, and Y. Ma, "PCANet: A simple deep learning baseline for image classification," IEEE Trans. Image Process., vol. 24, no. 12, pp. 5017-5032, Dec. 2015.
[29] T. Lin and H. T. Kung, "Stable and efficient representation learning with nonnegativity constraints," in Proc. ICML, Jun. 2014, pp. 1323-1331.
[30] C. Lee, S. Xie, P. W. Gallagher, Z. Zhang, and Z. Tu, "Deeply-supervised nets," in Proc. JMLR, Feb. 2015, pp. 562-570.
[31] S. Zagoruyko and N. Komodakis, "Wide residual networks," CoRR, vol. abs/1605.07146, pp. 1-15, Jun. 2016.
[32] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. CVPR, Jul. 2016, pp. 770-778.
[33] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, "Aggregated residual transformations for deep neural networks," Nov. 2016, arXiv:1611.05431.
[34] 郭豐源 and 夏世昌 (F.-Y. Kuo and S.-C. Hsia), "改良VGGNET之類神經網路架構" (An improved VGGNet neural network architecture), in Proc. DLT2020 Digital Life Technology Conference, May 2020, pp. 259-262.
Electronic full text (publicly available online from 2025-08-18)