National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Detailed Record

Author: 劉品和
Author (English): LIU, PIN-HE
Title: Implementation of a Binarized Neural Network on an FPGA Platform for Ground Traversable Area Recognition (FPGA平台實現二值化類神經網路運行地面可行進區域辨識)
Title (English): The Binary Neural Network Ground truth region estimation on FPGA Platform for UGV Robots
Advisor: 宋啟嘉
Advisor (English): SUN, CHI-CHIA
Committee Members: 宋啟嘉、蔡宗漢、許明華、李佩君
Committee Members (English): SUN, CHI-CHIA; TSAI, TSUNG-HAN; SHEU, MING-HWA; LEE, PEI-JUN
Oral Defense Date: 2020-07-30
Degree: Master's
Institution: National Formosa University (國立虎尾科技大學)
Department: Master's Program, Department of Electrical Engineering
Discipline: Engineering
Academic Field: Electrical and Information Engineering
Thesis Type: Academic thesis
Publication Year: 2020
Graduation Academic Year: 108 (2019-2020)
Language: Chinese
Number of Pages: 56
Keywords: BNN, FCN, PYNQ, B-FCN, image recognition, artificial intelligence, traversable area recognition, SoC-FPGA, unmanned ground vehicle
Keywords (English): BNN, FCN, PYNQ, B-FCN, Image recognition, Artificial Intelligence, Area recognition, SoC-FPGA, UGV
Usage statistics:
  • Cited by: 0
  • Views: 153
  • Downloads: 0
  • Bookmarked: 0
Table of Contents:
Abstract (Chinese)..........i
Abstract (English)..........ii
Acknowledgements..........iii
Table of Contents..........iv
List of Tables..........vi
List of Figures..........vii
Glossary of Terms..........ix
Chapter 1. Introduction..........1
1.1 Research Background and Motivation..........1
1.2 Research Objectives and Methods..........2
1.3 Literature Review..........2
1.4 Thesis Organization..........4
Chapter 2. Convolutional Neural Networks..........5
2.1 Convolutional Neural Networks..........5
2.2 Binarized and Quantized Neural Networks..........8
2.2.1 Binarized Neural Networks..........8
2.2.2 Quantized Neural Networks..........9
2.2.3 Data Word-Length Trimming..........10
2.3 Binarized Neural Network Architecture for FPGA..........11
Chapter 3. Ground Region Recognition Methods..........12
3.1 Surface Texture Feature Extraction..........12
3.1.1 Superpixels..........12
3.1.2 Canny Edge Detection..........15
3.2 Fully Convolutional Networks..........17
3.2.1 Semantic Segmentation..........17
3.2.2 Convolutionalization..........18
3.2.3 Upsampling..........18
3.2.4 Results and Architecture..........19
3.3 SegNet and HRNet..........19
3.3.1 Semantic Segmentation Network..........19
3.3.2 High-Resolution Network..........20
3.4 Data Collection and Object Labeling..........21
3.4.1 Data Collection..........21
3.4.2 Object Labeling Methods..........22
3.5 Model Training..........27
3.6 Selecting the Model Best Suited to the FPGA..........28
Chapter 4. FPGA Experimental Platform and Unmanned Ground Vehicle..........29
4.1 System Architecture..........29
4.2 Hardware Architecture..........29
4.2.1 ZCU104..........29
4.2.2 ZCU104 Specifications..........31
4.3 DMA Data Channel and the AXI4 Data Bus..........32
4.3.1 AXI4-Stream..........33
4.3.2 AXI4-Stream Protocol..........33
4.4 PYNQ (Python Productivity for Zynq)..........34
4.5 Unmanned Ground Vehicle..........35
4.5.1 ROS Communication Architecture..........36
4.5.2 Vehicle Specifications..........37
4.5.3 Hardware and Software Support..........38
Chapter 5. Experiments and Results..........39
5.1 System Flow..........39
5.2 System Implementation..........40
5.2.1 Hardware IP Size..........40
5.2.2 Verification..........40
5.2.3 Experimental Results and Predictions..........43
5.3 Hardware Implementation..........45
5.3.1 Testing..........45
5.3.2 Test Results..........46
Chapter 6. Conclusion..........47
References..........48
Extended Abstract..........50


