Author: 李俊德
Author (English): LI, CUHN-TE
Title (Chinese): 應用於車用平台實現多網路融合之行人偵測系統
Title (English): Pedestrian Detection System with Multi-Network Convergence Applied in Vehicle Platform
Advisor: 蘇慶龍
Advisor (English): SU, CHING-LUNG
Committee members: 張添烜、賴槿峰、蘇慶龍
Committee members (English): CHANG, TIAN-SHEUAN; LAI, CHIN-FENG; SU, CHING-LUNG
Oral defense date: 2020-07-30
Degree: Master's
Institution: National Yunlin University of Science and Technology (國立雲林科技大學)
Department: Department of Electronic Engineering
Discipline: Engineering
Field of study: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2020
Academic year of graduation: 108
Language: Chinese
Number of pages: 194
Keywords (Chinese): 深度學習; 行人偵測; 車用嵌入式系統
Keywords (English): deep learning; pedestrian detection; embedded system for vehicles
Record statistics:
  • Times cited: 0
  • Views: 38
  • Downloads: 0
  • Bookmarked: 0
In recent years, knowledge about deep learning has become increasingly widespread, and the resources available online continue to grow. Many architectures have released their code, such as VGG-Net, SSD-Net, ResNet, and YOLO, and many tools assist with training, such as Caffe (now hosted on GitHub), TensorFlow developed by Google, and Torch promoted by Facebook. These frameworks make deep-learning architectures easier to design and use, and different frameworks offer different acceleration features for different network architectures. Because training a deep network requires a large amount of computation, training is almost always performed on a PC, while inference runs either on a PC or on platforms that support GPU computing, such as the TX2. Automotive embedded systems, however, cannot be fitted with a GPU because of cost, heat dissipation, vibration, and related constraints, so deep learning is rarely developed on automotive embedded systems.

Pedestrian detection systems on the market fall into three categories. The first uses a single camera with image processing; its drawback is that the algorithm is difficult to develop, since the hand-crafted features that humans can identify are limited, so accuracy fluctuates with the environment. The second combines a camera with other sensors, such as radar or LiDAR, using the distance information returned by the radar together with the camera to determine a pedestrian's position and distance; this is the approach most vehicle manufacturers adopt, but its cost is considerably higher, and many manufacturers install more than one type of sensor to improve accuracy. The third combines a camera with machine learning, such as AdaBoost, support vector machines (SVM), or convolutional neural networks (CNN); its drawbacks are that a large number of training samples is required and that the computation and parameter counts are too large for an automotive embedded system.

This thesis proposes integrating the different optimization techniques used by networks of different architectures to reduce the amount of computation and the number of parameters the network requires, so that inference can run on the CPU of an automotive embedded system alone while maintaining an acceptable level of accuracy.
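As context for the reduction claimed above: the table of contents lists depthwise separable convolution (Sections 3.4.1 and 4.3.1) among the optimizations the thesis integrates. The sketch below is not taken from the thesis; the function names and layer shapes are assumptions chosen purely to illustrate the standard parameter and multiply-accumulate (MAC) comparison between an ordinary convolution and its depthwise separable counterpart.

# Illustrative comparison (Python): a standard 3x3 convolution versus a
# depthwise separable convolution (depthwise 3x3 followed by pointwise 1x1).
# All layer shapes below are assumptions for illustration only; they are not
# the layer sizes used in the thesis.

def standard_conv_cost(h, w, c_in, c_out, k=3):
    """Parameters and multiply-accumulates (MACs) of a standard k x k convolution."""
    params = k * k * c_in * c_out
    macs = params * h * w  # each output pixel of each output channel needs a k*k*c_in dot product
    return params, macs

def depthwise_separable_cost(h, w, c_in, c_out, k=3):
    """Parameters and MACs of a depthwise k x k convolution plus a pointwise 1x1 convolution."""
    dw_params = k * k * c_in   # one k x k filter per input channel
    pw_params = c_in * c_out   # 1x1 convolution that mixes channels
    params = dw_params + pw_params
    macs = params * h * w
    return params, macs

if __name__ == "__main__":
    h, w, c_in, c_out = 56, 56, 64, 128  # assumed feature-map size and channel counts
    std_params, std_macs = standard_conv_cost(h, w, c_in, c_out)
    sep_params, sep_macs = depthwise_separable_cost(h, w, c_in, c_out)
    print(f"standard  conv: {std_params:,} params, {std_macs:,} MACs")
    print(f"separable conv: {sep_params:,} params, {sep_macs:,} MACs")
    print(f"reduction: {std_macs / sep_macs:.1f}x fewer MACs")

For 3x3 kernels the cost ratio (separable vs. standard) works out to roughly 1/C_out + 1/9, i.e. about an 8x to 9x reduction for the shapes assumed above, which is the kind of saving that makes CPU-only inference on an automotive platform plausible.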
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iv
Table of Contents v
List of Tables viii
List of Figures ix
Chapter 1  Introduction 1
1.1 Motivation 1
1.2 Research Direction 2
1.3 Thesis Organization 3
Chapter 2  Background and Related Knowledge 4
2.1 Overview of Deep Learning 4
2.1.1 Development of Deep Learning 4
2.1.2 Applications of Deep Learning 6
2.2 Overview of Pedestrian Detection Systems 7
2.2.1 Image Processing 7
2.2.2 LiDAR and Radar 8
2.2.3 Deep Learning 9
Chapter 3  Implementation of a Pedestrian Detection System with Multi-Network Convergence 14
3.1 Flowchart of the Convolutional Network 14
3.2 Functions of Each Layer 16
3.2.1 Convolution Layer 16
3.2.2 Activation Layer 19
3.2.3 Pooling Layer 21
3.2.4 Softmax 23
3.2.5 Normalization 23
3.3 Candidate-Box Filtering and Decision 25
3.3.1 Candidate Boxes 25
3.3.2 Adaptive Adjustment of Candidate Boxes 26
3.4 Optimization of Convolution Computation 27
3.4.1 Depthwise Separable Convolution 27
3.4.2 Compensation Using Convolution Shifts 29
3.4.3 Overview of NEON 30
Chapter 4  Implementation of the Architecture on a Vehicle Platform 32
4.1 Convolutional Neural Network Architecture 32
4.2 Training Method for the Convolutional Neural Network 32
4.3 Architecture Modification and Computation Optimization 38
4.3.1 Applying Depthwise Separable Convolution 39
4.3.2 Application of the Fire Module 41
4.3.3 Compensation Using Convolution Shifts 44
4.3.4 Application of Channel Concatenation 55
4.3.5 Accelerating Convolution with NEON 57
4.4 Braking-Distance Calculation and Distance Measurement 58
4.4.1 Braking-Distance Calculation 59
4.4.2 Distance Measurement 61
Chapter 5  Implementation Results 63
5.1 Hardware Overview 63
5.2 Embedded System Overview 64
5.3 In-Vehicle Installation and Configuration 64
5.4 System Simulation Results 65
5.5 Architecture Comparison 67
Chapter 6  Conclusion and Future Work 72
References 73
Appendix 78

References
[1] Department of Statistics, Ministry of the Interior, Weekly Statistical Bulletin, Week 30 of 2018 (ROC year 107).
URL: https://www.moi.gov.tw/stat/node.aspx?cate_sn=-1&belong_sn=7460&sn=7712.html
[2] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 7263-7271. DOI: 10.1109/CVPR.2017.690
[3] YOLO v3-tiny source code.
Available from https://github.com/AlexeyAB/darknet
[4] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, "MobileNets: Efficient convolutional neural networks for mobile vision applications," arXiv:1704.04861, 2017.
[5] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in CVPR, 2016, pp. 770-778.
[6] K. He, X. Zhang, S. Ren, and J. Sun, "Identity mappings in deep residual networks," in ECCV, 2016.
[7] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks," in CVPR, 2018.
[8] X. Zhang, X. Zhou, M. Lin, and J. Sun, "ShuffleNet: An extremely efficient convolutional neural network for mobile devices," in CVPR, 2018.
[9] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, "ShuffleNet V2: Practical guidelines for efficient CNN architecture design," European Conference on Computer Vision (ECCV), 2018, pp. 116-131.
[10] A. Bordes, L. Bottou, and P. Gallinari, "SGD-QN: Careful quasi-Newton stochastic gradient descent," Journal of Machine Learning Research 10, pp. 1737-1754, 2009.
[11] V. Sze, Y.-H. Chen, T.-J. Yang, and J. Emer, "Efficient processing of deep neural networks: A tutorial and survey," arXiv preprint arXiv:1703.09039, 2017.
[12] A. Krizhevsky, I. Sutskever, and G. Hinton, "ImageNet classification with deep convolutional neural networks," in NIPS, 2012.
[13] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv:1409.1556, 2014.
[14] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, 2015.
[15] D. Cristinacce and T. Cootes, "Facial Feature Detection Using AdaBoost With Shape Constraints," British Machine Vision Conference, 2003.
[16] The Star Online: Apple proposed a smarter Siri at WWDC developer conference.
URL: https://www.thestar.com.my/tech/tech-news/2018/06/09/apple-proposes-a-smarter-siri-at-wwdc-developer-conference/#qKiLzfb0ALQ9CxLM.99
[17] MakeUseOf: What Is Google Assistant and How to Use It, by Ben Stegner, March 23, 2018.
URL: https://www.makeuseof.com/tag/what-is-google-assistant/
[18] TechOrange (科技報橘): New progress for Google's medical AI: precise detection of cancer cell metastasis with accuracy as high as 99%.
URL: https://buzzorange.com/techorange/2018/12/07/google-ai-cancer-research/
[19] u-car: Safety first: experiencing the safety technology of Volvo's new-generation S60.
URL: https://news.u-car.com.tw/article/13576/%E5%AE%89%E5%85%A8%E7%82%BA%E5%85%88%EF%BC%8CVolvo%E6%96%B0%E4%B8%96%E4%BB%A3S60%E5%AE%89%E5%85%A8%E7%A7%91%E6%8A%80%E9%AB%94%E9%A9%97
[20] Carnews (車訊網): Toward zero casualties: the pedestrian-detecting full automatic braking of the Volvo New S60.
URL: https://carnews.com/article/info/200a6fe5-4b03-11e8-8ee2-42010af00004/
[21] S. Ren, K. He, R. B. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," CoRR, vol. abs/1506.01497, 2015.
[22] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," arXiv preprint arXiv:1506.02640, 2015.
[23] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," CoRR, 2018.
[24] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, 86(11):2278-2324, 1998.
[25] Wikipedia: Activation function.
URL: https://en.wikipedia.org/wiki/Activation_function
[26] 莫煩PYTHON (Morvan Python): Batch Normalization.
URL: https://morvanzhou.github.io/tutorials/machine-learning/ML-intro/3-08-batch-normalization/
[27] C. L. Zitnick and P. Dollár, "Edge boxes: Locating object proposals from edges," in European Conference on Computer Vision (ECCV), 2014.
[28] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results," 2007.
[29] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common Objects in Context," in European Conference on Computer Vision (ECCV), 2014.
[30] NEON Programmer's Guide, Arm.
URL: https://static.docs.arm.com/den0018/a/DEN0018A_neon_programmers_guide_en.pdf
[31] F. N. Iandola, M. W. Moskewicz, K. Ashraf, S. Han, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size," arXiv preprint arXiv:1602.07360, 2016.
[32] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, "Squeeze-and-Excitation Networks," IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7132-7141, 2018.
[33] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks," CVPR, 2018.
[34] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "YOLOv4: Optimal Speed and Accuracy of Object Detection," arXiv preprint arXiv:2004.10934, 2020.
[35] D. P. Kingma and J. L. Ba, "Adam: A Method for Stochastic Optimization," ICLR, 2015.
[36] S. Ruder, "An overview of gradient descent optimization algorithms," arXiv preprint arXiv:1609.04747v2, 2017.
[37] YOLO v4-tiny source code.
Available from https://github.com/AlexeyAB/darknet
[38] ncnn source code.
Available from https://github.com/xiangweizeng/darknet2ncnn
[39] Institute of Transportation, Ministry of Transportation and Communications, official letter 運安字 No. 900002569, dated 2001-04-24 (ROC calendar 90.04.24).
URL: https://reurl.cc/gmvN8V
[40] Renesas R-Car H3 specifications.
URL: https://www.renesas.com/tw/zh/solutions/automotive/soc/r-car-h3.html

Electronic full text (online release date: 2025-08-15)