
National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Detailed Record

Author: 何秉翰 (Bing-Han Ho)
Title: 應用深度學習結合毫米波雷達進行路面障礙偵測
Title (English): Using Deep Learning and mmWave Radar for Road Obstacle Identification
Advisor: 謝禎冏 (Chen-chiung Hsieh)
Committee Member: 謝禎冏 (Chen-chiung Hsieh)
Oral Defense Date: 2019-07-18
Degree: Master's
Institution: Tatung University (大同大學)
Department: Department of Computer Science and Engineering (資訊工程學系)
Discipline: Engineering
Academic Field: Electrical and Information Engineering
Thesis Type: Academic thesis
Publication Year: 2019
Graduation Academic Year: 107 (2018–2019)
Language: Chinese
Pages: 64
Keywords (Chinese): 毫米波雷達, 影像辨識, 卷積神經網路, 深度學習, 自動駕駛
Keywords (English): millimeter-wave radar, object recognition, CNN, deep learning, smart self-driving car
Statistics:
  • Cited by: 2
  • Views: 1169
  • Downloads: 238
  • Bookmarked: 1
A smart self-driving car, also known as a driverless car, is an unmanned powered ground vehicle. As an automated vehicle, it can sense its environment and navigate without a human operator. For autonomous driving, detecting and tracking moving pedestrians and vehicles is a basic requirement. Millimeter-wave radar and cameras are currently the main sensors on smart cars. Millimeter-wave radar offers high range resolution but cannot identify the class of the objects it detects. Cameras, by contrast, can classify objects effectively, but they are sensitive to nearby objects, lighting changes, and complex outdoor environments.
Deep learning has attracted growing attention from researchers and has demonstrated strong capabilities in image recognition, speech recognition, object detection, and other applications. The goal of this study is to combine millimeter-wave radar with deep-learning image recognition to improve the effectiveness of object detection for autonomous driving and thereby the safety of self-driving vehicles. Building on deep-learning object detection, the study examines the feasibility of fusing millimeter-wave radar with deep learning for moving-target detection. Five network architectures (YOLOv2, YOLOv3, YOLOv3-tiny, Faster R-CNN, and Mask R-CNN) were evaluated for road obstacle detection; the most suitable model was selected and combined with the radar output in a preliminary information-fusion test. As a first step toward fusing multi-sensor data with deep learning, the three-dimensional coordinates of real objects were transformed through the camera coordinate system onto the two-dimensional image plane and then merged with the image recognition results, yielding object recognition with accurate real-world position and velocity.
Two scenarios were tested: one or two people walking in the laboratory, and vehicle detection on an ordinary outdoor road. The combined camera and millimeter-wave radar system detected people and tracked their distance changes at each position. In the outdoor road test, image recognition reached 80% accuracy and reliably identified stationary objects that radar detects poorly, as well as objects outside the radar's coverage. Within the radar's coverage, fusing the radar data so that an object's class and position are displayed together succeeded more than 70% of the time. The detection accuracy for different object types was analyzed, demonstrating the feasibility of the method.
A smart self-driving car, also known as a driverless car, is an unmanned powered ground vehicle. As an automated vehicle, it can sense its environment and navigate without human intervention. For autonomous driving, detecting and tracking moving pedestrians or vehicles is a basic requirement. Millimeter-wave radar and cameras are currently the main sensors for smart cars. Millimeter-wave radar has high range resolution but cannot identify the class of the objects it detects. Unlike radar, cameras can classify objects effectively, but they are sensitive to nearby objects, lighting changes, and complex outdoor environments.
Because deep learning is valued by more and more researchers and displays powerful capabilities in image recognition, speech recognition, object detection, and other applications, this study combines millimeter-wave radar with deep-learning image recognition to improve the effectiveness of object detection for autonomous driving and thereby the safety of self-driving vehicles. The main purpose is to establish the feasibility of combining millimeter-wave radar and deep learning for moving-target detection. Five deep-learning neural network architectures (YOLOv2, YOLOv3, YOLOv3-tiny, Faster R-CNN, and Mask R-CNN) were tested for road obstacle detection; the most suitable model was selected and combined with the millimeter-wave radar output for a preliminary information-integration test. The three-dimensional coordinates of the actual objects were successfully transformed through the camera coordinate system into two-dimensional image-plane coordinates, as the first step in fusing multi-sensor data with deep learning; these projected coordinates were then combined with the image recognition results to achieve object recognition with accurate real-world position and velocity.
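The coordinate transfer described above can be sketched with a standard pinhole camera model. The intrinsic matrix `K` and the radar-to-camera extrinsics `R`, `t` below are illustrative assumptions for a hypothetical 640x480 camera, not the calibration values from the thesis:

```python
import numpy as np

# Illustrative intrinsics for a 640x480 camera (assumed values, not the
# thesis' actual calibration): focal length 600 px, principal point (320, 240).
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed radar-to-camera extrinsics: axes aligned, radar mounted 10 cm
# above the camera.
R = np.eye(3)
t = np.array([0.0, 0.1, 0.0])

def radar_to_pixel(point_radar):
    """Project a 3-D radar point (metres, radar frame) to 2-D pixel coords."""
    p_cam = R @ np.asarray(point_radar, dtype=float) + t  # radar -> camera frame
    u, v, w = K @ p_cam                                   # perspective projection
    return float(u / w), float(v / w)                     # homogeneous -> pixel

# A target 1 m to the right of and 5 m in front of the sensor pair.
print(radar_to_pixel([1.0, 0.0, 5.0]))  # -> (440.0, 252.0)
```

Once a radar target has a pixel coordinate, its range and radial velocity can be attached to whichever image detection it falls inside.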
This study tested two scenes: one or two people walking in the laboratory, and outdoor road vehicle detection. The combination of camera and millimeter-wave radar reliably detected people and their distance changes at various positions. In the road test, image recognition reached 80% accuracy and identified well both the stationary objects that radar does not easily detect and objects outside the radar's coverage. Within the radar's detection range, combining the radar data so that an object's category and location are displayed together achieved a success rate above 70%. The detection accuracy for different objects was analyzed, demonstrating the feasibility of the method.
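A minimal sketch of how the category-plus-location combination might be realized: attach each projected radar target's range and velocity to the image detection whose bounding box contains it. The function names, tuple layouts, and numeric values here are illustrative assumptions, not the thesis' actual data structures:

```python
def point_in_box(u, v, box):
    """box = (x_min, y_min, x_max, y_max) in pixel coordinates."""
    x0, y0, x1, y1 = box
    return x0 <= u <= x1 and y0 <= v <= y1

def fuse(radar_points, detections):
    """radar_points: (u, v, range_m, velocity_mps) tuples already projected
    onto the image plane; detections: (label, box) tuples from a detector.
    Returns one (label, box, (range, velocity) or None) entry per detection."""
    fused = []
    for label, box in detections:
        match = next(((r, vel) for (u, v, r, vel) in radar_points
                      if point_in_box(u, v, box)), None)
        fused.append((label, box, match))
    return fused

dets = [("person", (400, 200, 480, 400)), ("car", (0, 0, 100, 100))]
radar = [(440.0, 252.0, 5.1, -0.3)]  # one target, 5.1 m away, approaching
print(fuse(radar, dets))
```

Detections with no radar match (stationary or out-of-coverage objects, as noted above) keep their image-only information, which mirrors the behaviour the abstract reports.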
Acknowledgements i
Abstract (Chinese) iii
ABSTRACT v
Contents vii
List of Figures ix
List of Tables xi
Chapter 1 Introduction 1
1.1 Research Objectives 2
1.2 Research Motivation 6
Chapter 2 Literature Review 7
2.1 Machine Learning Techniques 9
2.2 Millimeter-Wave Radar Technology 12
2.3 Object Detection Techniques 17
2.3.1 YOLO Object Detection 17
2.3.2 R-CNN Object Detection 21
Chapter 3 Methodology: Information Fusion 25
3.1 System Architecture 26
3.2 Millimeter-Wave Radar Object Detection 27
3.3 Image Input and Object Recognition 29
3.4 Radar Coordinate Transformation and Image Object Tracking 31
3.5 Information Fusion 33
Chapter 4 Experimental Results and Analysis 35
4.1 Experimental Environment 35
4.2 Experimental Results 39
4.2.1 Object Recognition and Tracking 39
4.2.2 Cross-Comparison and Information Fusion 43
Chapter 5 Conclusion and Future Work 59
5.1 Conclusion 59
5.2 Future Work 60
References 61