National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Detailed Record

Student: 廖陳毅 (Liao, Chen-Yi)
Thesis title: 基於半監督式神經網路的車用雷達影像合成技術
Thesis title (English): Synthetic Radar Imaging with Semi-Supervised VAE-GAN
Advisor: 伍紹勳 (Wu, Sau-Hsuan)
Oral defense date: 2021-12-24
Degree: Master's
Institution: National Yang Ming Chiao Tung University (國立陽明交通大學)
Department: Institute of Communications Engineering (電信工程研究所)
Discipline: Engineering
Field: Electrical and computer engineering
Document type: Academic thesis
Year published: 2022
Graduation academic year: 110 (ROC calendar)
Language: English
Pages: 50
Keywords (Chinese): 雷達點雲重建; 壓縮感知; 合成孔徑雷達; 半監督式神經網路; 遷移式學習; 雷達成像
Keywords (English): Radar Point Cloud Reconstruction; Compressive Sensing; Synthetic Aperture Radar; Semi-supervised Neural Network; Transfer Learning; Radar Imaging
Record statistics:
  • Cited by: 0
  • Views: 276
  • Downloads: 14
  • Bookmarked: 0
Abstract:

For autonomous driving, radar is a key sensor that supports lidar and cameras in adverse weather or operating conditions. Despite its superior signal penetration, radar's main drawback is a much lower resolution than lidar or cameras. In view of recent advances in AI-assisted image synthesis, this thesis presents a radar imaging method that combines compressive sensing (CS), synthetic aperture radar (SAR), and a generative adversarial network (GAN) to synthesize high-quality car images from radar point clouds. Specifically, CS and SAR are used to reconstruct radar point clouds that preserve the distinctive characteristics of different types and models of cars, and a variational autoencoder (VAE) serves as the GAN's generator, synthesizing 2D car images from the reconstructed 3D point clouds. On top of this VAE-GAN model, a semi-supervised learning method is proposed that uses the features extracted by the VAE's encoder to predict each car's orientation, reducing orientation errors in the synthesized images. This semi-supervised VAE-GAN not only provides orientation information that is crucial for autonomous driving but also improves the quality of the synthesized images.

Extensive simulations of point cloud reconstruction and image synthesis show that the proposed method outperforms other car image synthesis methods across different car models. To move toward real-world use, transfer learning is introduced to fine-tune the imaging model on real radar data; the fine-tuned generative model successfully produces the corresponding car images from point clouds reconstructed from field-collected radar measurements. Experiments under foggy conditions further demonstrate the robustness of the proposed radar imaging method in adverse environments.
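The record does not spell out the CS reconstruction step, but orthogonal matching pursuit (OMP) is one standard sparse-recovery routine used in compressive radar imaging, and a minimal NumPy sketch of it follows. The measurement matrix, the dimensions, and the sparsity level here are illustrative assumptions, not the thesis's actual configuration.

import numpy as np

def omp(A, y, k):
    # Greedily recover a k-sparse x from y = A @ x (orthogonal matching pursuit).
    residual, support = y.copy(), []
    for _ in range(k):
        # Select the dictionary column most correlated with the residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit all chosen columns by least squares, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

# Toy check: 128 radar-like measurements of a 512-bin, 10-sparse scene.
rng = np.random.default_rng(0)
A = rng.standard_normal((128, 512)) / np.sqrt(128)
x_true = np.zeros(512)
x_true[rng.choice(512, size=10, replace=False)] = rng.standard_normal(10)
print(np.linalg.norm(x_true - omp(A, A @ x_true, k=10)))  # near zero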
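Likewise, the semi-supervised VAE-GAN architecture is only summarized above; a minimal PyTorch sketch of the idea described in the abstract might look as follows. Every layer size, the 32x32x32 voxelized point-cloud input, the 64x64 output image, the 8 orientation bins, and all module and function names are assumptions made for illustration, not the thesis's design.

import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT, N_ORIENT = 128, 8  # latent width and orientation bins (assumed)

class Encoder(nn.Module):
    # 3D conv encoder over a voxelized radar point cloud (1 x 32 x 32 x 32).
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, 4, 2, 1), nn.ReLU(),
            nn.Conv3d(16, 32, 4, 2, 1), nn.ReLU(),
            nn.Flatten())
        self.mu = nn.Linear(32 * 8 * 8 * 8, LATENT)
        self.logvar = nn.Linear(32 * 8 * 8 * 8, LATENT)

    def forward(self, v):
        h = self.conv(v)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    # Maps a latent code to a 1 x 64 x 64 grayscale car image.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Sigmoid())

    def forward(self, z):
        return self.deconv(self.fc(z).view(-1, 64, 8, 8))

discriminator = nn.Sequential(  # real-vs-synthesized image critic
    nn.Conv2d(1, 16, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 32, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

orient_head = nn.Linear(LATENT, N_ORIENT)  # orientation classifier on encoder features

def generator_loss(enc, dec, voxels, images, orient_labels, labeled_mask):
    mu, logvar = enc(voxels)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
    fake = dec(z)
    recon = F.mse_loss(fake, images)  # image reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # VAE prior term
    adv = F.binary_cross_entropy_with_logits(  # generator tries to fool the critic
        discriminator(fake), torch.ones(fake.size(0), 1))
    # Semi-supervised part: orientation loss only where labels (class indices) exist.
    logits = orient_head(mu)
    orient = (F.cross_entropy(logits[labeled_mask], orient_labels[labeled_mask])
              if labeled_mask.any() else torch.tensor(0.0))
    return recon + kl + adv + orient

The labeled_mask captures the semi-supervised setting: the orientation cross-entropy is computed only on samples whose orientation label is known, while the reconstruction, KL, and adversarial terms use every sample.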
1 Introduction . . . 1
2 System Model . . . 4
2.1 Received Signal of FMCW Radar . . . 4
2.2 Digital Received Beamforming . . . 5
2.3 Derivation of the De-chirped Phase Term . . . 6
3 Radar Imaging Processes . . . 8
3.1 Radar Imaging with Compressive Sensing and GAN . . . 8
3.2 Synthetic Radar Imaging . . . 14
3.3 Imaging with Semi-supervised VAE-GAN . . . 16
4 Simulation Settings and Results Comparisons . . . 21
4.1 Dataset . . . 21
4.2 Settings for Received Signal Simulation . . . 22
4.3 3D Point Cloud Reconstruction . . . 22
4.4 Imaging Results Comparison . . . 27
5 Realistic Radar Data Testing . . . 33
5.1 Radar Data Collection . . . 33
5.2 Transfer Learning Overview . . . 35
5.3 Fine-tune the Semi-Supervised VAE-GAN . . . 35
5.4 Imaging Results Presentation . . . 37
6 Conclusion . . . 46
References . . . 47