
National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: HSIEH, FENG-AN (謝豐安)
Title (Chinese): 基於行車影像之端到端道路代理人行為分類系統
Title (English): End-To-End System For Road Agents Behavior Classification Based On Dash Cam Image
Advisor: LIN, HUEI-YUNG (林惠勇)
Committee Members: LAI, SHANG-HONG; LIEN, JENN-JIER; LIN, WEN-CHIEH; WANG, CHIEH-CHIH; LIN, HUEI-YUNG
Oral Defense Date: 2022-07-15
Degree: Master's
Institution: National Chung Cheng University
Department: Graduate Institute of Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2022
Graduation Academic Year: 110 (2021–2022)
Language: Chinese
Pages: 75
Keywords (Chinese): 多物件追蹤、軌跡預測、行為分類、超車輔助
Keywords (English): Multiple Object Tracking; Trajectory Prediction; Behavior Classification; Overtaking Assistance
Times Cited: 0 | Views: 165 | Downloads: 0
Acknowledgements i
Abstract (Chinese) ii
Abstract iii
1 Introduction 1
1.1 Motivation 1
1.2 Main Contributions 3
2 Related Work 4
2.1 Multiple Object Tracking (MOT) 4
2.2 Trajectory Prediction 6
2.3 Behavior Classification 8
2.3.1 Externally Observable Driving Behavior 8
2.4 Overtaking Assistance 10
2.4.1 Object Detection 10
2.4.2 Lane Detection 11
2.4.3 Road Surface Detection 12
2.5 System Integration 13
3 Methodology 14
3.1 System Pipeline Design 14
3.2 Multiple Object Tracking 17
3.3 Trajectory Prediction 20
3.4 Behavior Classification 24
3.4.1 GraphRQI 24
3.4.2 Improvements 25
3.5 Overtaking Assistance 30
3.5.1 Object Detection 30
3.5.2 Lane Detection 30
3.5.3 Road Surface Detection 31
3.5.4 Overtaking Assistance Decision 31
4 Experiments and Results 35
4.1 Experimental Environment 35
4.1.1 Development Environment 35
4.1.2 System Interface 36
4.2 Dataset Selection and Annotation 39
4.2.1 Public Datasets 39
4.2.2 Self-Collected Dataset 41
4.3 System Performance Evaluation 42
4.4 Overtaking Assistance Module Evaluation 45
4.5 Behavior Classification Module Evaluation 46
4.5.1 Comparison of Behavior Classification Methods 47
4.5.2 GraphRQI Algorithm Tests 47
4.5.3 Comparison of ID Extraction Methods 48
4.5.4 Classifier Comparison 51
4.5.5 Ablation Study 52
4.6 Trajectory Prediction Module Evaluation 54
4.6.1 Actor Network Architecture Comparison 55
4.6.2 Comparison of DDPG with Other Trajectory Prediction Models 57
4.7 Multiple Object Tracking Module Evaluation 58
5 Conclusion and Future Work 59
5.1 Conclusion 59
5.2 Future Work 59
References 61

[1] C. Badue, R. Guidolini, R. V. Carneiro, P. Azevedo, V. B. Cardoso, A. Forechi, L. Jesus, R. Berriel, T. M. Paixao, F. Mutz et al., "Self-driving cars: A survey," Expert Systems with Applications, vol. 165, p. 113816, 2021.
[2] W. Schwarting, J. Alonso-Mora, and D. Rus, "Planning and decision-making for autonomous vehicles," Annual Review of Control, Robotics, and Autonomous Systems, vol. 1, pp. 187–210, 2018.
[3] R. Hoogendoorn, B. van Arem, and S. Hoogendoorn, "Automated driving, traffic flow efficiency, and human factors: Literature review," Transportation Research Record, vol. 2422, no. 1, pp. 113–120, 2014.
[4] R. Yoshizawa, Y. Shiomi, N. Uno, K. Iida, and M. Yamaguchi, "Analysis of car-following behavior on sag and curve sections at intercity expressways with driving simulator," International Journal of Intelligent Transportation Systems Research, vol. 10, no. 2, pp. 56–65, 2012.
[5] R. Chandra, U. Bhattacharya, T. Mittal, A. Bera, and D. Manocha, "CMetric: A driving behavior measure using centrality functions," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020, pp. 2035–2042.
[6] F. Yu, W. Xian, Y. Chen, F. Liu, M. Liao, V. Madhavan, and T. Darrell, "BDD100K: A diverse driving video database with scalable annotation tooling," arXiv preprint arXiv:1805.04687, vol. 2, no. 5, p. 6, 2018.
[7] N. Wojke, A. Bewley, and D. Paulus, "Simple online and realtime tracking with a deep association metric," in 2017 IEEE International Conference on Image Processing (ICIP). IEEE, 2017, pp. 3645–3649.
[8] Z. Wang, L. Zheng, Y. Liu, Y. Li, and S. Wang, "Towards real-time multi-object tracking," in European Conference on Computer Vision. Springer, 2020, pp. 107–122.
[9] R. Chandra, U. Bhattacharya, T. Randhavane, A. Bera, and D. Manocha, "RoadTrack: Realtime tracking of road agents in dense and heterogeneous environments," in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 1270–1277.
[10] A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese, "Social LSTM: Human trajectory prediction in crowded spaces," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 961–971.
[11] A. Vemula, K. Muelling, and J. Oh, "Social attention: Modeling attention in human crowds," in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 4601–4607.
[12] R. Chandra, U. Bhattacharya, T. Mittal, X. Li, A. Bera, and D. Manocha, "GraphRQI: Classifying driver behaviors using graph spectrums," in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 4350–4357.
[13] R. Chandra, U. Bhattacharya, A. Bera, and D. Manocha, "TraPHic: Trajectory prediction in dense and heterogeneous traffic using weighted interactions," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
[14] M. Ye, J. Shen, G. Lin, T. Xiang, L. Shao, and S. C. Hoi, "Deep learning for person re-identification: A survey and outlook," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[15] A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, "Simple online and realtime tracking," in 2016 IEEE International Conference on Image Processing (ICIP). IEEE, 2016, pp. 3464–3468.
[16] R. E. Kalman, "A new approach to linear filtering and prediction problems," Journal of Basic Engineering, vol. 82, no. 1, pp. 35–45, 1960.
[17] H. W. Kuhn, "The Hungarian method for the assignment problem," Naval Research Logistics Quarterly, vol. 2, no. 1–2, pp. 83–97, 1955.
[18] Y. Zhang, C. Wang, X. Wang, W. Zeng, and W. Liu, "FairMOT: On the fairness of detection and re-identification in multiple object tracking," International Journal of Computer Vision, vol. 129, no. 11, pp. 3069–3087, 2021.
[19] S. Malla, B. Dariush, and C. Choi, "TITAN: Future forecast using action priors," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11186–11196.
[20] T. Fernando, S. Denman, S. Sridharan, and C. Fookes, "Deep inverse reinforcement learning for behavior prediction in autonomous driving: Accurate forecasts of vehicle motion," IEEE Signal Processing Magazine, vol. 38, no. 1, pp. 87–96, 2020.
[21] A. Kuefler, J. Morton, T. Wheeler, and M. Kochenderfer, "Imitating driver behavior with generative adversarial networks," in 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2017, pp. 204–211.
[22] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580–587.
[23] R. Girshick, "Fast R-CNN," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1440–1448.
[24] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," Advances in Neural Information Processing Systems, vol. 28, 2015.
[25] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "SSD: Single shot multibox detector," in European Conference on Computer Vision. Springer, 2016, pp. 21–37.
[26] P. V. Hough, "Method and means for recognizing complex patterns," US Patent 3,069,654, Dec. 18, 1962.
[27] D. Neven, B. De Brabandere, S. Georgoulis, M. Proesmans, and L. Van Gool, "Towards end-to-end lane detection: An instance segmentation approach," in 2018 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2018, pp. 286–291.
[28] K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask R-CNN," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2961–2969.
[29] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, "Pyramid scene parsing network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2881–2890.
[30] Y. Zhang, P. Sun, Y. Jiang, D. Yu, Z. Yuan, P. Luo, W. Liu, and X. Wang, "ByteTrack: Multi-object tracking by associating every detection box," arXiv preprint arXiv:2110.06864, 2021.
[31] Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun, "YOLOX: Exceeding YOLO series in 2021," arXiv preprint arXiv:2107.08430, 2021.
[32] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, "Continuous control with deep reinforcement learning," arXiv preprint arXiv:1509.02971, 2015.
[33] H. Van Hasselt, A. Guez, and D. Silver, "Deep reinforcement learning with double Q-learning," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 30, no. 1, 2016.
[34] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," Advances in Neural Information Processing Systems, vol. 27, 2014.
[35] S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer, "Scheduled sampling for sequence prediction with recurrent neural networks," Advances in Neural Information Processing Systems, vol. 28, 2015.
[36] M.-F. Chang, J. Lambert, P. Sangkloy, J. Singh, S. Bak, A. Hartnett, D. Wang, P. Carr, S. Lucey, D. Ramanan et al., "Argoverse: 3D tracking and forecasting with rich maps," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 8748–8757.
[37] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, "SMOTE: Synthetic minority over-sampling technique," Journal of Artificial Intelligence Research, vol. 16, pp. 321–357, 2002.
[38] B. Schölkopf, R. C. Williamson, A. Smola, J. Shawe-Taylor, and J. Platt, "Support vector method for novelty detection," Advances in Neural Information Processing Systems, vol. 12, 1999.
[39] Ultralytics, "YOLOv5," https://github.com/ultralytics/yolov5, 2020.
[40] Y. Ko, Y. Lee, S. Azam, F. Munir, M. Jeon, and W. Pedrycz, "Key points estimation and point instance segmentation approach for lane detection," IEEE Transactions on Intelligent Transportation Systems, 2021.
[41] D. Bolya, C. Zhou, F. Xiao, and Y. J. Lee, "YOLACT: Real-time instance segmentation," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 9157–9166.
[42] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? The KITTI vision benchmark suite," in 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012, pp. 3354–3361.
[43] X. Chen, J. Wei, X. Ren, K. H. Johansson, and X. Wang, "Automatic overtaking on two-way roads with vehicle interactions based on proximal policy optimization," in 2021 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2021, pp. 1057–1064.
[44] H. Xiao, C. Wang, Z. Li, R. Wang, C. Bo, M. A. Sotelo, and Y. Xu, "UB-LSTM: A trajectory prediction method combined with vehicle behavior recognition," Journal of Advanced Transportation, vol. 2020, 2020.
[45] T. Chen and C. Guestrin, "XGBoost: A scalable tree boosting system," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 785–794.
[46] C. Wang, C. Deng, and S. Wang, "Imbalance-XGBoost: Leveraging weighted and focal losses for binary label-imbalanced classification with XGBoost," Pattern Recognition Letters, vol. 136, pp. 190–197, 2020.
[47] F. Giuliari, I. Hasan, M. Cristani, and F. Galasso, "Transformer networks for trajectory forecasting," in 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021, pp. 10335–10342.

Electronic Full Text: available online from 2027-08-11