[1] C. Badue, R. Guidolini, R. V. Carneiro, P. Azevedo, V. B. Cardoso, A. Forechi, L. Jesus, R. Berriel, T. M. Paixao, F. Mutz et al., "Self-driving cars: A survey," Expert Systems with Applications, vol. 165, p. 113816, 2021.
[2] W. Schwarting, J. Alonso-Mora, and D. Rus, "Planning and decision-making for autonomous vehicles," Annual Review of Control, Robotics, and Autonomous Systems, vol. 1, pp. 187–210, 2018.
[3] R. Hoogendoorn, B. van Arem, and S. Hoogendoorn, "Automated driving, traffic flow efficiency, and human factors: Literature review," Transportation Research Record, vol. 2422, no. 1, pp. 113–120, 2014.
[4] R. Yoshizawa, Y. Shiomi, N. Uno, K. Iida, and M. Yamaguchi, "Analysis of car-following behavior on sag and curve sections at intercity expressways with driving simulator," International Journal of Intelligent Transportation Systems Research, vol. 10, no. 2, pp. 56–65, 2012.
[5] R. Chandra, U. Bhattacharya, T. Mittal, A. Bera, and D. Manocha, "Cmetric: A driving behavior measure using centrality functions," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020, pp. 2035–2042.
[6] F. Yu, W. Xian, Y. Chen, F. Liu, M. Liao, V. Madhavan, and T. Darrell, "Bdd100k: A diverse driving video database with scalable annotation tooling," arXiv preprint arXiv:1805.04687, vol. 2, no. 5, p. 6, 2018.
[7] N. Wojke, A. Bewley, and D. Paulus, "Simple online and realtime tracking with a deep association metric," in 2017 IEEE International Conference on Image Processing (ICIP). IEEE, 2017, pp. 3645–3649.
[8] Z. Wang, L. Zheng, Y. Liu, Y. Li, and S. Wang, "Towards real-time multi-object tracking," in European Conference on Computer Vision. Springer, 2020, pp. 107–122.
[9] R. Chandra, U. Bhattacharya, T. Randhavane, A. Bera, and D. Manocha, "Roadtrack: Realtime tracking of road agents in dense and heterogeneous environments," in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 1270–1277.
[10] A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese, "Social lstm: Human trajectory prediction in crowded spaces," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 961–971.
[11] A. Vemula, K. Muelling, and J. Oh, "Social attention: Modeling attention in human crowds," in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 4601–4607.
[12] R. Chandra, U. Bhattacharya, T. Mittal, X. Li, A. Bera, and D. Manocha, "Graphrqi: Classifying driver behaviors using graph spectrums," in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 4350–4357.
[13] R. Chandra, U. Bhattacharya, A. Bera, and D. Manocha, "Traphic: Trajectory prediction in dense and heterogeneous traffic using weighted interactions," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
[14] M. Ye, J. Shen, G. Lin, T. Xiang, L. Shao, and S. C. Hoi, "Deep learning for person re-identification: A survey and outlook," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[15] A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, "Simple online and realtime tracking," in 2016 IEEE International Conference on Image Processing (ICIP). IEEE, 2016, pp. 3464–3468.
[16] R. E. Kalman, "A new approach to linear filtering and prediction problems," 1960.
[17] H. W. Kuhn, "The hungarian method for the assignment problem," Naval Research Logistics Quarterly, vol. 2, no. 1-2, pp. 83–97, 1955.
[18] Y. Zhang, C. Wang, X. Wang, W. Zeng, and W. Liu, "Fairmot: On the fairness of detection and re-identification in multiple object tracking," International Journal of Computer Vision, vol. 129, no. 11, pp. 3069–3087, 2021.
[19] S. Malla, B. Dariush, and C. Choi, "Titan: Future forecast using action priors," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11186–11196.
[20] T. Fernando, S. Denman, S. Sridharan, and C. Fookes, "Deep inverse reinforcement learning for behavior prediction in autonomous driving: Accurate forecasts of vehicle motion," IEEE Signal Processing Magazine, vol. 38, no. 1, pp. 87–96, 2020.
[21] A. Kuefler, J. Morton, T. Wheeler, and M. Kochenderfer, "Imitating driver behavior with generative adversarial networks," in 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2017, pp. 204–211.
[22] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580–587.
[23] R. Girshick, "Fast r-cnn," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1440–1448.
[24] S. Ren, K. He, R. Girshick, and J. Sun, "Faster r-cnn: Towards real-time object detection with region proposal networks," Advances in Neural Information Processing Systems, vol. 28, 2015.
[25] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "Ssd: Single shot multibox detector," in European Conference on Computer Vision. Springer, 2016, pp. 21–37.
[26] P. V. Hough, "Method and means for recognizing complex patterns," Dec. 18, 1962, U.S. Patent 3,069,654.
[27] D. Neven, B. De Brabandere, S. Georgoulis, M. Proesmans, and L. Van Gool, "Towards end-to-end lane detection: an instance segmentation approach," in 2018 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2018, pp. 286–291.
[28] K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask r-cnn," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2961–2969.
[29] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, "Pyramid scene parsing network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2881–2890.
[30] Y. Zhang, P. Sun, Y. Jiang, D. Yu, Z. Yuan, P. Luo, W. Liu, and X. Wang, "Bytetrack: Multi-object tracking by associating every detection box," arXiv preprint arXiv:2110.06864, 2021.
[31] Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun, "Yolox: Exceeding yolo series in 2021," arXiv preprint arXiv:2107.08430, 2021.
[32] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, "Continuous control with deep reinforcement learning," arXiv preprint arXiv:1509.02971, 2015.
[33] H. Van Hasselt, A. Guez, and D. Silver, "Deep reinforcement learning with double q-learning," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 30, no. 1, 2016.
[34] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," Advances in Neural Information Processing Systems, vol. 27, 2014.
[35] S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer, "Scheduled sampling for sequence prediction with recurrent neural networks," Advances in Neural Information Processing Systems, vol. 28, 2015.
[36] M.-F. Chang, J. Lambert, P. Sangkloy, J. Singh, S. Bak, A. Hartnett, D. Wang, P. Carr, S. Lucey, D. Ramanan et al., "Argoverse: 3d tracking and forecasting with rich maps," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 8748–8757.
[37] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, "Smote: synthetic minority over-sampling technique," Journal of Artificial Intelligence Research, vol. 16, pp. 321–357, 2002.
[38] B. Schölkopf, R. C. Williamson, A. Smola, J. Shawe-Taylor, and J. Platt, "Support vector method for novelty detection," Advances in Neural Information Processing Systems, vol. 12, 1999.
[39] Ultralytics, "Yolov5," https://github.com/ultralytics/yolov5, 2020.
[40] Y. Ko, Y. Lee, S. Azam, F. Munir, M. Jeon, and W. Pedrycz, "Key points estimation and point instance segmentation approach for lane detection," IEEE Transactions on Intelligent Transportation Systems, 2021.
[41] D. Bolya, C. Zhou, F. Xiao, and Y. J. Lee, "Yolact: Real-time instance segmentation," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 9157–9166.
[42] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? the kitti vision benchmark suite," in 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012, pp. 3354–3361.
[43] X. Chen, J. Wei, X. Ren, K. H. Johansson, and X. Wang, "Automatic overtaking on two-way roads with vehicle interactions based on proximal policy optimization," in 2021 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2021, pp. 1057–1064.
[44] H. Xiao, C. Wang, Z. Li, R. Wang, C. Bo, M. A. Sotelo, and Y. Xu, "Ub-lstm: a trajectory prediction method combined with vehicle behavior recognition," Journal of Advanced Transportation, vol. 2020, 2020.
[45] T. Chen and C. Guestrin, "Xgboost: A scalable tree boosting system," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 785–794.
[46] C. Wang, C. Deng, and S. Wang, "Imbalance-xgboost: Leveraging weighted and focal losses for binary label-imbalanced classification with xgboost," Pattern Recognition Letters, vol. 136, pp. 190–197, 2020.
[47] F. Giuliari, I. Hasan, M. Cristani, and F. Galasso, "Transformer networks for trajectory forecasting," in 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021, pp. 10335–10342.