[1] Y. Wang, H. Wang, Y. Shen, J. Fei, W. Li, G. Jin, L. Wu, R. Zhao, and X. Le, "Semi-supervised semantic segmentation using unreliable pseudo-labels," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4248–4257, 2022.
[2] L. Yang, W. Zhuo, L. Qi, Y. Shi, and Y. Gao, "ST++: Make self-training work better for semi-supervised semantic segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4268–4277, 2022.
[3] M. Zheng, S. You, L. Huang, F. Wang, C. Qian, and C. Xu, "Simmatch: Semi-supervised learning with similarity matching," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14471–14481, 2022.
[4] L. Yang, L. Qi, L. Feng, W. Zhang, and Y. Shi, "Revisiting weak-to-strong consistency in semi-supervised semantic segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7236–7246, 2023.
[5] X. Chen, Y. Yuan, G. Zeng, and J. Wang, "Semi-supervised semantic segmentation with cross pseudo supervision," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2613–2622, 2021.
[6] J. Li, C. Xiong, and S. C. Hoi, "Comatch: Semi-supervised learning with contrastive graph regularization," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9475–9484, 2021.
[7] Y. Li, L. Yuan, and N. Vasconcelos, "Bidirectional learning for domain adaptation of semantic segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6936–6945, 2019.
[8] F. Pizzati, R. d. Charette, M. Zaccaria, and P. Cerri, "Domain bridge for unpaired image-to-image translation and unsupervised domain adaptation," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2990–2998, 2020.
[9] R. Gong, W. Li, Y. Chen, D. Dai, and L. Van Gool, "Dlow: Domain flow and applications," International Journal of Computer Vision, vol. 129, no. 10, pp. 2865–2888, 2021.
[10] T.-H. Vu, H. Jain, M. Bucher, M. Cord, and P. Pérez, "Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2517–2526, 2019.
[11] K. Saito, K. Watanabe, Y. Ushiku, and T. Harada, "Maximum classifier discrepancy for unsupervised domain adaptation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3723–3732, 2018.
[12] Y. Luo, P. Liu, L. Zheng, T. Guan, J. Yu, and Y. Yang, "Category-level adversarial adaptation for semantic segmentation using purified features," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 8, pp. 3940–3956, 2021.
[13] B. Sun and K. Saenko, "Deep coral: Correlation alignment for deep domain adaptation," in Computer Vision–ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part III 14, pp. 443–450, Springer, 2016.
[14] M. Long, Z. Cao, J. Wang, and M. I. Jordan, "Conditional adversarial domain adaptation," Advances in Neural Information Processing Systems, vol. 31, 2018.
[15] K. Mei, C. Zhu, J. Zou, and S. Zhang, "Instance adaptive self-training for unsupervised domain adaptation," in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVI 16, pp. 415–430, Springer, 2020.
[16] N. Araslanov and S. Roth, "Self-supervised augmentation consistency for adapting semantic segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15384–15394, 2021.
[17] J. Choi, T. Kim, and C. Kim, "Self-ensembling with GAN-based data augmentation for domain adaptation in semantic segmentation," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6830–6840, 2019.
[18] L. Melas-Kyriazi and A. K. Manrai, "Pixmatch: Unsupervised domain adaptation via pixelwise consistency training," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12435–12445, 2021.
[19] W. Tranheden, V. Olsson, J. Pinto, and L. Svensson, "Dacs: Domain adaptation via cross-domain mixed sampling," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1379–1389, 2021.
[20] Q. Zhou, Z. Feng, Q. Gu, J. Pang, G. Cheng, X. Lu, J. Shi, and L. Ma, "Context-aware mixup for domain adaptive semantic segmentation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 2, pp. 804–817, 2022.
[21] V. Olsson, W. Tranheden, J. Pinto, and L. Svensson, "Classmix: Segmentation-based data augmentation for semi-supervised learning," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1369–1378, 2021.
[22] Y. Zou, Z. Yu, B. Kumar, and J. Wang, "Unsupervised domain adaptation for semantic segmentation via class-balanced self-training," in Proceedings of the European Conference on Computer Vision (ECCV), pp. 289–305, 2018.
[23] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, "mixup: Beyond empirical risk minimization," in International Conference on Learning Representations, 2018.
[24] S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo, "Cutmix: Regularization strategy to train strong classifiers with localizable features," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6023–6032, 2019.
[25] H. K. Choi, J. Choi, and H. J. Kim, "Tokenmixup: Efficient attention-guided token-level data augmentation for transformers," Advances in Neural Information Processing Systems, vol. 35, pp. 14224–14235, 2022.
[26] A. Galdran, G. Carneiro, and M. A. González Ballester, "Balanced-mixup for highly imbalanced medical image classification," in Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 (M. de Bruijne, P. C. Cattin, S. Cotin, N. Padoy, S. Speidel, Y. Zheng, and C. Essert, eds.), (Cham), pp. 323–333, Springer International Publishing, 2021.
[27] R. Takahashi, T. Matsubara, and K. Uehara, "Ricap: Random image cropping and patching data augmentation for deep CNNs," in Asian Conference on Machine Learning, pp. 786–798, PMLR, 2018.
[28] T. Hong, Y. Wang, X. Sun, F. Lian, Z. Kang, and J. Ma, "Gradsalmix: Gradient saliency-based mix for image data augmentation," in 2023 IEEE International Conference on Multimedia and Expo (ICME), pp. 1799–1804, IEEE, 2023.
[29] L. Hoyer, D. Dai, H. Wang, and L. Van Gool, "Mic: Masked image consistency for context-enhanced domain adaptation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11721–11732, 2023.
[30] S. R. Richter, V. Vineet, S. Roth, and V. Koltun, "Playing for data: Ground truth from computer games," in Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, pp. 102–118, Springer, 2016.
[31] G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez, "The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3234–3243, 2016.
[32] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, "The cityscapes dataset for semantic urban scene understanding," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3213–3223, 2016.
[33] P. Zhang, B. Zhang, T. Zhang, D. Chen, Y. Wang, and F. Wen, "Prototypical pseudo label denoising and target structure learning for domain adaptive semantic segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12414–12424, 2021.
[34] L. Hoyer, D. Dai, and L. Van Gool, "Daformer: Improving network architectures and training strategies for domain-adaptive semantic segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9924–9935, 2022.
[35] L. Hoyer, D. Dai, and L. Van Gool, "Hrda: Context-aware high-resolution domain-adaptive semantic segmentation," in European Conference on Computer Vision, pp. 372–391, Springer, 2022.