[1] Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in IEEE International Conference on Computer Vision (ICCV), pp. 3730–3738, 2015.
[2] C. Doersch, S. Singh, A. Gupta, J. Sivic, and A. A. Efros, “What makes Paris look like Paris?,” ACM Transactions on Graphics (Proc. SIGGRAPH), vol. 31, pp. 101:1–101:9, July 2012.
[3] S. Iizuka, E. Simo-Serra, and H. Ishikawa, “Globally and locally consistent image completion,” ACM Transactions on Graphics (Proc. SIGGRAPH), vol. 36, pp. 107:1–107:14, July 2017.
[4] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang, “Generative image inpainting with contextual attention,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[5] K. Nazeri, E. Ng, T. Joseph, F. Qureshi, and M. Ebrahimi, “EdgeConnect: Generative image inpainting with adversarial edge learning,” arXiv:1901.00212 [cs.CV], 2019.
[6] C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman, “PatchMatch: A randomized correspondence algorithm for structural image editing,” ACM Transactions on Graphics (Proc. SIGGRAPH), vol. 28, August 2009.
[7] A. Newson, A. Almansa, M. Fradet, Y. Gousseau, and P. Pérez, “Video inpainting of complex scenes,” SIAM Journal on Imaging Sciences, vol. 7, no. 4, pp. 1993–2019, 2014.
[8] R. Yeh, C. Chen, T. Yian Lim, M. Hasegawa-Johnson, and M. N. Do, “Semantic image inpainting with perceptual and contextual losses,” arXiv:1607.07539 [cs.CV], July 2016.
[9] D. Simakov, Y. Caspi, E. Shechtman, and M. Irani, “Summarizing visual data using bidirectional similarity,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–8, 2008.
[10] J.-B. Huang, S. B. Kang, N. Ahuja, and J. Kopf, “Image completion using planar structure guidance,” ACM Transactions on Graphics (Proc. SIGGRAPH), vol. 33, no. 4, pp. 129:1–129:10, 2014.
[11] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2536–2544, 2016.
[12] G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro, “Image inpainting for irregular holes using partial convolutions,” in European Conference on Computer Vision (ECCV), September 2018.
[13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
[14] C. Ballester, M. Bertalmio, V. Caselles, G. Sapiro, and J. Verdera, “Filling-in by joint interpolation of vector fields and gray levels,” IEEE Transactions on Image Processing, vol. 10, pp. 1200–1211, August 2001.
[15] L. Xu, J. S. Ren, C. Liu, and J. Jia, “Deep convolutional neural network for image deconvolution,” in Advances in Neural Information Processing Systems 27, pp. 1790–1798, Curran Associates, Inc., 2014.
[16] C. Yang, X. Lu, Z. Lin, E. Shechtman, O. Wang, and H. Li, “High-resolution image inpainting using multi-scale neural patch synthesis,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[17] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European Conference on Computer Vision (ECCV), 2016.
[18] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, June 2016.
[19] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5967–5976, 2017.
[20] T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro, “High-resolution image synthesis and semantic manipulation with conditional GANs,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[21] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida, “Spectral normalization for generative adversarial networks,” in International Conference on Learning Representations (ICLR), 2018.
[22] J. H. Lim and J. C. Ye, “Geometric GAN,” arXiv:1705.02894 [stat.ML], 2017.
[23] L. A. Gatys, A. S. Ecker, and M. Bethge, “Image style transfer using convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2414–2423, 2016.
[24] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in International Conference on Learning Representations (ICLR), 2015.
[25] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp. 211–252, 2015.
[26] X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy, “ESRGAN: Enhanced super-resolution generative adversarial networks,” in European Conference on Computer Vision Workshops (ECCVW), September 2018.
[27] M. S. M. Sajjadi, B. Schölkopf, and M. Hirsch, “EnhanceNet: Single image super-resolution through automated texture synthesis,” in IEEE International Conference on Computer Vision (ICCV), pp. 4501–4510, December 2017.
[28] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv:1412.6980 [cs.LG], 2014.
[29] J. Canny, “A computational approach to edge detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. PAMI-8, pp. 679–698, November 1986.