[1] O. Chapelle, B. Schölkopf, and A. Zien. Semi-Supervised Learning. 2006.
[2] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In CVPR, 2018.
[3] Z. Dai, Z. Yang, F. Yang, W. W. Cohen, and R. Salakhutdinov. Good semi-supervised learning that requires a bad GAN. In NIPS, 2017.
[4] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[5] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.
[6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
[7] Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. In NIPS, 2005.
[8] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville. Improved training of Wasserstein GANs. In NIPS, 2017.
[9] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[10] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[11] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
[12] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
[13] W.-C. Kang, C. Fang, Z. Wang, and J. McAuley. Visually-aware fashion recommendation and design with generative image models. 2017.
[14] T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim. Learning to discover cross-domain relations with generative adversarial networks. In ICML, 2017.
[15] D. P. Kingma and J. L. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[16] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[17] A. Krause, P. Perona, and R. G. Gomes. Discriminative clustering by regularized information maximization. In NIPS, 2010.
[18] S. Laine and T. Aila. Temporal ensembling for semi-supervised learning. In ICLR, 2017.
[19] D.-H. Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In ICML Workshop on Challenges in Representation Learning, 2013.
[20] C. Li and M. Wand. Precomputed real-time texture synthesis with Markovian generative adversarial networks. In ECCV, 2016.
[21] C. Li, K. Xu, J. Zhu, and B. Zhang. Triple generative adversarial nets. In NIPS, 2017.
[22] M.-Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. In NIPS, 2017.
[23] M.-Y. Liu and O. Tuzel. Coupled generative adversarial networks. In NIPS, 2016.
[24] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In ICCV, 2015.
[25] M. Lucic, K. Kurach, M. Michalski, S. Gelly, and O. Bousquet. Are GANs created equal? A large-scale study. ArXiv e-prints, 2017.
[26] M. Mirza and S. Osindero. Conditional generative adversarial nets. ArXiv e-prints, 2014.
[27] T. Miyato, S.-i. Maeda, M. Koyama, and S. Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. ArXiv e-prints, 2017.
[28] A. Odena. Semi-supervised learning with generative adversarial networks. In ICML Workshop, 2016.
[29] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with ladder networks. In NIPS, 2015.
[30] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. In ICML, 2016.
[31] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
[32] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In NIPS, 2016.
[33] Y.-S. Shih, K.-Y. Chang, H.-T. Lin, and M. Sun. Compatibility family learning for item recommendation and generation. In AAAI, 2018.
[34] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[35] J. T. Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. In ICLR, 2016.
[36] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.
[37] Y. Taigman, A. Polyak, and L. Wolf. Unsupervised cross-domain image generation. In ICLR, 2017.
[38] D. Ulyanov, V. Lebedev, A. Vedaldi, and V. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In ICML, 2016.
[39] D. Ulyanov, A. Vedaldi, and V. Lempitsky. Instance normalization: The missing ingredient for fast stylization. ArXiv e-prints, 2016.
[40] Z. Yi, H. Zhang, P. Tan, and M. Gong. DualGAN: Unsupervised dual learning for image-to-image translation. In ICCV, 2017.
[41] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.
[42] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. Metaxas. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, 2017.
[43] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017.
[44] S. Zhu, S. Fidler, R. Urtasun, D. Lin, and C. C. Loy. Be your own Prada: Fashion synthesis with structural coherence. In ICCV, 2017.