[1] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems (NIPS), pp. 2672–2680, 2014.
[2] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," in International Conference on Learning Representations (ICLR), 2016.
[3] M.-Y. Liu and O. Tuzel, "Coupled generative adversarial networks," in Advances in Neural Information Processing Systems (NIPS), pp. 469–477, 2016.
[4] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb, "Learning from simulated and unsupervised images through adversarial training," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2107–2116, 2017.
[5] J. Donahue, P. Krähenbühl, and T. Darrell, "Adversarial feature learning," in International Conference on Learning Representations (ICLR), 2017.
[6] V. Dumoulin, I. Belghazi, B. Poole, A. Lamb, M. Arjovsky, O. Mastropietro, and A. Courville, "Adversarially learned inference," arXiv preprint arXiv:1606.00704, 2016.
[7] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5967–5976, 2017.
[8] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 2223–2232, 2017.
[9] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, pp. 504–507, 2006.
[10] M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein generative adversarial networks," in Proceedings of the International Conference on Machine Learning (ICML), pp. 214–223, 2017.
[11] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville, "Improved training of Wasserstein GANs," in Advances in Neural Information Processing Systems (NIPS), 2017.
[12] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo, "StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation," arXiv preprint arXiv:1711.09020, 2017.
[13] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 234–241, 2015.
[14] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.
[15] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269, 2017.
[16] J. Sun, W. Cao, Z. Xu, and J. Ponce, "Learning a convolutional neural network for non-uniform motion blur removal," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 769–777, 2015.
[17] L. Xu, J. S. Ren, C. Liu, and J. Jia, "Deep convolutional neural network for image deconvolution," in Advances in Neural Information Processing Systems (NIPS), pp. 1790–1798, 2014.
[18] S. Nah, T. H. Kim, and K. M. Lee, "Deep multi-scale convolutional neural network for dynamic scene deblurring," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[19] M. Noroozi, P. Chandramouli, and P. Favaro, "Motion deblurring in the wild," in German Conference on Pattern Recognition (GCPR), pp. 65–77, 2017.
[20] A. Chakrabarti, "A neural approach to blind motion deblurring," in Proceedings of the European Conference on Computer Vision (ECCV), pp. 221–235, 2016.
[21] G. Boracchi and A. Foi, "Modeling the performance of image restoration from motion blur," IEEE Transactions on Image Processing, vol. 21, no. 8, pp. 3502–3517, 2012.
[22] O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas, "DeblurGAN: Blind motion deblurring using conditional adversarial networks," arXiv preprint arXiv:1711.07064, 2017.
[23] B. Li, W. Ren, D. Fu, D. Tao, D. Feng, W. Zeng, and Z. Wang, "RESIDE: A benchmark for single image dehazing," arXiv preprint arXiv:1712.04143, 2017.
[24] M. Brown and S. Süsstrunk, "Multi-spectral SIFT for scene category recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 177–184, 2011.
[25] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517–6525, 2017.
[26] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[27] B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, "DehazeNet: An end-to-end system for single image haze removal," IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5187–5198, 2016.