參考文獻 (References)
[1] J. Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," Proc. IEEE Int. Conf. Computer Vision (ICCV), Venice, Italy, Oct. 2017, pp. 2223–2232.
[2] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," Advances in Neural Information Processing Systems, Montreal, Canada, Dec. 2014, pp. 2672–2680.
[3] M. Mirza and S. Osindero, "Conditional generative adversarial nets," arXiv preprint arXiv:1411.1784, 2014.
[4] P. Isola, J. Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., HI, USA, Jul. 2017, pp. 1125–1134.
[5] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, Nov. 1998.
[6] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, NV, USA, Dec. 2012, pp. 1097–1105.
[7] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[8] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., NV, USA, Jun. 2016, pp. 770–778.
[9] M. D. Zeiler, D. Krishnan, G. W. Taylor, and R. Fergus, "Deconvolutional networks," Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., CA, USA, Jun. 2010, pp. 2528–2535.
[10] V. Badrinarayanan, A. Kendall, and R. Cipolla, "SegNet: A deep convolutional encoder-decoder architecture for image segmentation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 12, pp. 2481–2495, Dec. 2017.
[11] U. Demir and G. Unal, "Patch-based image inpainting with generative adversarial networks," arXiv preprint arXiv:1803.07422, 2018.
[12] K. Shmelkov, C. Schmid, and K. Alahari, "How good is my GAN?," Proc. European Conf. Computer Vision (ECCV), Munich, Germany, Sep. 2018, pp. 213–229.
[13] Y. Lu, Y. W. Tai, and C. K. Tang, "Attribute-guided face generation using conditional CycleGAN," Proc. European Conf. Computer Vision (ECCV), Munich, Germany, Sep. 2018, pp. 282–297.
[14] B. Chang, Q. Zhang, S. Pan, and L. Meng, "Generating handwritten Chinese characters using CycleGAN," IEEE Winter Conf. Applications of Computer Vision (WACV), NV, USA, Mar. 2018, pp. 199–207.
[15] H. Lipson and M. Kurman, Driverless: Intelligent Cars and the Road Ahead. MIT Press, 2016.
[16] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., MA, USA, Jun. 2015, pp. 3431–3440.
[17] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," Int. Conf. Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, Oct. 2015, pp. 234–241.
[18] 張斐張, 類神經網路 [Neural Networks]. Taipei, Taiwan: 東華, 2005.
[19] A. Shaban, S. Bansal, Z. Liu, I. Essa, and B. Boots, "One-shot learning for semantic segmentation," arXiv preprint arXiv:1709.03410, 2017.
[20] 王維嘉, AI背後的暗知識：機器如何學習、認知與改造我們的未來世界 [The Dark Knowledge Behind AI: How Machines Learn, Perceive, and Transform Our Future World]. Taipei, Taiwan: 大寫出版, 2019.
[21] C. Balakrishna, S. Dadashzadeh, and S. Soltaninejad, "Automatic detection of lumen and media in the IVUS images using U-Net with VGG16 encoder," arXiv preprint arXiv:1806.07554, 2018.
[22] A. Almahairi, S. Rajeswar, A. Sordoni, P. Bachman, and A. Courville, "Augmented CycleGAN: Learning many-to-many mappings from unpaired data," arXiv preprint arXiv:1802.10151, 2018.
[23] J. Lin, Y. Xia, T. Qin, Z. Chen, and T. Y. Liu, "Conditional image-to-image translation," Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., UT, USA, Jun. 2018, pp. 5524–5532.
[24] X. Wang, H. Yan, C. Huo, J. Yu, and C. Pant, "Enhancing Pix2Pix for remote sensing image classification," Proc. Int. Conf. Pattern Recognition (ICPR), Beijing, China, Aug. 2018, pp. 2332–2336.