National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Student: Jose Jaena Mari Romabiles Ople
Student (Foreign Language): Jose Jaena Mari Ople
Thesis Title: Augmenting Super-Resolution using Neural Texture Transfer
Thesis Title (Foreign Language): Augmenting Super-Resolution using Neural Texture Transfer
Advisor: 花凱龍
Advisor (Foreign Language): Kai-Lung Hua
Committee Members: Arnulfo Azcarraga, 楊傳凱, 陳駿丞
Committee Members (Foreign Language): Arnulfo Azcarraga, Chuan-Kai Yang, Jun-Cheng Chen
Date of Oral Defense: 2020-01-20
Degree: Master's
Institution: National Taiwan University of Science and Technology
Department: Department of Computer Science and Information Engineering
Discipline: Engineering
Academic Field: Electrical and Information Engineering
Thesis Type: Academic thesis
Year of Publication: 2020
Graduation Academic Year: 108
Language: English
Number of Pages: 41
Keywords (Chinese): super-resolution; machine learning; computer vision; deep learning; texture transfer
Keywords (Foreign Language): super-resolution; machine learning; computer vision; deep learning; texture transfer
Usage Statistics:
  • Citations: 0
  • Views: 125
  • Downloads: 9
  • Bookmarks: 0
Recent deep learning approaches to single-image super-resolution (SISR) can generate high-definition textures for super-resolved (SR) images. However, they tend to hallucinate fake textures and may even produce artifacts. As an alternative to SISR, reference-based SR (RefSR) approaches use high-resolution (HR) reference images to supply HR details that are missing from the low-resolution input image. We propose a novel framework that leverages existing SISR approaches and augments them with RefSR. Specifically, we refine the output of SISR methods using neural texture transfer, in which HR features are queried from the reference (Ref) images. The query is conducted by computing the similarity between features of the low-resolution (LR) input image and features of the Ref images; the most similar HR features, patch-wise, are used to augment the output image of the SISR approach. Unlike past RefSR approaches, our method imposes no limitations on the Ref images. We show that our method drastically improves the performance of the base SISR approach.
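To make the patch-wise query described above concrete, here is a minimal PyTorch sketch of one plausible reading: LR-input and Ref features are unfolded into patches, matched by cosine similarity, and the best-matching HR Ref patches are folded back into an HR feature map. All names and shapes (`query_ref_features`, `lr_feat`, `ref_lr_feat`, `ref_hr_feat`) are hypothetical assumptions for illustration, not the thesis's actual implementation.

```python
# Sketch (assumed shapes/names): query HR Ref features by patch-wise
# cosine similarity between LR-input and Ref feature patches.
import torch
import torch.nn.functional as F

def query_ref_features(lr_feat, ref_lr_feat, ref_hr_feat, patch=3):
    """Assemble an HR feature map from the Ref-HR patches whose LR-scale
    counterparts best match each patch of the LR-input features.

    lr_feat:     (1, C, h, w)     features of the LR input
    ref_lr_feat: (1, C, H, W)     Ref features downsampled to LR scale
    ref_hr_feat: (1, C, H*s, W*s) Ref features at HR scale
    """
    pad = patch // 2
    # Unfold both LR-scale maps into flattened patches: (C*patch^2, #patches).
    lr_p = F.unfold(lr_feat, patch, padding=pad).squeeze(0)
    ref_p = F.unfold(ref_lr_feat, patch, padding=pad).squeeze(0)

    # Cosine similarity between every (Ref patch, LR patch) pair.
    sim = F.normalize(ref_p, dim=0).t() @ F.normalize(lr_p, dim=0)  # (R, L)
    best = sim.argmax(dim=0)  # index of the best Ref patch per LR patch

    # Gather the matching HR Ref patches (patch size scaled by the SR factor).
    s = ref_hr_feat.shape[-1] // ref_lr_feat.shape[-1]
    hr_p = F.unfold(ref_hr_feat, patch * s, stride=s, padding=pad * s).squeeze(0)
    chosen = hr_p[:, best].unsqueeze(0)  # (1, C*(patch*s)^2, L)

    # Fold overlapping patches back into an HR map, averaging the overlaps.
    h, w = lr_feat.shape[-2:]
    out_size = (h * s, w * s)
    num = F.fold(chosen, out_size, patch * s, stride=s, padding=pad * s)
    den = F.fold(torch.ones_like(chosen), out_size, patch * s,
                 stride=s, padding=pad * s)
    return num / den.clamp_min(1e-8)
```

Presumably, the resulting HR feature map is then fused with the base SISR network's output to synthesize the final SR image; the thesis covers these steps in Sections 2.2 and 2.3 of the table of contents below.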
Table of Contents:
1 Introduction
2 Method
  2.1 Patch Similarity
  2.2 Generating the HR Features
  2.3 Synthesizing the SR Image
  2.4 Training Objective
  2.5 Dataset
  2.6 Training Details
3 Experimental Results
  3.1 Improving SISR methods
  3.2 Effects of reference similarity
  3.3 Visualizing the Texture Features
4 Conclusions
  4.1 Future Work