Student: Maynard John C. Si
Thesis Title: Perspective Preserving Style Transfer for Interior Portraits
Advisor: Kai-Lung Hua (花凱龍)
Oral Examination Committee: Chuan-Kai Yang (楊傳凱), Jun-Cheng Chen (陳駿丞)
Defense Date: 2020-01-20
Degree: Master's
Institution: National Taiwan University of Science and Technology (國立臺灣科技大學)
Department: Computer Science and Information Engineering (資訊工程系)
Discipline: Engineering
Field: Electrical and Information Engineering
Thesis Type: Academic thesis
Publication Year: 2020
Graduation Academic Year: 108 (2019-2020)
Language: English
Pages: 40
Keywords: Non-photorealistic rendering; Style Transfer
Usage Statistics:
  • Cited: 0
  • Views: 19
  • Downloads: 0
Abstract:
One of the jarring limitations of existing style transfer techniques is their failure to capture the illusion of depth through perspective. They often produce flat-looking images because style elements are simply distributed uniformly across the image. Though recent methods attempt to alleviate this by using depth information to stylize the foreground and background differently, they still fail to capture the perspective of an image. When applied to interior portraits, where perspective is instinctively observed through the surfaces (walls, ceiling, floor), previous methods cause unwanted styling: style elements distort the boundaries of surfaces and fail to recede according to the perspective of those surfaces. In this paper, we develop a simple approach that effectively preserves the perspective of interior portraits during style transfer, yielding stylized images that distribute and warp style elements according to the perspective of the interior surfaces. Compared to existing methods, our approach generates images whose style elements recede toward the vanishing point of their respective surfaces. We also observe that our approach preserves depth information for some styles despite not extracting it from the content.
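The abstract outlines the core idea: treat each planar interior surface (wall, ceiling, floor) separately, stylize it in a rectified fronto-parallel view, and warp the stylized texture back with the surface's perspective so style elements recede toward its vanishing point. The thesis itself is not reproduced in this record, so the following is only a rough illustrative sketch of that idea, not the author's actual pipeline. It assumes OpenCV, a `stylize` callable (a hypothetical stand-in for any off-the-shelf style-transfer model), and known corner keypoints for one surface quadrilateral (the thesis obtains these with a keypoint extraction network, Section 3.1):

```python
import cv2
import numpy as np

def stylize_surface(content, corners, stylize, rect_size=(512, 512)):
    """Rectify one planar interior surface, stylize it fronto-parallel,
    then warp the stylized texture back into the original perspective.

    content : HxWx3 uint8 interior portrait
    corners : 4x2 array, surface quad in image coordinates
              (top-left, top-right, bottom-right, bottom-left)
    stylize : callable mapping an HxWx3 uint8 image to a stylized one
              (hypothetical stand-in for any style-transfer model)
    """
    w, h = rect_size
    src = np.asarray(corners, dtype=np.float32)
    dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)

    # Homography that flattens the surface into a fronto-parallel rectangle.
    H = cv2.getPerspectiveTransform(src, dst)
    flat = cv2.warpPerspective(content, H, (w, h))

    # Stylize in the rectified view, so strokes are laid out uniformly.
    flat_styled = stylize(flat)

    # Warp back: the inverse homography foreshortens the style elements,
    # so they recede toward the surface's vanishing point.
    back = cv2.warpPerspective(flat_styled, np.linalg.inv(H),
                               (content.shape[1], content.shape[0]))

    # Composite only the pixels inside the surface quad.
    mask = np.zeros(content.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, src.astype(np.int32), 255)
    out = content.copy()
    out[mask == 255] = back[mask == 255]
    return out
```

Repeating this per surface, using a surface map such as the one the table of contents lists in Section 3.2, would qualitatively reproduce the behavior the abstract claims; the method's actual keypoint extraction and perspective image reconstruction are detailed in Chapter 3 of the thesis.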
Table of Contents:
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . iv
Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Review of Related Literature . . . . . . . . . . . . . . . . . . . 5
3 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.1 Interior Surface Keypoint Extraction . . . . . . . . . . . . 8
3.1.1 Keypoint Extraction Network . . . . . . . . . . . 9
3.1.2 Selecting New Surface Keypoints . . . . . . . . . 11
3.2 Interior Surface Map Construction . . . . . . . . . . . . . 13
3.3 Style Transfer . . . . . . . . . . . . . . . . . . . . . . . . 15
3.3.1 Perspective Image Reconstruction . . . . . . . . . 17
4 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . 18
4.1 Implementation Details . . . . . . . . . . . . . . . . . . . 18
4.2 Perspective Preservation . . . . . . . . . . . . . . . . . . 19
4.3 Depth Map Comparison . . . . . . . . . . . . . . . . . . . 20
4.4 Stylization Quality Comparison . . . . . . . . . . . . . . 22
5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.1 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . 27
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Electronic Full Text: available online from 2025-02-06