National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Detailed Record

Student: Lin-Yu Ko (柯林佑)
Title: BatikGAN: A Generative Adversarial Network for Batik Creation
Advisor: Wei-Ta Chu (朱威達)
Committee Members: Kai-Lung Hua, Wei-Ta Chu, Chih-Chung Hsu, Ching-Chun Huang
Oral Defense Date: 2019-06-27
Degree: Master's
Institution: National Chung Cheng University (國立中正大學)
Department: Institute of Computer Science and Information Engineering (資訊工程研究所)
Discipline: Engineering
Academic Field: Electrical Engineering and Computer Science
Thesis Type: Academic thesis
Year of Publication: 2019
Graduation Academic Year: 107
Language: Chinese
Pages: 34
Keywords: texture synthesis, image generation, generative adversarial network
Metrics:
  • Cited by: 0
  • Views: 246
  • Rating: (none)
  • Downloads: 13
  • Bookmarked: 0
Abstract (translated from Chinese):
Image generation has long been one of the most important areas of computer vision, and over the past two to three decades texture synthesis has been a popular topic within it. Such research synthesizes or expands new textures from limited texture data. This thesis proposes a texture synthesis method that, given two patches, produces a Batik image that matches both patch patterns while fusing the two styles into a harmonious whole. We adopt two-stage training so that the generated images become progressively sharper. By adding a local discriminator, we remove the blocking artifacts between patches, making the lines in the image more continuous. In the experiments, we incorporate different features step by step, and the images produced by the model gradually improve. We also apply the method to a dataset of regular textures, demonstrating that it works there as well. Finally, we conduct a user study; the results are affirmed by the participants, confirming that the generated images are highly harmonious.
Abstract (English):
Image generation has been one of the most important fields in computer vision. Over the past two decades, texture synthesis has been a popular line of study. This kind of research synthesizes or expands texture based on a small patch. In this thesis, we propose a regular-texture synthesis method based on two patches. The generation model fuses the styles of the two patches and generates a harmonious Batik image. We adopt two-stage training so that images are generated progressively more clearly. By adding a local discriminator, we remove blocking artifacts between patches. In the experiments, by considering features progressively, the generator learns how to fuse the two styles, removes the blocking artifacts, and generates a harmonious image. We also show that the proposed method can generate texture images other than Batik images. Furthermore, we conduct a comprehensive user study and show promising results.
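The abstract describes a setup in which two style patches are composed into one target image, and a local discriminator scores regions around the patch boundary to suppress blocking artifacts. The sketch below is only an illustration of that data layout, not the thesis implementation: it tiles two patches side by side and extracts the seam-centered crops that a local discriminator would receive. The function names (`make_layout`, `boundary_crops`) and the crop/stride sizes are hypothetical.

```python
import numpy as np

def make_layout(patch_a: np.ndarray, patch_b: np.ndarray) -> np.ndarray:
    """Tile two equally sized HxWxC patches side by side into one image."""
    assert patch_a.shape == patch_b.shape
    return np.concatenate([patch_a, patch_b], axis=1)

def boundary_crops(image: np.ndarray, crop: int, stride: int):
    """Extract square crops centered on the vertical seam between the patches.

    These straddling crops are the kind of local regions a local
    discriminator would score to penalize blocking artifacts.
    """
    h, w, _ = image.shape
    seam = w // 2          # the patch boundary after side-by-side tiling
    half = crop // 2
    return [image[top:top + crop, seam - half:seam + half, :]
            for top in range(0, h - crop + 1, stride)]

# Two toy "style" patches: constant 0s and constant 1s.
patch_a = np.zeros((64, 64, 3), dtype=np.float32)
patch_b = np.ones((64, 64, 3), dtype=np.float32)

layout = make_layout(patch_a, patch_b)            # shape (64, 128, 3)
crops = boundary_crops(layout, crop=32, stride=16)

print(layout.shape, len(crops), crops[0].shape)   # (64, 128, 3) 3 (32, 32, 3)
```

Each crop covers both sides of the seam, so a discriminator trained on such crops sees exactly the transition region where blockiness would appear; in a full GAN these crops would feed a small CNN alongside the global discriminator.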
Table of Contents
1 Introduction
  1.1 Motivation
  1.2 System Overview
  1.3 Contributions
  1.4 Thesis Organization
2 Related Works
  2.1 Style Transfer
  2.2 Generative Adversarial Network for Texture Synthesis
  2.3 Short Summary
3 Framework
  3.1 Patch Generator
  3.2 Batik Generator
    3.2.1 BatikGAN
    3.2.2 BatikGAN with Style Features
    3.2.3 BatikGAN with Style and Local Features
  3.3 Summary
4 Experiment
  4.1 Experimental Settings
    4.1.1 Dataset
    4.1.2 Training Details
    4.1.3 Evaluation
  4.2 Experimental Results
    4.2.1 Patch Generation Results
    4.2.2 Batik Image Generation Results
    4.2.3 Performance Comparison
    4.2.4 Comparison with Other Methods
    4.2.5 Texture Synthesis
  4.3 User Study
5 Conclusion and Future Works
  5.1 Conclusion
  5.2 Future Works
References