National Digital Library of Theses and Dissertations in Taiwan
Detailed Record

我願授權國圖
: 
twitterline
Graduate Student: Kai-Yang Yu (喻凱揚)
Thesis Title: FCRGAN: Face Color Restoration Under Complicated Lighting based on GAN Network (基於生成對抗網路之複雜光源人臉膚色還原)
Advisor: Chih-Yuan Yao (姚智原)
Committee Members: Chih-Yuan Yao, Hung-Kuo Chu, Yung-Yu Chuang, Min-Jyun Hu, Guo-Liang Jhong
Oral Defense Date: 2022-08-12
Degree: Master's
Institution: National Taiwan University of Science and Technology
Department: Department of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2022
Academic Year of Graduation: 110 (2021–2022)
Language: Chinese
Pages: 49
Chinese Keywords: deep learning; generative adversarial networks; image processing; light source removal; skin color restoration
English Keywords: GAN; StyleGAN; Color Restoration; Relighting
Abstract (translated from the Chinese):
This thesis addresses light-source removal and skin-color restoration for faces photographed under complex ambient lighting. Complex light sources (e.g., the glow of arcade machines in a dim amusement arcade, the multicolored lights of nightclubs, or neon signs) cast onto a face cause the overall skin tone to be overwhelmed by the strong colored light, and at present the only way to remove such color casts is manual editing in photo-retouching software such as Photoshop. That workflow demands substantial software and image-processing expertise, and correcting the face color also costs considerable time and many steps. To solve this problem, we built a complex-lighting dataset containing faces under complex lighting and faces under natural lighting as inputs and targets, and we propose an architecture that uses StyleGANV2 [2] as the generator. We further explore each Style Code in the StyleGAN latent space, attach an MLP module to the color-critical Style Codes to learn the color mapping, and use a HyperNetwork architecture to further improve the restoration of facial details. The proposed architecture can restore facial skin color under a wide variety of light sources in only 0.4 seconds of execution time.
This paper mainly focuses on light source removal and skin color restoration for faces captured under complex ambient light sources. Complex light sources (such as light from various machines in the dim environment of an amusement arcade, colorful lights used in nightclubs, neon lights, etc.) cast onto the face cause the overall skin color to be covered by strong light colors, and at present the only way to eliminate the color of this type of light source is manual editing in photo-editing software such as Photoshop [1]. This operation not only requires a wealth of software knowledge and image-processing knowledge, but also requires a lot of time and many steps to correct the face color. In order to solve this problem, we generated a dataset of complex light sources, including faces under complex light sources and faces under natural light sources as inputs and targets, and proposed an architecture using StyleGANV2 [2] as the generator to further explore each Style Code in the latent space of StyleGAN [2]. For the key Style Codes that affect color, an MLP module is added to learn the color correspondence, and the HyperNetwork architecture is used to further improve the restoration of facial details. The architecture proposed in this thesis can restore the skin color of the face under various light sources, and inference takes only 0.4 seconds.
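The color-correction stage described in the abstract can be sketched as follows. This is a minimal illustration, not the author's code: the split between "color-critical" and identity-preserving Style Codes (here indices 8–13), the number of codes (14 for a 256×256 StyleGAN2), and the two-layer MLP shape are all assumptions; in the thesis the relevant codes are identified empirically via style-mixing experiments.

```python
import numpy as np

# Assumed dimensions: StyleGAN2 W+ latent with 14 style codes of dim 512.
N_CODES, DIM = 14, 512
# Hypothetical set of color-sensitive code indices (determined in the
# thesis by style-mixing experiments; these indices are illustrative).
COLOR_CODE_IDX = range(8, 14)

def mlp(w, params):
    """Two-layer MLP mapping one 512-d style code to a corrected code."""
    w1, b1, w2, b2 = params
    h = np.maximum(0.0, w @ w1 + b1)   # ReLU hidden layer
    return h @ w2 + b2

def restore_color(wplus, params):
    """Apply the color-mapping MLP only to the color-sensitive style
    codes; all other codes pass through unchanged, so identity and
    structure encoded by the remaining codes are preserved."""
    out = wplus.copy()
    for i in COLOR_CODE_IDX:
        out[i] = mlp(wplus[i], params)
    return out

rng = np.random.default_rng(0)
params = (rng.normal(0.0, 0.02, (DIM, DIM)), np.zeros(DIM),
          rng.normal(0.0, 0.02, (DIM, DIM)), np.zeros(DIM))
wplus = rng.normal(size=(N_CODES, DIM))      # stand-in for an encoded face
restored = restore_color(wplus, params)

assert restored.shape == (N_CODES, DIM)
assert np.allclose(restored[0], wplus[0])    # non-color codes untouched
```

In the full pipeline the corrected W+ codes would be fed to the StyleGAN2 generator, with a HyperNetwork (as in HyperStyle [4]) additionally predicting per-image offsets to the generator weights to recover fine facial detail.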
Abstract (Chinese) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . II
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . III
Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IV
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . VI
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IX
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2.1 Image-to-Image Translation . . . . . . . . . . . . . . . . . . . . . . . . . 2
2.2 Image Colorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.3 Relighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.4 StyleGAN [2] Image Generation . . . . . . . . . . . . . . . . . . . . . . 7
3 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.1.1 ReStyle [3] Architecture . . . . . . . . . . . . . . . . . . . . . . 9
3.1.2 HyperStyle [4] Architecture . . . . . . . . . . . . . . . . . . . . 10
3.2 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.3 Encoder Architecture Design . . . . . . . . . . . . . . . . . . . . . . . . 11
3.3.1 Encoding Images into the W+ Latent Space . . . . . . . . . . . . 11
3.3.2 Color Transformation . . . . . . . . . . . . . . . . . . . . . . . . 15
3.3.3 Facial Detail Enhancement . . . . . . . . . . . . . . . . . . . . . 18
3.3.4 Loss Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4 Experimental Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.1 Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.2 Style Mixing Experiments . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.3 Style Code Channel Experiments . . . . . . . . . . . . . . . . . . . . . . 27
4.4 Light Removal Quality Evaluation . . . . . . . . . . . . . . . . . . . . . 30
4.4.1 Quality Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.4.2 User Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.4.3 Quantitative Evaluation . . . . . . . . . . . . . . . . . . . . . . 35
5 Conclusion and Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . 36
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
[1] Adobe Inc., “Adobe Photoshop.”
[2] T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), December 2018.
[3] Y. Alaluf, O. Patashnik, and D. Cohen-Or, “ReStyle: A residual-based StyleGAN encoder via iterative refinement,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), April 2021.
[4] Y. Alaluf, O. Tov, R. Mokady, R. Gal, and A. H. Bermano, “HyperStyle: StyleGAN inversion with hypernetworks for real image editing,” 2021.
[5] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5967–5976, 2017.
[6] R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in European Conference on Computer Vision (ECCV), 2016.
[7] T. Nestmeyer, J.-F. Lalonde, I. Matthews, and A. Lehrmann, “Learning physics-guided face relighting under directional light,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
[8] R. Pandey, S. Orts-Escolano, C. LeGendre, C. Haene, S. Bouaziz, C. Rhemann, P. Debevec, and S. Fanello, “Total relighting: Learning to relight portraits for background replacement,” vol. 40, August 2021.
[9] A. Hertzmann, C. E. Jacobs, N. Oliver, B. Curless, and D. H. Salesin, “Image analogies,” in SIGGRAPH ’01: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, (New York, NY, USA), pp. 327–340, ACM Press, 2001.
[10] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9, 2015, Conference Track Proceedings (Y. Bengio and Y. LeCun, eds.), 2015.
[11] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” CoRR, vol. abs/1505.04597, 2015.
[12] H. Zhou, S. Hadap, K. Sunkavalli, and D. W. Jacobs, “Deep single-image portrait relighting,” in International Conference on Computer Vision (ICCV), October 2019.
[13] Z. Wu, D. Lischinski, and E. Shechtman, “StyleSpace analysis: Disentangled controls for StyleGAN image generation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
[14] E. Richardson, Y. Alaluf, O. Patashnik, Y. Nitzan, Y. Azar, S. Shapiro, and D. Cohen-Or, “Encoding in style: A StyleGAN encoder for image-to-image translation,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2021.
[15] J. Deng, J. Guo, N. Xue, and S. Zafeiriou, “ArcFace: Additive angular margin loss for deep face recognition,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
[16] Daz 3D, “DAZ Studio.”
[17] 黃種人臉數據集 (East Asian face dataset).
Electronic full text (public release date on the internet: 2024-08-29)