Graduate Student: LI, MENG-WEI (李孟薇)
Title: An Anti-Blink Photography System Based on Face Landmarks and Perspective Transform (基於臉孔標記與透視變換之防眨眼拍照系統)
Advisor: KAU, LIH-JEN (高立人)
Committee Members: KAU, LIH-JEN; CHEN, CHUNG-PING; CHEN, YEN-LIN; CHIANG, HSIN-HAN
Oral Defense Date: 2020-07-28
Degree: Master's
Institution: National Taipei University of Technology
Department: Department of Electronic Engineering
Discipline: Engineering
Academic Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2020
Graduation Academic Year: 108 (2019–2020)
Language: Chinese
Pages: 71
Keywords (Chinese): 影像修復; 眨眼; 臉孔標記; 透視變換
Keywords (English): In-painting; Blink; Face landmarks; Perspective transform
Usage statistics:
  • Cited by: 0
  • Views: 48
  • Downloads: 0
  • Bookmarked: 0
Abstract (translated from Chinese):
In-painting is a widely discussed topic in digital image processing, and with the development of neural networks it has been applied in many different settings. This study targets users whose eyes appear closed or half-open in photographs because of spontaneous blinking. Although many deep neural networks can handle this problem, their computational complexity makes them difficult to port: the hardware must be powerful enough to run them.
This thesis therefore uses face landmarks and the perspective transform to propose an eye-region image replacement algorithm. It determines with 98% accuracy whether each user's eye region in an image needs to be replaced, effectively preserves each user's personal characteristics, and works even when the user wears glasses, so that every user in the image appears with open eyes. The algorithm's computational complexity is O(M³), where M is the number of reference points selected for the perspective transform (M = 8 in this thesis). This is far lower than that of a deep neural network, so the algorithm can be ported easily to various camera hardware, improving the prospects for widespread deployment. Moreover, the proposed eye-region replacement algorithm can work with a variety of face and eye detection methods: as long as a user's face can be correctly annotated with landmark points, the algorithm applies.
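The thesis decides whether each eye image needs replacement by comparing eye sizes derived from landmark coordinates (Section 3.5). As an illustrative sketch only, not necessarily the exact criterion used in the thesis, a common landmark-based openness measure is the eye aspect ratio, computed from the six contour points that 68-point landmark models place around each eye:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Openness measure from six eye-contour landmarks ordered p1..p6
    (corner, two upper-lid points, corner, two lower-lid points).
    Ratio of mean vertical lid distance to horizontal eye width."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])  # first upper/lower lid pair
    v2 = np.linalg.norm(eye[2] - eye[4])  # second upper/lower lid pair
    h = np.linalg.norm(eye[0] - eye[3])   # corner-to-corner width
    return (v1 + v2) / (2.0 * h)

# An open eye yields a clearly larger ratio than a nearly closed one.
open_eye = [(0, 3), (2, 5), (4, 5), (6, 3), (4, 1), (2, 1)]
closed_eye = [(0, 3), (2, 3.4), (4, 3.4), (6, 3), (4, 2.6), (2, 2.6)]
print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))
```

Thresholding such a ratio, or comparing it across users and frames as the thesis does with eye sizes, flags eyes that are caught mid-blink.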

Abstract (English):
In-painting is a widely discussed topic in the field of digital image processing, and with the development of neural networks it has been applied in a variety of settings. In this thesis, we focus on users whose eyes are closed or half-open due to spontaneous blinking when a picture is taken. Many kinds of DNN can successfully generate faces and eyes, but their computational complexity makes migration difficult: the computing power of the hardware must be strong enough to handle them.
Therefore, using face landmarks and the perspective transform, we propose an approach to eye-image replacement that reaches 98% accuracy in deciding whether an individual's eye image needs to be replaced. It also preserves the identity of users after replacement and works even when the user wears glasses, so that every user in the image appears with open eyes. The computational complexity of this approach is O(M³), where M is the number of reference points selected for the perspective transform (M = 8 in this thesis), which is much lower than that of a DNN, so the approach can be migrated to various hardware easily, increasing the possibility of widespread adoption. In addition, the approach can be adapted to various face and eye detection methods; it is feasible as long as face landmarks can be correctly placed on the user's face.
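The perspective transform at the core of the method (Sections 2.2–2.4 of the table of contents) has eight unknown parameters, so M = 8 point correspondences give an overdetermined linear system that can be solved by SVD-based least squares. The following is a minimal sketch of that estimation, assuming the standard homography parameterization with the bottom-right entry fixed to 1; the thesis's actual basis-point selection (Section 3.6) may differ.

```python
import numpy as np

def perspective_matrix(src, dst):
    """Least-squares estimate of the 3x3 perspective (homography) matrix
    mapping src points to dst points. Each point pair contributes two
    linear equations in the 8 unknowns (h33 fixed to 1); np.linalg.lstsq
    solves the resulting overdetermined system via SVD."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, p):
    """Apply the transform to a point in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Sanity check with M = 8 reference points: mapping a point set onto a
# shifted copy of itself recovers a pure translation.
pts = [(0, 0), (1, 0), (1, 1), (0, 1), (.5, 0), (1, .5), (.5, 1), (0, .5)]
H = perspective_matrix(pts, [(x + 2, y + 1) for x, y in pts])
print(warp_point(H, (0.5, 0.5)))  # -> approximately [2.5 1.5]
```

Once estimated from the basis points, the same matrix warps the open-eye region of the source face onto the blinking face before blending.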

Abstract (Chinese) i
Abstract (English) ii
Acknowledgments iv
Table of Contents v
List of Tables vii
List of Figures viii
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation 3
1.3 Literature Review 3
1.3.1 Face Detection 3
1.3.2 Eye Detection 5
1.3.3 Generative Adversarial Networks 6
1.3.4 Exemplar Generative Adversarial Networks 7
1.4 Thesis Organization 9
Chapter 2 Methods 10
2.1 Histogram of Oriented Gradients 10
2.2 Perspective Transform 10
2.3 Least Squares Method 12
2.4 Singular Value Decomposition 14
2.4.1 Singular Value Decomposition 14
2.4.2 Application of SVD to Least Squares 15
Chapter 3 Anti-Blink Algorithm 16
3.1 System Architecture 16
3.2 Camera Setup (A) 17
3.3 Face Landmarking (B) 17
3.4 Face Matching (C) 19
3.5 Eye-Size Comparison (D) 24
3.6 Basis Selection (E) 27
3.7 Eye Synthesis (F) 27
3.7.1 Deciding Whether Each Eye Image Needs Replacement 27
3.7.2 Computing the Perspective Transform Matrix 27
3.7.3 Selecting the Replacement Region 29
3.7.4 Blending 34
Chapter 4 Experimental Results and Performance Analysis 36
4.1 Performance Metrics 36
4.1.1 Face-Matching Accuracy (MattingAccuracy) 37
4.1.2 Improvement Rate of Eye-Opening Acceptance (ImprovementRate) 38
4.1.3 Replacement Accuracy (ReplaceAccuracy) 39
4.1.4 Replacement Precision (ReplacePrecision) 39
4.1.5 Replacement Recall (ReplaceRecall) 40
4.1.6 Replacement Success Rate (SuccessReplace) 40
4.1.7 Overall Success Rate (OverallSuccessRate) 40
4.2 Experiment 1: Static Scenes 41
4.2.1 Test Case 1: Static, Two People 41
4.2.2 Test Case 2: Static, Multiple People 44
4.2.3 Performance Analysis of Static-Scene Experiments 48
4.3 Experiment 2: Dynamic Scenes 54
4.3.1 Test Case 3: Dynamic, Three People 54
4.3.2 Test Case 4: Dynamic, Nine People 57
4.3.3 Performance Analysis of Dynamic-Scene Experiments 62
4.4 Algorithm Complexity Analysis 66
Chapter 5 Conclusion 67
References 68

Electronic full text (publicly available online from 2025-08-22)