臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

詳目顯示 (Detailed Record)

Author: 李維哲
Author (English): Wei-Che Li
Title: 非監督式影像轉換與無掩護影像之偽裝學演算法
Title (English): Unsupervised Image-to-Image Translation and Coverless Steganographic Algorithms
Advisor: 王宗銘
Advisor (English): Chung-Ming Wang
Committee Members: 林偉、蔡淵裕
Committee Members (English): Woei Lin, Yuan-Yu Tsai
Date of Oral Defense: 2021-05-31
Degree: Master's
Institution: 國立中興大學 (National Chung Hsing University)
Department: 資訊工程學系所 (Department of Computer Science and Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2021
Graduation Academic Year: 109 (2020–2021)
Language: Chinese
Number of Pages: 82
Keywords (Chinese): 偽裝學、機器學習、掩護合成偽裝學、無掩護影像、祕密訊息、影像資料庫
Keywords (English): cover synthesis steganography, coverless, image database, machine learning, steganography, secret message
This thesis studies image steganography and proposes two algorithms, one belonging to the cover synthesis category and the other to the coverless category of steganography. The former provides a secure secret communication technique with a high embedding capacity; the latter has a lower embedding capacity, but its secret communication is more secure than the former's.
The first algorithm in this thesis is “A Steganographic Algorithm Based on Multimodal Unsupervised Image-to-Image Translation.” In this algorithm, we use the MUNIT machine learning method to colorize shoe-outline images consisting of a black outline on a white background and embed the secret message during the colorization process. With pretrained models, the algorithm can generate a large number of stego shoe images in different styles and colors. Because these stego images are produced by the trained models and did not exist beforehand, the method belongs to cover synthesis steganography, as opposed to cover modification steganography. Since cover synthesis involves no original cover image, steganalysis methods can hardly determine whether hidden information is present by analyzing, comparing, or learning the differences between cover and stego images; the method therefore resists steganalytic attacks and achieves secret communication. Experimental results show that the average embedding rate over 5,000 synthesized images is 1.82 bits per pixel. The probability of a stego image being detected is close to 50%, almost identical to random guessing, so the probability of the message being cracked is extremely low and message delivery is secure.
The second algorithm in this thesis is “A Coverless Steganographic Algorithm Based on a Shared Image Database.” It assumes that the sender and the receiver hold the same image database. On this basis, we propose a coverless steganographic algorithm that carries out secret communication without any image processing. Because the images are never modified during the communication, a malicious interceptor cannot detect whether a secret message is present even after analyzing the transmitted images. The secret message represents the indices of images in the database: according to the secret message to be delivered, the embedding algorithm selects the corresponding images from the database, downscales them, and places them into an index image of 8×8 or 16×16 blocks. Upon receiving the index image, the receiver compares each block against the images in the database, obtains the corresponding image indices, and thereby decodes the secret message. We use both no-reference and full-reference image quality assessment metrics and store the quantified results in advance, which speeds up image matching, removes duplicate matching results, and ensures correct message extraction. Experimental results show that with an image database of 10,000 images, each 8×8 index image provides an average embedding capacity of 846.1 bits and each 16×16 index image provides an average of 3,384.5 bits. A security analysis shows that as long as the database contains at least 35 images, the embedding method offers security equivalent to that of a 128-bit secret key.
The contributions of this thesis are as follows. First, we propose an embedding algorithm based on machine-learning-generated images, which produces a large number of synthesized images for secure secret communication. Second, we propose a coverless steganographic algorithm that carries out secret communication by delivering only an unmodified index image. Third, by combining the two proposed algorithms, we can build a large database of synthesized images and conduct highly secure secret communication without modifying any image.
This thesis investigates image steganography and presents two algorithms: a cover synthesis steganographic algorithm and a coverless steganographic algorithm. The former provides a secure secret communication technique with high embedding capacity; the latter has a lower capacity but is much more secure than the former.
The first proposed algorithm is entitled “A Steganographic Algorithm Based on Multimodal Unsupervised Image-to-Image Translation.” In this algorithm, we employ the MUNIT machine learning method to colorize shoe images consisting of a white background and the black outline of a shoe. Taking advantage of pretrained models, our algorithm can produce a variety of shoe images with different styles and colors. In addition, since these images are synthesized by the pretrained models, our steganographic algorithm can embed the secret message into the shoe outlines during the image synthesis process without the need for any cover image. Experimental results show that our algorithm offers an average embedding rate of 1.82 bits per pixel over an image database containing 5,000 synthesized shoe images. Moreover, the probability of a synthesized image being detected as concealing a secret message is around 0.5, which is no better than a random guess.
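This record carries no code, but the cover-synthesis idea above lends itself to a small illustration: secret bits can steer which of several candidate outputs the generator synthesizes, so each generated image carries a fixed number of bits. The sketch below is a minimal, hypothetical Python example of that bit-to-choice mapping only; the parameter K (the number of distinguishable synthesis choices) and the function names are assumptions for illustration, not the thesis's actual outline-based embedding performed during MUNIT colorization.

```python
import math

def bits_to_choices(bits: str, k: int) -> list[int]:
    """Convert a bit string into synthesis choices in [0, k),
    consuming floor(log2(k)) bits per choice."""
    step = int(math.log2(k))              # bits carried by each synthesized image
    return [int(bits[i:i + step], 2) for i in range(0, len(bits) - step + 1, step)]

def choices_to_bits(choices: list[int], k: int) -> str:
    """Receiver side: recover the bit string from the observed choices."""
    step = int(math.log2(k))
    return "".join(format(c, f"0{step}b") for c in choices)

if __name__ == "__main__":
    K = 16                                 # hypothetical number of distinguishable styles
    secret = "1011001110101100"
    picks = bits_to_choices(secret, K)     # which style/color variant to synthesize
    assert choices_to_bits(picks, K) == secret
    print(picks)                           # [11, 3, 10, 12]
```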
The second algorithm, entitled “A Coverless Steganographic Algorithm Based on a Shared Image Database,” assumes that the sender and the receiver share the same image database. In this database, each image has a corresponding index encoded as a series of binary bits. For every segment of the secret message to be delivered, our steganographic algorithm first selects the corresponding image. The selected images are then resized to a smaller resolution and positioned as image blocks within an index image containing 8×8 or 16×16 blocks, so that each block represents one segment of the secret message. The receiver compares each image block with the images in the database to retrieve the corresponding index. We employ both no-reference and full-reference image quality assessment schemes to quantify image quality and record the derived quantities in a pre-designated dictionary, enabling us to retrieve the image index, and hence the corresponding message segment, efficiently. Experimental results show that for an image database containing 10,000 images, delivering an 8×8 index image offers an average embedding capacity of 846.1 bits, while a 16×16 index image provides an average of 3,384.5 bits. A security analysis shows that if the image database contains more than 35 images, the security of our algorithm is equivalent to that of a 128-bit secret key.
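As a minimal sketch of the coverless scheme described above, assuming only that the sender and receiver hold the same ordered image list: each 13-bit segment of the secret selects a database index, the chosen images would be downscaled and tiled into an 8×8 index image, and the receiver recovers the indices by matching blocks against a precomputed lookup. The thesis performs this matching with stored no-reference and full-reference IQA scores; the sketch below substitutes a plain hash lookup so it stays self-contained, and `IMAGE_DB`, `embed`, and `extract` are hypothetical names.

```python
import hashlib
import math

# Hypothetical shared database: sender and receiver must hold the same images
# in the same order (stand-in byte strings are used instead of real files).
IMAGE_DB = [f"image-{i:05d}".encode() for i in range(10_000)]

SEGMENT_BITS = int(math.log2(len(IMAGE_DB)))   # 13 bits per block for 10,000 images

# Receiver-side lookup, precomputed once: fingerprint -> database index.
# The thesis matches blocks via stored IQA quantities; a hash is used here
# only to keep the sketch self-contained.
LOOKUP = {hashlib.sha256(img).hexdigest(): i for i, img in enumerate(IMAGE_DB)}

def embed(secret_bits: str) -> list[bytes]:
    """Sender: map each 13-bit segment to an image; in the full scheme these
    images are downscaled and tiled into an 8x8 (64-block) index image."""
    blocks = []
    for i in range(0, len(secret_bits), SEGMENT_BITS):
        seg = secret_bits[i:i + SEGMENT_BITS].ljust(SEGMENT_BITS, "0")
        blocks.append(IMAGE_DB[int(seg, 2)])
    return blocks

def extract(blocks: list[bytes]) -> str:
    """Receiver: identify each block in the shared database and rebuild the bits."""
    return "".join(format(LOOKUP[hashlib.sha256(b).hexdigest()], f"0{SEGMENT_BITS}b")
                   for b in blocks)

if __name__ == "__main__":
    secret = "1101001011010011101010010011111"   # 31 bits; embed() zero-pads the tail
    blocks = embed(secret)
    assert extract(blocks).startswith(secret)
    print(len(blocks), "blocks ->", len(blocks) * SEGMENT_BITS, "bits of capacity")
```

At 64 blocks of 13 bits each, this toy version reaches 832 bits per 8×8 index image, in the same ballpark as the 846.1-bit average reported above.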
In conclusion, our work offers three contributions. First, we present a cover synthesis steganographic algorithm that performs covert communication and resists malicious steganalytic attacks. Second, we introduce a coverless steganographic algorithm that delivers secret messages through an index image without incurring any pixel changes. Finally, by combining the two proposed steganographic algorithms, we can generate a large database of synthesized images and perform covert communication that resists steganalytic attacks.
Abstract (Chinese) i
Abstract iii
Table of Contents v
List of Figures viii
List of Tables x
Chapter 1 Introduction 1
Chapter 2 Related Work 4
2.1 Unsupervised Learning 4
2.2 Unsupervised Image-to-Image Translation (UNIT) 4
2.3 BicycleGAN 5
2.4 Multimodal Unsupervised Image-to-Image Translation (MUNIT) 5
2.5 Single-Base Data Embedding 7
2.6 Ternary Coding 9
2.7 Coverless Information Hiding Based on the Generation of Anime Characters 11
Chapter 3 A Steganographic Algorithm Based on Multimodal Unsupervised Image-to-Image Translation 13
3.1 Message Embedding Algorithm 14
3.1.1 Phase 1: Image Generation 14
3.1.2 Phase 2: Secret Message Embedding 16
3.2 Message Extraction Algorithm 23
3.2.1 Phase 1: Secret Message Extraction 23
3.2.2 Phase 2: Inverse Base Conversion of the Secret Message 24
3.3 Experimental Results 25
3.3.1 Resulting Images 25
3.3.2 Embedding Capacity Analysis 28
3.3.3 No-Reference IQA 30
3.3.4 Full-Reference IQA 34
3.3.5 ECMV Steganalysis 37
3.3.6 LSBM Steganalysis 42
3.3.7 Improved LSBM Steganalysis 44
3.4 Security Analysis 46
3.5 Summary 47
Chapter 4 A Coverless Steganographic Algorithm Based on a Shared Image Database 49
4.1 Message Embedding Algorithm 51
4.1.1 Phase 1: Image Database Analysis 51
4.1.2 Phase 2: Constructing the Delivered Image 52
4.2 Message Extraction Algorithm 58
4.2.1 Phase 1: Image Segmentation 58
4.2.2 Phase 2: Secret Message Extraction 58
4.3 Experimental Results 59
4.3.1 Resulting Images 60
4.3.2 Image Matching Methods 63
4.3.3 Embedding Capacity Analysis 67
4.4 Security Analysis 71
4.5 Summary 73
Chapter 5 Conclusions and Future Work 74
5.1 Conclusions 74
5.2 Future Work 75
References 76
Chinese-English Glossary 79
English-Chinese Glossary 81
[1] A. Bogomjakov, C. Gotsman, and M. Isenburg, “Distortion Free Steganography for Polygonal Meshes,” Computer Graphics Forum, vol. 27, issue 2, pp. 637-642, April 2008.
[2] G. Cancelli, G. Doerr, M. Barni, and I. J. Cox, “A Comparative Study of ±1 Steganalyzers,” IEEE 10th Workshop on Multimedia Signal Processing, 8-10 October 2008.
[3] G. Cancelli, G. Doerr, I. J. Cox, and M. Barni, “Detection of ±1 LSB Steganography Based on the Amplitude of Histogram Local Extrema,” IEEE International Conference on Image Processing, 12-15 October 2008.
[4] Y. Cao, Z. Zhou, Q. M. J. Wu, C. Yuan, and X. Sun, “Coverless Information Hiding Based on the Generation of Anime Characters,” EURASIP Journal on Image and Video Processing, no. 36, September 2020.
[5] C. K. Chan and L. M. Cheng, “Hiding Data in Images by Simple LSB Substitution,” Pattern Recognition, vol. 37, issue 3, pp. 469-474, March 2004.
[6] W. S. Chen, Y. K. Liao, Y. T. Lin, and C. M. Wang, “A Novel General Multiple-base Data Embedding Algorithm,” Information Sciences, vol. 358-359, pp. 164-190, September 2016.
[7] X. Chen, Q. Zhang, M. Lin, G. Yang, and C. He, “No-reference Color Image Quality Assessment: from Entropy to Perceptual Quality,” EURASIP Journal on Image and Video Processing, no. 77, September 2019.
[8] A. Horé and D. Ziou, “Image Quality Metrics: PSNR vs. SSIM,” 20th International Conference on Pattern Recognition, 23-26 August 2010.
[9] N. C. Huang, M. T. Li, and C. M. Wang, “Toward Optimal Embedding Capacity for Permutation Steganography,” IEEE Signal Processing Letters, vol. 16, issue 9, pp. 802-805, September 2009.
[10] X. Huang, M. Y. Liu, S. Belongie, and J. Kautz, “Multimodal Unsupervised Image-to-Image Translation,” European Conference on Computer Vision, 8-14 September 2018.
[11] I. J. Kadhim, P. Premaratne, P. J. Vial, and B. Halloran, “Comprehensive Survey of Image Steganography: Techniques, Evaluations, and Trends in Future Research,” Neurocomputing, vol. 335, pp. 299-326, March 2019.
[12] J. Kodovsky, J. Fridrich, and V. Holub, “Ensemble Classifiers for Steganalysis of Digital Media,” IEEE Transactions on Information Forensics and Security, vol. 7, issue 2, pp. 432-444, April 2012.
[13] M. Y. Liu, T. Breuel, and J. Kautz, “Unsupervised Image-to-Image Translation Networks,” 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, California, USA, 2017.
[14] A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-reference Image Quality Assessment in the Spatial Domain,” IEEE Transactions on Image Processing, vol. 21, issue 12, pp. 4695-4708, December 2012.
[15] J. Qin, Y. Luo, X. Xiang, Y. Tan, and H. Huang, “Coverless Image Steganography: A Survey,” IEEE Access, vol. 7, pp. 171372-171394, November 2019.
[16] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved Techniques for Training GANs,” 30th International Conference on Neural Information Processing Systems, pp. 2234-2242, December 2016.
[17] Z. Wang and A. C. Bovik, “A Universal Image Quality Index,” IEEE Signal Processing Letters, vol. 9, issue 3, pp. 81-84, March 2002.
[18] Z. Wang and A. C. Bovik, “Mean Squared Error: Love It or Leave It? A New Look at Signal Fidelity Measures,” IEEE Signal Processing Magazine, vol. 26, issue 1, pp. 98-117, January 2009.
[19] Z. Wang and Q. Li, “Information Content Weighting for Perceptual Image Quality Assessment,” IEEE Transactions on Image Processing, vol. 20, issue 5, pp. 1185-1198, May 2011.
[20] X. Yu and N. Babaguchi, “An Improved Steganalysis Method of LSB Matching,” International Conference on Intelligent Information Hiding and Multimedia Signal Processing, 22 August 2008.
[21] J. Zhang, I. J. Cox, and G. Doerr, “Steganalysis for LSB Matching in Images with High-frequency Noise,” IEEE 9th Workshop on Multimedia Signal Processing, October 2007.
[22] L. Zhang, Y. Shen, and H. Li, “VSI: A Visual Saliency-Induced Index for Perceptual Image Quality Assessment,” IEEE Transactions on Image Processing, vol. 23, issue 10, pp. 4270-4281, October 2014.
[23] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The Unreasonable Effectiveness of Deep Features as a Perceptual Metric,” IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18-23 June 2018.
[24] J. Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman, “Toward Multimodal Image-to-Image Translation,” 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, California, USA, 2017.
[25] J. Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks,” IEEE International Conference on Computer Vision, 22-29 October 2017.