
National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: Der-Lin Chang (張德林)
Thesis Title: Fusion of Infrared and Visual Images Using Saliency Extraction and Feature Enhancement (利用顯著擷取與特徵強化融合紅外光與可見光影像)
Advisor: Herng-Hua Chang (張恆華)
Oral Defense Committee: 丁肇隆, 黃乾綱, 張瑞益
Oral Defense Date: 2021-10-08
Degree: Master's
Institution: National Taiwan University
Department: Engineering Science and Ocean Engineering
Discipline: Engineering
Field: General Engineering
Thesis Type: Academic thesis
Publication Year: 2021
Graduation Academic Year: 109
Language: Chinese
Pages: 61
Keywords: Image fusion; Infrared image; Visual image; Saliency map; CLAHE image enhancement; Multi-scale transform based fusion
DOI: 10.6342/NTU202103838
Usage statistics:
  • Cited: 0
  • Views: 285
  • Downloads: 43
  • Bookmarked: 0
Chinese Abstract:
In recent years, with the development of technology and advances in sensor capabilities, image fusion has become an important part of image processing. Image fusion combines two or more input images of the same scene, captured by different sensors or under different imaging conditions, into a single output image that retains the important information of the inputs, offers better visual perception, and facilitates subsequent image processing tasks. Image fusion has been widely applied in many fields, including computer vision, surveillance systems, medical imaging, and remote sensing. Owing to their complementary imaging characteristics, visible and infrared images are one of the most popular combinations in this field. This study proposes the SEFE image fusion algorithm, which extracts the salient parts of the input images and improves visual quality. Taking a pair of registered infrared and visible images as input, the visible image is adjusted and enhanced with CLAHE histogram equalization, while a guided filter is used to extract the salient objects and detail components of the infrared image. An improved saliency map generation method based on the L0 smoothing filter then produces more complete salient regions. Weight maps for the two images are computed with priority given to the infrared salient regions. Finally, the edge information extracted from the infrared image by the guided filter is incorporated to produce the final fused image. Results show that, on images from the TNO dataset, the proposed SEFE algorithm yields fusion results with better visual perception, an average information entropy of 7.24, and an average image standard deviation of 47.55, outperforming most related image fusion methods in both visual perception and quantitative metrics.
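The two figures quoted above, information entropy (7.24) and image standard deviation (47.55), are standard image statistics rather than thesis-specific quantities. Below is a minimal NumPy sketch of how they are commonly computed for an 8-bit grayscale fused image; the function names are illustrative, not taken from the thesis:

import numpy as np

def information_entropy(img: np.ndarray) -> float:
    """Shannon entropy (in bits) of an 8-bit grayscale image's histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()   # normalize the histogram into a probability mass
    p = p[p > 0]            # drop empty bins so log2 is well defined
    return float(-np.sum(p * np.log2(p)))

def image_std(img: np.ndarray) -> float:
    """Standard deviation of pixel intensities."""
    return float(np.std(img.astype(np.float64)))

A fused image with higher entropy carries more information in its gray-level distribution, and a larger standard deviation indicates higher overall contrast, which is why both are reported as averages over the TNO test images.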
Abstract:
Image fusion is an enhancement technique that aims to combine images obtained by different kinds of sensors to generate a robust and informative image, which facilitates subsequent processing and decision making. Image fusion has been widely applied in many fields such as computer vision, surveillance, medical imaging, remote sensing, and target recognition. Infrared and visual images provide different information about the same scene, and their fusion is a hot topic in the field of multi-sensor image fusion.
An ideal image fusion method should integrate the complete bright features of the infrared image and preserve the original visual information of the visual image without generating halos or fusion artifacts.
To achieve these goals, this thesis proposes a saliency extraction and feature enhancement image fusion method, named SEFE. First, the CLAHE image enhancement method is applied to the visual image to adjust its brightness and contrast. Next, we adopt the guided filter and the L0 filter to extract the edge details of the infrared image and the saliency maps of both the infrared and visual images. Then we calculate the weight maps of both images, highlighting the salient regions in the infrared image. Finally, we combine the different components and useful information to generate the fused image.
Experimental results show that the SEFE method produces fused images with better visual perception and achieves better performance measures than other competitive infrared and visual image fusion methods.
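As a rough illustration of the pipeline summarized above (a sketch, not the thesis's reference implementation), the main steps can be approximated with OpenCV building blocks. cv2.createCLAHE and cv2.ximgproc.guidedFilter are existing OpenCV (opencv-contrib-python) APIs; the saliency and weight-map steps below are deliberately simple stand-ins for the thesis's L0-smoothing-based saliency map and weight analysis:

import cv2
import numpy as np

def sefe_style_fusion(ir: np.ndarray, vis: np.ndarray) -> np.ndarray:
    """Crude SEFE-style fusion sketch for registered 8-bit grayscale inputs.

    Placeholder logic: CLAHE-enhanced visual image, guided-filter
    base/detail split of the infrared image, and an IR-priority weight map.
    """
    # Step 1: enhance brightness/contrast of the visual image with CLAHE.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    vis_enh = clahe.apply(vis).astype(np.float32) / 255.0

    # Step 2: guided filter splits the IR image into base and detail layers.
    ir_f = ir.astype(np.float32) / 255.0
    base = cv2.ximgproc.guidedFilter(guide=ir_f, src=ir_f, radius=8, eps=1e-2)
    detail = ir_f - base  # edge/texture information to inject at the end

    # Step 3: stand-in IR saliency (the thesis uses an improved L0-based
    # saliency map): deviation from the mean intensity, normalized to [0, 1].
    sal = np.abs(ir_f - ir_f.mean())
    w_ir = sal / (sal.max() + 1e-8)  # IR gets priority where it is salient
    w_vis = 1.0 - w_ir

    # Step 4: weighted combination plus IR detail injection.
    fused = w_ir * ir_f + w_vis * vis_enh + detail
    return np.clip(fused * 255.0, 0.0, 255.0).astype(np.uint8)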
Acknowledgments ii
Chinese Abstract iii
Abstract iv
Table of Contents v
List of Figures viii
Chapter 1 Introduction 1
1.1 Research Background and Motivation 1
1.2 Research Objectives 4
1.3 Thesis Outline 5
Chapter 2 Literature Review 6
2.1 Image Fusion Methods 6
2.1.1 Multi-Scale Transform Based Image Fusion 6
2.1.2 Saliency Based Image Fusion 7
2.1.3 Hybrid Model Based Image Fusion 7
2.2 Histogram Equalization 7
2.2.1 Adaptive Histogram Equalization 8
2.2.2 Contrast Limited Adaptive Histogram Equalization 8
2.3 Smoothing Filters 8
2.3.1 Guided Filter 9
2.3.2 L0 Smoothing Filter 10
2.4 Saliency Map Generation 10
2.5 Fusion Weight Analysis 11
2.6 Existing Image Fusion Methods 12
2.6.1 Hybrid Multi-Scale Decomposition Fusion (HMSD) 12
2.6.2 Guided Filtering with Detail Enhancement Fusion (GFCE) 13
2.6.3 Gradientlet Filter Fusion (GREPF) 13
2.6.4 Infrared Feature Enhancement and Visual Information Preservation Fusion (IFEVIP) 14
2.6.5 Saliency Three-Phase Image Fusion (STS) 15
Chapter 3 Research Design and Methods 16
3.1 Algorithm Design Concept and Architecture 16
3.2 Visual Image Processing 19
3.2.1 Visual Image Enhancement 19
3.2.2 Visual Image Saliency Map 20
3.3 Infrared Image Processing 21
3.3.1 Infrared Image Saliency Map 21
3.3.2 Infrared Image Texture Detail Extraction 21
3.4 Weight Analysis and Fusion 23
3.4.1 Fusion Weight Analysis 23
3.4.2 Fusion Rules 24
3.5 Image Quality Assessment Metrics 25
3.5.1 Information Entropy 25
3.5.2 Average Gradient 25
3.5.3 Image Standard Deviation 25
3.5.4 Gray-Level Co-occurrence Matrix Contrast 26
3.5.5 Visual Information Fidelity 29
Chapter 4 Experimental Results and Discussion 30
4.1 Experimental Setup 30
4.1.1 Experimental Environment 30
4.1.2 Dataset 30
4.1.3 Comparison Methods 31
4.2 Parameter Discussion 31
4.2.1 CLAHE Image Enhancement Parameters 31
4.2.2 Improved Saliency Map Generation Parameters 31
4.2.3 Weight Map Computation Parameters 32
4.3 Experimental Results 44
4.3.1 Visual Assessment of Fusion Results 44
4.3.2 Quantitative Metric Results 50
Chapter 5 Conclusions and Future Work 56
5.1 Conclusions 56
5.2 Future Work 57
References 58
[1] J. Ma, J. Zhao, Y. Ma, and J. Tian, "Non-rigid visible and infrared face registration via regularized Gaussian fields criterion," Pattern Recognition, vol. 48, no. 3, pp. 772-784, 2015.
[2] A. Toet, J. K. IJspeert, A. M. Waxman, and M. Aguilar, "Fusion of visible and thermal imagery improves situational awareness," Displays, vol. 18, no. 2, pp. 85-95, 1997.
[3] J. Kocić, N. Jovičić, and V. Drndarević, "Sensors and sensor fusion in autonomous vehicles," in 2018 26th Telecommunications Forum (TELFOR), 2018: IEEE, pp. 420-425.
[4] S. Li, X. Kang, L. Fang, J. Hu, and H. Yin, "Pixel-level image fusion: A survey of the state of the art," Information Fusion, vol. 33, pp. 100-112, 2017.
[5] Y. Liu, L. Wang, J. Cheng, C. Li, and X. Chen, "Multi-focus image fusion: A survey of the state of the art," Information Fusion, vol. 64, pp. 71-91, 2020.
[6] P. Chai, X. Luo, and Z. Zhang, "Image fusion using quaternion wavelet transform and multiple features," IEEE Access, vol. 5, pp. 6724-6734, 2017.
[7] A. P. James and B. V. Dasarathy, "Medical image fusion: A survey of the state of the art," Information Fusion, vol. 19, pp. 4-19, 2014.
[8] Y. Yang, L. Wu, S. Huang, W. Wan, and Y. Que, "Remote sensing image fusion based on adaptively weighted joint detail injection," IEEE Access, vol. 6, pp. 6849-6864, 2018.
[9] H. Ghassemian, "A review of remote sensing image fusion methods," Information Fusion, vol. 32, pp. 75-89, 2016.
[10] Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, "A comparative analysis of image fusion methods," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 6, pp. 1391-1402, 2005.
[11] R. A. Newcombe et al., "KinectFusion: Real-time dense surface mapping and tracking," in 2011 10th IEEE International Symposium on Mixed and Augmented Reality, 2011: IEEE, pp. 127-136.
[12] G. Xiao, D. P. Bavirisetti, G. Liu, and X. Zhang, Image Fusion. Springer Singapore, 2020.
[13] A. Toet, "TNO Image Fusion Dataset." [Online]. Available: https://figshare.com/articles/dataset/TNO_Image_Fusion_Dataset/1008029/1
[14] G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, "Digital photography with flash and no-flash image pairs," ACM Transactions on Graphics (TOG), vol. 23, no. 3, pp. 664-672, 2004.
[15] Y. Zhang, L. Zhang, X. Bai, and L. Zhang, "Infrared and visual image fusion through infrared feature extraction and visual information preservation," Infrared Physics & Technology, vol. 83, pp. 227-237, 2017.
[16] L. Meylan and S. Susstrunk, "High dynamic range image rendering with a retinex-based adaptive filter," IEEE Transactions on Image Processing, vol. 15, no. 9, pp. 2820-2830, 2006.
[17] J. Ma, Y. Ma, and C. Li, "Infrared and visible image fusion methods and applications: A survey," Information Fusion, vol. 45, pp. 153-178, 2019.
[18] Z. Zhou, M. Dong, X. Xie, and Z. Gao, "Fusion of infrared and visible images for night-vision context enhancement," Applied Optics, vol. 55, no. 23, pp. 6480-6490, 2016.
[19] Y. Zhou, K. Gao, Z. Dou, Z. Hua, and H. Wang, "Target-aware fusion of infrared and visible images," IEEE Access, vol. 6, pp. 79039-79049, 2018.
[20] J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, "FusionGAN: A generative adversarial network for infrared and visible image fusion," Information Fusion, vol. 48, pp. 11-26, 2019.
[21] J. Ma and Y. Zhou, "Infrared and visible image fusion via gradientlet filter," Computer Vision and Image Understanding, vol. 197, p. 103016, 2020.
[22] S. Li, X. Kang, and J. Hu, "Image fusion with guided filtering," IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2864-2875, 2013.
[23] S. M. Pizer et al., "Adaptive histogram equalization and its variations," Computer Vision, Graphics, and Image Processing, vol. 39, no. 3, pp. 355-368, 1987.
[24] A. M. Reza, "Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement," Journal of VLSI Signal Processing Systems for Signal, Image and Video Technology, vol. 38, no. 1, pp. 35-44, 2004.
[25] J. Ma, X. Fan, S. X. Yang, X. Zhang, and X. Zhu, "Contrast limited adaptive histogram equalization-based fusion in YIQ and HSI color spaces for underwater image enhancement," International Journal of Pattern Recognition and Artificial Intelligence, vol. 32, no. 07, p. 1854018, 2018.
[26] H. Ibrahim and N. S. P. Kong, "Brightness preserving dynamic histogram equalization for image contrast enhancement," IEEE Transactions on Consumer Electronics, vol. 53, no. 4, pp. 1752-1758, 2007.
[27] P. Gupta, J. Kumare, U. Singh, and R. Singh, "Histogram based image enhancement techniques: A survey," Int J Comput Sci Eng, vol. 5, no. 6, pp. 475-484, 2017.
[28] J.-Y. Kim, L.-S. Kim, and S.-H. Hwang, "An advanced contrast enhancement using partially overlapped sub-block histogram equalization," IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, no. 4, pp. 475-484, 2001.
[29] E. D. Pisano et al., "Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms," Journal of Digital Imaging, vol. 11, no. 4, p. 193, 1998.
[30] T. Jintasuttisak and S. Intajag, "Color retinal image enhancement by Rayleigh contrast-limited adaptive histogram equalization," in 2014 14th International Conference on Control, Automation and Systems (ICCAS 2014), 2014: IEEE, pp. 692-697.
[31] C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Sixth International Conference on Computer Vision, 1998: IEEE, pp. 839-846.
[32] Y. Zhang, D. Li, and W. Zhu, "Infrared and visible image fusion with hybrid image filtering," Mathematical Problems in Engineering, vol. 2020, 2020.
[33] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397-1409, 2012.
[34] K. He and J. Sun, "Fast guided filter," arXiv preprint arXiv:1505.00996, 2015.
[35] L. Xu, C. Lu, Y. Xu, and J. Jia, "Image smoothing via L0 gradient minimization," in Proceedings of the 2011 SIGGRAPH Asia Conference, 2011, pp. 1-12.
[36] K. Subr, C. Soler, and F. Durand, "Edge-preserving multiscale image decomposition based on local extrema," ACM Transactions on Graphics (TOG), vol. 28, no. 5, pp. 1-9, 2009.
[37] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," ACM Transactions on Graphics (TOG), vol. 27, no. 3, pp. 1-10, 2008.
[38] L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 11, pp. 1254-1259, 1998.
[39] J. Chen, K. Wu, Z. Cheng, and L. Luo, "A saliency-based multiscale approach for infrared and visible image fusion," Signal Processing, vol. 182, p. 107936, 2021.
[40] Z. Zhou, B. Wang, S. Li, and M. Dong, "Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters," Information Fusion, vol. 30, pp. 15-26, 2016.
[41] G. Cui, H. Feng, Z. Xu, Q. Li, and Y. Chen, "Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition," Optics Communications, vol. 341, pp. 199-209, 2015.
[42] R. M. Haralick, K. Shanmugam, and I. H. Dinstein, "Textural features for image classification," IEEE Transactions on Systems, Man, and Cybernetics, no. 6, pp. 610-621, 1973.
[43] Y. Han, Y. Cai, Y. Cao, and X. Xu, "A new image fusion performance metric based on visual information fidelity," Information Fusion, vol. 14, no. 2, pp. 127-135, 2013.
[44] W. Xue, L. Zhang, X. Mou, and A. C. Bovik, "Gradient magnitude similarity deviation: A highly efficient perceptual image quality index," IEEE Transactions on Image Processing, vol. 23, no. 2, pp. 684-695, 2013.