Student: 鍾孟哲
Student (English): CHUNG, MENG-CHE
Thesis title: 以深度神經網路作多重對焦影像融合
Thesis title (English): Multi-focus Image Fusion Using Deep Neural Networks
Advisor: 柳金章
Advisor (English): LEOU, JIN-JANG
Oral defense committee: 廖弘源、范國清、張傳育、柳金章
Oral defense committee (English): LIAO, HONG-YUAN; FAN, KUO-CHIN; CHANG, CHUAN-YU; LEOU, JIN-JANG
Oral defense date: 2020-07-30
Degree: Master's
Institution: 國立中正大學 (National Chung Cheng University)
Department: 資訊工程研究所 (Graduate Institute of Computer Science and Information Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2020
Academic year of graduation: 108 (2019-2020)
Language: English
Number of pages: 80
Chinese keywords: 多重對焦影像融合、膨脹卷積、卷積神經網路
English keywords: multi-focus image fusion; dilated convolution; convolutional neural network
Due to the limited depth of field of optical lenses, it is difficult for ordinary digital cameras to capture an image in which objects at different depths are all in focus: only the scene near the focal plane appears sharp, while scenes away from the focus are blurred. Multi-focus image fusion addresses this problem by fusing images of the same scene taken at different focus settings into a single all-in-focus image. In this thesis, a multi-focus image fusion approach based on a deep neural network is proposed. First, preprocessing produces the grayscale and Laplacian-filtered versions of all multi-focus images. The deep network is then used to extract multi-scale image features, and the probabilities it outputs are used to generate an initial focus map, from which binary classification yields the final decision map. Based on the final decision map, image fusion produces the final fused image. Experimental results show that the proposed approach outperforms five existing methods.
Due to the limited depth of field of optical lenses, it is difficult for general digital cameras to capture an image covering multiple depths of field: only the scene near the focus can be clearly displayed in the image, while scenes away from the focus are blurred. To address this problem, multi-focus image fusion has been proposed, which fuses several images of the same scene with different depths of field into one fused image. In this study, a multi-focus image fusion approach using deep neural networks is proposed. First, the gray-level version and the Laplacian-filtered image of each multi-focus color image are obtained. Next, a convolutional neural network (CNN) is employed to extract multi-scale image features, and the probabilities generated by the CNN are used to produce the initial focus maps. Binary classification then yields binary maps, which are refined by post-processing into the final decision maps. Finally, based on the final decision maps, the fused images are obtained by pixel-based image fusion. Based on the experimental results, the proposed approach outperforms five comparison approaches.
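To make the pipeline described in the abstract concrete, below is a minimal sketch in Python, assuming OpenCV, NumPy, and PyTorch. The FocusNet layer sizes, the probability comparison standing in for the binary classification step, and the median-filter post-processing are hypothetical illustrations only; the thesis's exact CNN architecture and post-processing are not reproduced here.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn


class FocusNet(nn.Module):
    """Toy fully convolutional network (NOT the thesis's model): 3x3
    convolutions with growing dilation rates aggregate multi-scale
    context, ending in a per-pixel focus probability in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=4, dilation=4), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


def preprocess(bgr):
    """Stack the grayscale and Laplacian-filtered versions of a color
    image as a 2-channel network input of shape (1, 2, H, W)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    lap = cv2.Laplacian(gray, cv2.CV_32F, ksize=3)
    return torch.from_numpy(np.stack([gray, lap]))[None]


def fuse(img_a, img_b, model):
    """Focus probabilities -> binary decision map -> pixel-based fusion."""
    with torch.no_grad():
        prob_a = model(preprocess(img_a))[0, 0].numpy()
        prob_b = model(preprocess(img_b))[0, 0].numpy()
    # Binary classification: 1 where image A looks more in focus.
    decision = (prob_a >= prob_b).astype(np.uint8) * 255
    # Hypothetical stand-in for the thesis's post-processing of the
    # decision map: a median filter removes small misclassified regions.
    decision = cv2.medianBlur(decision, 5).astype(np.float32) / 255.0
    d = decision[..., None]  # broadcast the map over the color channels
    return (d * img_a + (1.0 - d) * img_b).astype(np.uint8)


# Example: fused = fuse(cv2.imread("near.png"), cv2.imread("far.png"),
#                       FocusNet())
```

Dilated convolutions enlarge the receptive field without downsampling, which is how a small stack of layers can still capture the multi-scale context the abstract refers to.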
ABSTRACT (CHINESE)
ABSTRACT
ACKNOWLEDGEMENTS
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
CHAPTER 1  INTRODUCTION
1.1. Motivation
1.2. Survey of Related Research
1.2.1. Transform domain based multi-focus image fusion
1.2.2. Spatial domain based multi-focus image fusion
1.2.3. Deep learning based multi-focus image fusion
1.3. Overview of Proposed Approach
1.4. Thesis Organization
CHAPTER 2  PROPOSED MULTI-FOCUS IMAGE FUSION APPROACH
2.1. System Architecture
2.2. Data Preprocessing
2.3. Convolutional Neural Network
2.4. The Proposed CNN Architecture
2.5. Image Fusion
2.6. Training and CNN Architecture
2.7. Data Augmentation
CHAPTER 3  EXPERIMENTAL RESULTS
3.1. System Setup
3.2. Quality Evaluation
3.2.1. Objective evaluation
3.2.2. Subjective evaluation
CHAPTER 4  CONCLUSIONS
REFERENCES

Electronic full text: publicly available online from 2022-08-24.