臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: 吳文豪 (Wen-hao Wu)
Title: 使用彩色表現模型進行畫素點為基礎之多焦點影像融合
Title (English): Pixel-based Multi-focused Image Fusion using Color Appearance Model
Advisor: 謝禎冏 (Chen-Chiung Hsieh)
Oral defense committee: 謝禎冏 (Chen-Chiung Hsieh)
Defense date: 2015-03-25
Degree: Master's
Institution: Tatung University (大同大學)
Department: Department of Computer Science and Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2015
Graduating academic year: 103 (2014–2015)
Language: Chinese
Pages: 72
Keywords (Chinese): 融合的遮罩; 多點對焦; 影像融合; 色彩飽和度
Keywords (English): multi-focus images; image fusion; color saturation; fusion mask
Cited: 0 · Views: 89 · Downloads: 22 · Bookmarked: 0
Abstract (Chinese, translated):

Multi-focus image fusion combines multiple photographs of the same scene, each focused on different objects, into a single photograph in which several objects are in focus simultaneously. Most existing multi-focus fusion techniques operate only on grayscale images. This study analyzes and processes color images directly, using color saturation, defined as chrominance divided by luminance, as the basis of computation. To measure whether a pixel is in focus, we do not use the traditional rectangular mask: we observed that, within a rectangular block, the pixels that actually contribute to the focus computation form a shape resembling the star-burst pattern produced by in-focus imaging, so we call it a star-light mask and use it to measure each pixel's degree of focus. Furthermore, the smallest processing unit in this study is a single pixel; compared with block-based or region-based fusion, this is less likely to overlook small objects, and each pixel value in the fused image is kept as close as possible to the corresponding pixel value in a source image, so the result stays faithful to the originals. Finally, in practical settings, camera shake during shooting, distortion from the optical assembly, and objects moved by wind can displace or deform the same object across source images, so registration is performed before fusion. To evaluate the fused results, we manually constructed ground-truth masks and used them to produce the best possible fused images; we compared the final masks produced by our method against the manual masks to compute correct and error rates, and computed PSNR against the best fused images. Compared with state-of-the-art methods on two sets of color images, our method achieved an average correct rate of about 80% and a PSNR of about 42 dB; on two sets of grayscale images, about 83% and 40 dB. It outperforms most of the referenced methods, confirming that color saturation can indeed be used for image fusion.
Abstract (English):

The goal of multi-focus imaging is to fuse multiple images of the same scene, taken at different focal distances, into one picture in which several objects are in focus simultaneously. Most existing multi-focus fusion techniques can process only grayscale images. This thesis analyzes and processes color images directly, using color saturation, computed as chrominance divided by luminance, as the focus measure, rather than the traditional rectangular window mask. When we analyzed the pixels that contribute most to the focus computation, their shape resembled a star-burst pattern; we therefore call the resulting detector star-light focus detection and use it to measure the focus status of each pixel. In addition, the minimum processing unit is a single pixel: pixel-based fusion preserves more detail than block-based or region-based fusion, and each pixel value in the fused image is kept as close as possible to the corresponding value in a source image, so the result remains faithful to the originals. Finally, in real applications, hand vibration during shooting, distortion introduced by the optical assembly, and wind-blown objects in the scene can displace or deform the same object across source images, so registration must be performed before fusion. To verify the fused results, we manually fused reference mask images, used them to produce the best possible fused images, and computed PSNR; we also compared the final masks produced by our method against the manual masks to obtain correct and error rates. Against state-of-the-art methods on two groups of color images, the mean correct rate and PSNR were about 80% and 42 dB, respectively; on two groups of grayscale images, about 83% and 40 dB. The proposed method outperforms most of the referenced methods, demonstrating that color saturation and star-light focus detection can be applied reliably to image fusion.
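The pipeline the abstract describes (a saturation-based focus measure, a per-pixel decision mask, fusion, and PSNR evaluation) can be sketched roughly as follows. This is an illustrative reconstruction, not the thesis's implementation: it uses a plain square window where the thesis uses its star-light mask, a crude chroma/luminance proxy for saturation, and hypothetical function names; it assumes NumPy.

```python
import numpy as np

def saturation(img_rgb):
    """Per-pixel saturation proxy: chroma divided by luminance (hedged stand-in
    for the thesis's color-appearance-model computation)."""
    img = img_rgb.astype(np.float64)
    luma = img.mean(axis=2) + 1e-6            # crude luminance, avoid div by 0
    chroma = img.max(axis=2) - img.min(axis=2)
    return chroma / luma

def focus_measure(img_rgb, win=5):
    """Local variance of saturation in a square window: larger where the image
    is in focus. The thesis replaces this square window with its star-light mask."""
    s = saturation(img_rgb)
    pad = win // 2
    sp = np.pad(s, pad, mode="reflect")
    h, w = s.shape
    out = np.empty_like(s)
    for y in range(h):
        for x in range(w):
            out[y, x] = sp[y:y + win, x:x + win].var()
    return out

def fuse(img_a, img_b, win=5):
    """Per-pixel fusion: copy each pixel from whichever source image has the
    stronger focus measure there, so fused values match a source exactly."""
    mask = focus_measure(img_a, win) >= focus_measure(img_b, win)
    return np.where(mask[..., None], img_a, img_b), mask

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB against an 8-bit reference."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```

In the thesis's evaluation, the binary mask from `fuse` would be compared against a manually drawn ground-truth mask to obtain correct/error rates, and `psnr` scored against the manually produced best fusion; registration would run before `fuse` on real photographs.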
Abstract (Chinese) iii
Abstract (English) iv
Contents v
List of Tables vii
List of Figures viii
Chapter 1: Motivation 1
Chapter 2: Related Work 3
Chapter 3: System Architecture 8
  3.1 Overview of Color Saturation 8
  3.2 System Architecture 15
  3.3 Registration 16
  3.4 Star-light Focus Detection 17
  3.5 Focus Measurement 19
  3.6 Mask Arrangement 21
  3.7 Image Fusion 23
Chapter 4: Experimental Results 25
  4.1 Performance of the Proposed Method 25
  4.2 Additional Cases 26
  4.3 Effect of Star-light Focus Detection 36
  4.4 Effect of Registration 43
  4.5 Comparison with Existing Methods 45
  4.6 Objective Measurement Comparison 47
  4.7 Measurement Experiments on Star-light Focus Detection 57
Chapter 5: Conclusions and Future Work 67
  5.1 Conclusions 67
  5.2 Future Work 68
References 69