National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Detailed Record

Author: 陳建呈
Author (English): Chien-Cheng Chen
Title: 以深度差異進行單張彩色影像去雨
Title (English): Visual Depth Guided Color Image Rain Streaks Removal Using Sparse Coding
Advisor: 陳敦裕
Advisor (English): Duan-Yu Chen
Committee Members: 謝君偉, 黃于飛
Committee Members (English): Jun-Wei Hsieh, Yu-Fei Huang
Oral Defense Date: 2013-06-04
Degree: Master
Institution: 元智大學 (Yuan Ze University)
Department: 電機工程學系 (Electrical Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2013
Graduation Academic Year: 101 (ROC calendar)
Language: English
Pages: 47
Keywords (Chinese): 去雨, 彩色圖片, 稀疏表達, 字典學習, 影像拆解, 深度
Keywords (English): rain removal, color image, sparse representation, dictionary learning, image decomposition, difference of depth
Usage statistics:
  • Cited by: 0
  • Views: 272
  • Rating:
  • Downloads: 0
  • Saved to bibliography: 0
Rain removal from a single image has long been a highly challenging problem, because a single image provides no temporal motion information. We propose an image-processing approach that combines multiple features to remove rain from a single image. The input image is first coarsely separated into a low-frequency image and a high-frequency image with a guided filter; the high-frequency image is then further decomposed into a rain component and a non-rain component using sparse representation and dictionary learning. Besides the conventional HOG feature, two additional features, DoD and Eigen color, are introduced to improve the efficiency and accuracy of the classification. Compared with [12], the proposed method removes rain more effectively and yields visually clearer results.
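
The first step described above is a guided-filter split of the input into low- and high-frequency layers [14]. The snippet below is a minimal sketch of that step only, assuming a grayscale float image in [0, 1] and self-guided filtering; the radius and eps values are illustrative choices, not the settings used in the thesis.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=15, eps=1e-2):
    """Edge-preserving smoothing of src, guided by guide (He et al. [14])."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I * mean_I            # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p           # local covariance guide/src
    a = cov_Ip / (var_I + eps)                   # per-window linear coefficients
    b = mean_p - a * mean_I
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * guide + mean_b               # q = a * I + b, window-averaged

def split_frequency(gray):
    """Coarse split of a rainy image into low- and high-frequency parts."""
    low = guided_filter(gray, gray)              # smooth base layer, largely rain-free
    high = gray - low                            # detail layer: rain streaks + textures
    return low, high
```

Rain streaks, being thin high-contrast structures, land in the high-frequency layer together with other fine textures, which is why the later stages must separate the two.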
Rain removal from a single color image is a challenging problem since no motion information can be obtained from successive frames. In this work, an input image is first decomposed into a low-frequency part and a high-frequency part using a guided image filter, so that the rain streaks fall into the high-frequency part together with non-rain textures. The high-frequency part is then decomposed into a “rain component” and a “non-rain component” by dictionary learning and sparse coding. To separate rain streaks from the high-frequency part, a hybrid feature set is exploited that includes histogram of gradients (HoG), difference of depth (DoD), and Eigen color. With this hybrid feature set, most rain streaks can be removed while the non-rain components are preserved and enhanced. Compared with the state-of-the-art work, the proposed approach is the first to solve the problem on color images and achieves better results: the rain components are removed more effectively and the visual quality of the restored images is improved.
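
As a rough illustration of the second stage, the sketch below learns an over-complete dictionary from patches of the high-frequency layer produced by the previous sketch, labels atoms as rain or non-rain using only an HOG-based orientation test (a stand-in for the full hybrid feature set with DoD and Eigen color), and reconstructs the non-rain component from the remaining atoms. The patch size, dictionary size, sparsity level, and 0.5 threshold are illustrative assumptions, not the parameters used in the thesis.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)
from skimage.feature import hog

PATCH = (16, 16)  # illustrative patch size

def remove_rain_from_high(high, n_atoms=256):
    """Split the high-frequency layer into rain / non-rain and keep the latter."""
    # 1) Learn an over-complete dictionary from random high-frequency patches.
    train = extract_patches_2d(high, PATCH, max_patches=20000, random_state=0)
    X = train.reshape(len(train), -1)
    X = X - X.mean(axis=1, keepdims=True)
    dico = MiniBatchDictionaryLearning(
        n_components=n_atoms, alpha=1.0,
        transform_algorithm="omp", transform_n_nonzero_coefs=5,
        random_state=0).fit(X)
    atoms = dico.components_

    # 2) Label an atom as "rain" when its HOG energy concentrates in a single
    #    orientation bin (rain streaks are thin, uniformly oriented edges).
    is_rain = np.zeros(n_atoms, dtype=bool)
    for i, atom in enumerate(atoms):
        patch = atom.reshape(PATCH)
        patch = patch - patch.min()              # feed hog a plain intensity map
        h = hog(patch, orientations=9,
                pixels_per_cell=(8, 8), cells_per_block=(1, 1))
        ori = h.reshape(-1, 9).sum(axis=0)       # pool the cells into 9 bins
        is_rain[i] = ori.max() / (ori.sum() + 1e-8) > 0.5

    # 3) Sparse-code every patch, zero the coefficients of rain atoms, rebuild.
    patches = extract_patches_2d(high, PATCH)
    Y = patches.reshape(len(patches), -1)
    means = Y.mean(axis=1, keepdims=True)
    codes = dico.transform(Y - means)
    codes[:, is_rain] = 0.0                      # drop the rain component
    non_rain = (codes @ atoms + means).reshape(-1, *PATCH)
    return reconstruct_from_patches_2d(non_rain, high.shape)
```

In the thesis the atom classification combines HoG with DoD and Eigen color; the single HOG test above only marks where that decision plugs into the pipeline.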
Chinese Abstract ………………………………………………… i
Abstract ……………………………………………………………… ii
Index ………………………………………………………………… iii
List of Tables and Figures ……………………………………… iv
1. Introduction …………………………………………………… 1
2. Difference of Depth (DoD) …………………………………… 4
3. Rain Streaks Removal Using Hybrid Feature Set ………… 8
3.1 Low-frequency ………………………………………………… 10
3.2 MCA-based Image Decomposition …………………………… 11
3.3 Dictionary Learning and Partition ……………………… 13
3.4 Identify Misclassified Rain Streaks by DoD …………… 14
3.5 Sparse Coefficients ………………………………………… 17
4. Rain Removal and Restoration of High Frequency Non-rain Components ……… 19
4.1 Automatically Identifying Dictionary with Rain Components Included …… 19
4.2 Rain Removal …………………………………………………… 21
4.3 Restoration of High Frequency Non-rain Component by DoD ………………… 22
4.4 Restoration of High Frequency Non-rain Component by Eigen Color ……… 23
5. Experimental Results ………………………………………… 27
5.1 Results of Rain Streaks Removal ………………………… 27
5.2 Time Complexity Analysis …………………………………… 37
6. Conclusions ……………………………………………………… 30
7. References ……………………………………………………… 43
[1] P. C. Barnum, S. Narasimhan, and T. Kanade, “Analysis of rain and snow in frequency space,” Int. J. Comput. Vis., vol. 86, no. 2–3, pp. 256–274, 2010.
[2] K. Garg and S. K. Nayar, “Detection and removal of rain from videos,” Proc. IEEE Conf. Comput. Vis. Pattern Recognit., June 2004, vol. 1, pp. 528–535.
[3] K. Garg and S. K. Nayar, “When does a camera see rain?” Proc. of IEEE Int. Conf. Comput. Vis., Oct. 2005, vol. 2, pp. 1067-1074.
[4] K. Garg and S. K. Nayar, “Vision and rain,” Int. J. Comput. Vis., vol. 75, no. 1, pp. 3–27, 2007.
[5] K. Garg and S. K. Nayar, “Photorealistic rendering of rain streaks,” ACM Trans. on Graphics, vol. 25, no. 3, pp. 996-1002, July 2006.
[6] X. Zhang, H. Li, Y. Qi, W. K. Leow, and T. K. Ng, “Rain removal in video by combining temporal and chromatic properties,” Proc. IEEE Int. Conf. Multimedia Expo, Toronto, Ont. Canada, July 2006, pp. 461–464.
[7] N. Brewer and N. Liu, “Using the shape characteristics of rain to identify and remove rain from video,” Lecture Notes in Computer Science, vol. 5342/2008, pp. 451–458, 2008.
[8] J. Bossu, N. Hautière, and J. P. Tarel, “Rain or snow detection in image sequences through use of a histogram of orientation of streaks,” Int. J. Comput. Vis., vol. 93, no. 3, pp. 348–367, July 2011.
[9] M. S. Shehata, J. Cai, W. M. Badawy, T. W. Burr, M. S. Pervez, R. J. Johannesson, and A. Radmanesh, “Video-based automatic incident detection for smart roads: the outdoor environmental challenges regarding false alarms,” IEEE Trans. Intell. Transportation Syst., vol. 9, no. 2, pp. 349–360, June 2008.
[10] M. Roser and A. Geiger, “Video-based raindrop detection for improved image registration,” Proc. IEEE Int. Conf. Comput. Vis. Workshops, Kyoto, Sept. 2009, pp. 570–577.
[11] J. C. Halimeh and M. Roser, “Raindrop detection on car windshields using geometric-photometric environment construction and intensity-based correlation,” Proc. IEEE Intell. Vehicles Symp., Xi'an, China, June 2009, pp. 610–615.
[12] L. W. Kang and C. W. Lin, “Automatic single-image-based rain streaks removal via image decomposition,” IEEE Trans. Image Process., 2011.
[13] O. Le Meur, “Prediction of the inter-observer visual congruency (IOVC) and application to image ranking,” Proc. ACM Multimedia, 2011, pp. 373–382.
[14] K. He, J. Sun, and X. Tang, “Guided image filtering,” Proc. ECCV, 2010.
[15] C. Tomasi, and R. Manduchi, “Bilateral filtering for gray and color images,” Proc. ICCV, 1998.
[16] A. Levin, D. Lischinski, and Y. Weiss, “A closed form solution to natural image matting,” Proc. CVPR, 2006.
[17] J. M. Fadili, J. L. Starck, J. Bobin, and Y. Moudden, “Image decomposition and separation using sparse representations: an overview,” Proc. IEEE, vol. 98, no. 6, pp. 983–994, June 2010.
[18] J. M. Fadili, J. L. Starck, M. Elad, and D. L. Donoho, “MCALab: reproducible research in signal and image decomposition and inpainting,” IEEE Computing in Science & Engineering, vol. 12, no. 1, pp. 44–63, 2010.
[19] J. Bobin, J. L. Starck, J. M. Fadili, Y. Moudden, and D. L. Donoho, “Morphological component analysis: an adaptive thresholding strategy,” IEEE Trans. Image Process., vol. 16, no. 11, pp. 2675–2681, Nov. 2007.
[20] G. Peyré, J. Fadili, and J. L. Starck, “Learning adapted dictionaries for geometry and texture separation,” Proc. SPIE, vol. 6701, 2007.
[21] J. L. Starck, M. Elad, and D. L. Donoho, “Image decomposition via the combination of sparse representations and a variational approach,” IEEE Trans. Image Process., vol. 14, no. 10, pp. 1570–1582, Oct. 2005.
[22] M. Aharon, M. Elad, and A. M. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. Signal Process., vol. 54, 2006.
[23] D. L. Donoho, “Compressed sensing,” IEEE Trans. Info. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
[24] A. M. Bruckstein, D. L. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” SIAM Rev., vol. 51, no. 1, pp. 34–81, Feb. 2009.
[25] J. Mairal, F. Bach, J. Ponce, and G. Sapiro, “Online learning for matrix factorization and sparse coding,” J. Mach. Learn. Res., vol. 11, pp. 19–60, 2010.
[26] O. Ludwig, D. Delgado, V. Goncalves, and U. Nunes, “Trainable classifier-fusion schemes: an application to pedestrian detection,” Proc. IEEE Int. Conf. Intell. Transportation Syst., St. Louis, MO, USA, Oct. 2009, pp. 1–6.
[27] Y. Luo and X. Tang, “Photo and video quality evaluation: focusing on the subject,” Proc. ECCV, pp. 386–399, 2008.
[28] D. Y. Chen, K. R. Chen, and Y. W. Wang, “Real-time dynamic vehicle detection on resource-limited mobile platform,” IET Computer Vision, vol. 7, no. 2, Apr. 2013.
[29] L. W. Tsai, J. W. Hsieh, and K. C. Fan, “Vehicle detection using normalized color and edge map,” IEEE Trans. Image Process., vol. 16, no. 3, pp. 850–864, Mar. 2007.
[30] L. W. Kang, C. Y. Hsu, H. W. Chen, C. S. Lu, C. Y. Lin, and S. C. Pei, “Feature-based sparse representation for image similarity assessment,” IEEE Trans. Multimedia, vol. 13, no. 5, pp. 1019–1030, Oct. 2011.
[31] S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Trans. Signal Process., vol. 57, no. 7, pp. 2479–2493, Jul. 2009.