[1] P. C. Barnum, S. Narasimhan, and T. Kanade, “Analysis of rain and snow in frequency space,” Int. J. Comput. Vis., vol. 86, no. 2–3, pp. 256–274, 2010.
[2] K. Garg and S. K. Nayar, “Detection and removal of rain from videos,” Proc. IEEE Conf. Comput. Vis. Pattern Recognit., June 2004, vol. 1, pp. 528–535.
[3] K. Garg and S. K. Nayar, “When does a camera see rain?” Proc. IEEE Int. Conf. Comput. Vis., Oct. 2005, vol. 2, pp. 1067–1074.
[4] K. Garg and S. K. Nayar, “Vision and rain,” Int. J. Comput. Vis., vol. 75, no. 1, pp. 3–27, 2007.
[5] K. Garg and S. K. Nayar, “Photorealistic rendering of rain streaks,” ACM Trans. Graphics, vol. 25, no. 3, pp. 996–1002, July 2006.
[6] X. Zhang, H. Li, Y. Qi, W. K. Leow, and T. K. Ng, “Rain removal in video by combining temporal and chromatic properties,” Proc. IEEE Int. Conf. Multimedia Expo, Toronto, ON, Canada, July 2006, pp. 461–464.
[7] N. Brewer and N. Liu, “Using the shape characteristics of rain to identify and remove rain from video,” Lecture Notes in Computer Science, vol. 5342, pp. 451–458, 2008.
[8] J. Bossu, N. Hautière, and J. P. Tarel, “Rain or snow detection in image sequences through use of a histogram of orientation of streaks,” Int. J. Comput. Vis., vol. 93, no. 3, pp. 348–367, July 2011.
[9] M. S. Shehata, J. Cai, W. M. Badawy, T. W. Burr, M. S. Pervez, R. J. Johannesson, and A. Radmanesh, “Video-based automatic incident detection for smart roads: the outdoor environmental challenges regarding false alarms,” IEEE Trans. Intell. Transportation Syst., vol. 9, no. 2, pp. 349–360, June 2008.
[10] M. Roser and A. Geiger, “Video-based raindrop detection for improved image registration,” Proc. IEEE Int. Conf. Comput. Vis. Workshops, Kyoto, Japan, Sept. 2009, pp. 570–577.
[11] J. C. Halimeh and M. Roser, “Raindrop detection on car windshields using geometric-photometric environment construction and intensity-based correlation,” Proc. IEEE Intell. Vehicles Symp., Xi'an, China, June 2009, pp. 610–615.
[12] L. W. Kang and C. W. Lin, “Automatic single-image-based rain streaks removal via image decomposition,” IEEE Trans. Image Process., 2011.
[13] O. Le Meur, “Prediction of the inter-observer visual congruency (IOVC) and application to image ranking,” Proc. ACM Multimedia, 2011, pp. 373–382.
[14] K. He, J. Sun, and X. Tang, “Guided image filtering,” Proc. ECCV, 2010.
[15] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” Proc. ICCV, 1998.
[16] A. Levin, D. Lischinski, and Y. Weiss, “A closed form solution to natural image matting,” Proc. CVPR, 2006.
[17] J. M. Fadili, J. L. Starck, J. Bobin, and Y. Moudden, “Image decomposition and separation using sparse representations: an overview,” Proc. IEEE, vol. 98, no. 6, pp. 983–994, June 2010.
[18] J. M. Fadili, J. L. Starck, M. Elad, and D. L. Donoho, “MCALab: reproducible research in signal and image decomposition and inpainting,” IEEE Computing in Science & Engineering, vol. 12, no. 1, pp. 44–63, 2010.
[19] J. Bobin, J. L. Starck, J. M. Fadili, Y. Moudden, and D. L. Donoho, “Morphological component analysis: an adaptive thresholding strategy,” IEEE Trans. Image Process., vol. 16, no. 11, pp. 2675–2681, Nov. 2007.
[20] G. Peyré, J. Fadili, and J. L. Starck, “Learning adapted dictionaries for geometry and texture separation,” Proc. SPIE, vol. 6701, 2007.
[21] J. L. Starck, M. Elad, and D. L. Donoho, “Image decomposition via the combination of sparse representations and a variational approach,” IEEE Trans. Image Process., vol. 14, no. 10, pp. 1570–1582, Oct. 2005.
[22] M. Aharon, M. Elad, and A. M. Bruckstein, “K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. Signal Process., vol. 54, no. 11, pp. 4311–4322, Nov. 2006.
[23] D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
[24] A. M. Bruckstein, D. L. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” SIAM Rev., vol. 51, no. 1, pp. 34–81, Feb. 2009.
[25] J. Mairal, F. Bach, J. Ponce, and G. Sapiro, “Online learning for matrix factorization and sparse coding,” J. Mach. Learn. Res., vol. 11, pp. 19–60, 2010.
[26] O. Ludwig, D. Delgado, V. Goncalves, and U. Nunes, “Trainable classifier-fusion schemes: an application to pedestrian detection,” Proc. IEEE Int. Conf. Intell. Transportation Syst., St. Louis, MO, USA, Oct. 2009, pp. 1–6.
[27] Y. Luo and X. Tang, “Photo and video quality evaluation: focusing on the subject,” Proc. ECCV, pp. 386–399, 2008.
[28] D. Y. Chen, K. R. Chen, and Y. W. Wang, “Real-time dynamic vehicle detection on resource-limited mobile platform,” IET Comput. Vis., vol. 7, no. 2, Apr. 2013.
[29] L. W. Tsai, J. W. Hsieh, and K. C. Fan, “Vehicle detection using normalized color and edge map,” IEEE Trans. Image Process., vol. 16, no. 3, pp. 850–864, Mar. 2007.
[30] L. W. Kang, C. Y. Hsu, H. W. Chen, C. S. Lu, C. Y. Lin, and S. C. Pei, “Feature-based sparse representation for image similarity assessment,” IEEE Trans. Multimedia, vol. 13, no. 5, pp. 1019–1030, Oct. 2011.
[31] S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Trans. Signal Process., vol. 57, no. 7, pp. 2479–2493, July 2009.