|
[1] A. Colombari and A. Fusiello, “Patch-based background initialization in heavily cluttered video,” IEEE Transactions on Image Processing, vol. 19, no. 4, pp. 926–933, 2010. [2] C.-C. Chen and J. K. Aggarwal, “An adaptive background model initialization algorithm with objects moving at different depths,” IEEE International Conference on Image Processing, pp. 2664 – 2667, 2008. [3] T. Georgiev, “Photoshop healing brush: a tool for seamless cloning,” Workshop on Applications of Computer Vission (ECCV), pp. 1–8, 2004. [4] L. Yang, H. Cheng, J. Su, and X. Li, “Pixel-to-model distance for robust background reconstruction,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 26, no. 5, pp. 903–916, 2016. [5] C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” IEEE International Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 246–252, 1999. [6] S. Li, X. Kang, and J. Hu, “Image fusion with guided filtering,” IEEE Transactions on Image Processing., vol. 22, no. 7, pp. 2864–2875, 2013. [7] S. Li, X. Kang, J. Hu, and B. Yang, “Image matting for fusion of multi-focus images in dynamic scenes,” Information Fusion., vol. 14, no. 2, pp. 147–162, 2013. [8] H. Li, B. Manjunath, and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical Models and Image Processing., vol. 57, no. 3, pp. 235–245, 1995. [9] W. Wang and F. Chang, “A multi-focus image fusion method based on laplacian pyramid,” Journal of Computers., vol. 6, no. 12, pp. 2559–2566, 2011. [10] J. Tian and L. Chen, “Adaptive multi-focus image fusion using a wavelet-based statistical sharpness measure,” Signal Processing., vol. 92, no. 9, pp. 2137–2146, 2012. [11] H. Zhao, Z. Shang, Y. Y. Tang, and B. Fang, “Multi-focus image fusion based on the neighbor distance,” Pattern Recognition., vol. 46, no. 3, pp. 1002–1011, 2013. [12] S. Pertuz, D. Puig, M. A. Garcia, and A. Fusiello, “Generation of all-in-focus images by noise-robust selective fusion of limited depth-of-field images,” IEEE Transactions on Image Processing., vol. 22, no. 3, pp. 1242–1251, 2013. [13] K.-L. Hua, H.-C. Wang, A. H. Rusdi, and S.-Y. Jiang, “A novel multi-focus image fusion algorithm based on random walks,” Journal of Visual Communication and Image Representation, vol. 25, no. 5, pp. 951–962, 2014. [14] L. Cao, L. Jin, H. Tao, G. Li, Z. Zhuang, and Y. Zhang, “Multi-focus image fusion based on spatial frequency in discrete cosine transform domain,” Signal Processing Letters., vol. 22, no. 2, pp. 220–224, 2015. [15] W.-H. Cheng, C.-W. Wang, and J.-L. Wu, “Video adaptation for small display based on content recomposition,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 1, pp. 43–58, 2007. [16] R. Kapoor and A. Dhamija, “Fast tracking algorithm using modified potential function,” IET Computer Vision, vol. 6, no. 2, pp. 111–120, 2012. [17] S.-C. Huang and B.-H. Do, “Radial basis function based neural network for motion detection in dynamic scenes,” IEEE Transactions on Cybernetics, vol. 44, no. 1, pp. 114–125, 2014. [18] P. Chiranjeevi and S. Sengupta, “Detection of moving objects using multi-channel kernel fuzzy correlogram based background subtraction,” IEEE Transactions on Cybernetics, vol. 44, no. 6, pp. 870–881, 2014. [19] O. Barnich and M. Van Droogenbroeck, “Vibe: A universal background subtraction algorithm for video sequences,” IEEE Transactions on Image Processing, vol. 20, no. 6, pp. 1709–1724, 2011. [20] N. Liu, H. Wu, and L. Lin, “Hierarchical ensemble of background models for ptz-based video surveillance,” IEEE Transactions on Cybernetics, vol. 45, no. 1, pp. 89–102, 2015. [21] L. Lin, Y. Lu, C. Li, H. Cheng, and W. Zuo, “Detection-free multiobject tracking by reconfigurable inference with bundle representations,” IEEE Transactions on Cybernetics, vol. 46, no. 11, pp. 2447–2458, 2016. [22] Change Detection Workshop. http://changedetection.net/. [23] N. Goyette, P.-M. Jodoin, F. Porikli, J. Konrad, and P. Ishwar, “Changedetection.net: A new change detection benchmark dataset,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 1–8, 2012. [24] Y. Wang, P.-M. Jodoin, F. Porikli, J. Konrad, Y. Benezeth, and P. Ishwar, “Cdnet 2014: An expanded change detection benchmark dataset,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 387–394, 2014. [25] P. Chiranjeevi and S. Sengupta, “Interval-valued model level fuzzy aggregation-based background subtraction,” IEEE Transactions on Cybernetics, pp. 1–12, 2016. [26] X. Cao, L. Yang, and X. Guo, “Total variation regularized rpca for irregularly moving object detection under dynamic background,” IEEE Transactions on Cybernetics, vol. 46, no. 4, pp. 1014–1027, 2016. [27] W.-H. Cheng, C.-W. Hsieh, S.-K. Lin, C.-W. Wang, and J.-L. Wu, “Robust algorithm for exemplarbased image inpainting,” Proc. Int. Conf. Computer Graphics Imaging and Vision, pp. 64–69, 2005. [28] D. Gutchess, M. Trajkovics, E. Cohen-Solal, D. Lyons, and A. Jain, “A background model initialization algorithm for video surveillance,” IEEE International Conference on Computer Vision, vol. 1, pp. 733–740, 2001. [29] W. Long and Y.-H. Yang, “Stationary background generation: an alternative to the difference of two images,” Pattern Recognition, vol. 23, no. 12, pp. 1351–1359, 1990. [30] V. Reddy, C. Sanderson, and B. C. Lovell, “An efficient and robust sequential algorithm for background estimation in video surveillance,” IEEE International Conference on Image Processing, pp. 1109–1112, 2009. [31] V. Reddy, C. Sanderson, and B. C. Lovell, “A low-complexity algorithm for static background estimation from cluttered image sequences in surveillance contexts,” Journal on Image and Video Processing, 2011. [32] D. Ortego, J. C. SanMiguel, and J. M. Martínez, “Rejection based multipath reconstruction for background estimation in video sequences with stationary objects,” Computer Vision and Image Understanding, vol. 147, no. C, pp. 23–37, 2016. [33] Scene Background Modeling and Initialization. http://sbmi2015.na.icar.cnr.it/. [34] L. Maddalena and A. Petrosino, Background Modeling and Foreground Detection for Video Surveillance, ch. Background Model Initialization for Static Cameras, pp. 1–16. Chapman and Hall/CRC, 2014. [35] K. Toyama, J. Krumm, B. Brumitt, and B. Meyers, “Wallflower: Principles and practice of background maintenance,” IEEE International Conference on Computer Vision, vol. 1, pp. 255–261, 1999. [36] A. Sobral, T. Bouwmans, and E. Zahzah, “Comparison of matrix completion algorithms for background initialization in videos,” SBMI 2015 Workshop in conjunction with ICIAP 2015, pp. 510–518, 2015. [37] Scene Background Initialization (SBI) dataset. http://sbmi2015.na.icar.cnr.it/SBIdataset.html. [38] L. Maddalena and A. Petrosino, “Towards benchmarking scene background initialization,” SBMI 2015 Workshop in conjunction with ICIAP 2015, pp. 469–476, 2015. [39] A. Criminisi, P. Pérez, and K. Toyama, “Region filling and object removal by exemplar-based image inpainting,” IEEE Transactions on Image Processing, vol. 13, no. 9, pp. 1200–1212, 2004. [40] M. Bertalmio, L. Vese, G. Sapiro, and S. Osher, “Simultaneous structure and texture image inpainting,” IEEE Transactions on Image Processing, vol. 12, no. 8, pp. 882–889, 2004. [41] M. Kumar and S. Dass, “A total variation-based algorithm for pixel-level image fusion,” IEEE Transactions on Image Processing, vol. 18, no. 9, pp. 2137–2143, 2009. [42] S. Zheng, W. Shi, J. Liu, G. Zhu, and J.-W. Tian, “Multisource image fusion method using support value transform,” IEEE Transactions on Image Processing, vol. 16, no. 7, pp. 1831–1839, 2007. [43] S. Li, J. T.-Y. Kwok, I. W.-H. Tsang, and Y. Wang, “Fusing images with different focuses using support vector machines,” IEEE Transactions on Neural Networks, vol. 15, no. 6, pp. 1555–1561, 2004. [44] H. Zhao, Q. Li, and H. Feng, “Multi-focus color image fusion in the hsi space using the sum-modifiedlaplacian and a coarse edge map,” Image and Vision Computing, vol. 26, no. 9, pp. 1285–1295, 2008. [45] T. Mertens, J. Kautz, and F. V. Reeth, “Exposure fusion,” Computer Graphics and Applications, pp. 382–390, 2007. [46] L. Bogoni and M. Hansen, “Pattern-selective color image fusion,” Pattern Recognition, vol. 34, no. 8, pp. 1515 – 1526, 2001. [47] X. Qin, J. Shen, X. Mao, X. Li, and Y. Jia, “Robust match fusion using optimization,” IEEE Transactions on Cybernetics, vol. 45, no. 8, pp. 1549–1560, 2015. [48] J. Shen, Y. Zhao, S. Yan, and X. Li, “Exposure fusion using boosting laplacian pyramid,” IEEE Transactions on Cybernetics, vol. 44, no. 9, pp. 1579–1590, 2014. [49] G. Piella, “Image fusion for enhanced visualization: A variational approach,” International Journal of Computer Vision, vol. 83, no. 1, pp. 1–11, 2009. [50] V. Petrovic and C. Xydeas, “Gradient-based multiresolution image fusion,” IEEE Transactions on Image Processing, vol. 13, no. 2, pp. 228–237, 2004. [51] L. Grady, “Random walks for image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 11, pp. 1768–1783, 2006. [52] J. Shen, Y. Du, W. Wang, and X. Li, “Lazy random walks for superpixel segmentation,” IEEE Transactions on Image Processing, vol. 23, no. 4, pp. 1451–1462, 2014. [53] X. Dong, J. Shen, L. Shao, and L. V. Gool, “Sub-markov random walk for image segmentation,” IEEE Transactions on Image Processing, vol. 25, no. 2, pp. 516–527, 2016. [54] Y. Liang, J. Shen, X. Dong, H. Sun, and X. Li, “Video supervoxels using partially absorbing random walks,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 26, no. 5, pp. 928–938, 2016. [55] J.-G. Yu, J. Zhao, J. Tian, and Y. Tan, “Maximal entropy random walk for region-based visual saliency,” IEEE Transactions on Cybernetics, vol. 44, no. 9, pp. 1661–1672, 2014. [56] X. Li, Z. Han, L. Wang, and H. Lu, “Viisual tracking via random walks on graph model,” IEEE Transactions on Cybernetics, vol. 46, no. 9, pp. 2144–2155, 2016. [57] R. Shen, I. Cheng, J. Shi, and A. Basu, “Generalized random walks for fusion of multi-exposure images,” IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3634–3646, 2011. [58] H. Wechsler and M. Kidode, “A random walk procedure for texture discrimination,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 1, no. 3, pp. 272–280, 1979. [59] N. Biggs, “Algebraic potential theory on graphs,” Bull. London Mathematical Society., vol. 26, no. 6, pp. 641–682, 1997. [60] S. Kakutani, “Markov processes and the dirichlet problem,” Proc. Japanese Academy, vol. 21, pp. 227–233, 1945. [61] P. Doyle and L. Snell, “Random walks and electric networks,” Washington, D.C.: Mathematical Association of America, 1984. [62] R. Courant and D. Hilbert, “Methods of math. physics,” John Wiley and Sons, 1989. [63] R. Hersh and R. Griego, “Brownian motion and potential theory,” Scientific American, vol. 220, no. 3, pp. 67–74, 1969. [64] D. Marr and E. Hildreth, “Theory of edge detection,” Proc. of the Royal Society, vol. 207, no. 1167, pp. 187–217, 1980. [65] B. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” International Joint Conferences on Artificial Intelligence, vol. 2, pp. 674–679, 1981. [66] S. Baker and I. Matthews, “Lucas-kanade 20 years on: A unifying framework,” International Journal of Computer Vision, vol. 56, no. 3, pp. 221–255, 2004. [67] R. C. Luo, C.-C. Yih, and K. L. Su, “Multisensor fusion and integration: approaches, applications, and future research directions,” IEEE Sensors Journal, vol. 2, pp. 107–119, August 2002. [68] V. Aslantas and R. Kurban, “Fusion of multi-focus images using differential evolution algorithm,” Expert Systems with Applications., vol. 37, no. 12, pp. 8861–8870, 2010. [69] S. Li, J. T. Kwok, and Y. Wang, “Multifocus image fusion using artificial neural networks,” Pattern Recognition Letters., vol. 23, no. 8, pp. 985–997, 2002. [70] S. Li and B. Yang, “Multifocus image fusion using region segmentation and spatial frequency,” Image and Vision Computing., vol. 26, no. 7, pp. 971–979, 2008. [71] I. De, B. Chanda, and B. Chattopadhyay, “Enhancing effective depth-of-field by image fusion using mathematical morphology,” Image and Vision Computing., vol. 24, no. 12, pp. 1278–1287, 2006. [72] Y. Zhang and L. Ge, “Efficient fusion scheme for multi-focus images by using blurring measure,” Digital Signal Processing., vol. 19, no. 2, pp. 186–193, 2009. [73] J. Tian and L. Chen, “Multi-focus image fusion using wavelet-domain statistics,” IEEE International Conference on Image Processing (ICIP)., pp. 1205–1208, 2010. [74] S. Li and B. Yang, “Multifocus image fusion by combining curvelet and wavelet transform,” Pattern Recognition Letters., vol. 29, no. 9, pp. 1295–1301, 2008. [75] M. N. Do and M. Vetterli, “The contourlet transform: an efficient directional multiresolution image representation,” IEEE Transactions on Image Processing., vol. 14, no. 12, pp. 2091–2106, 2005. [76] W. Huang and Z. Jing, “Evaluation of focus measures in multi-focus image fusion,” Pattern Recognition Letters., vol. 28, no. 4, pp. 493–500, 2007. [77] M. Subbarao, T.-S. Choi, and A. Nikzad, “Focusing techniques,” Optical Engineering., vol. 32, no. 11, pp. 2824–2836, 1993. [78] V. Aslantas and R. Kurban, “A comparison of criterion functions for fusion of multi-focus noisy images,” Optics Communications., vol. 282, no. 16, pp. 3231–3242, 2009. [79] M. F. Tappen, C. Liu, E. H. Adelson, and W. T. Freeman, “Learning gaussian conditional random fields for low-level vision,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR)., pp. 1–8, 2007. [80] X. Wang, X.-P. Zhang, I. Clarke, and Y. Yakubovich, “A new gaussian mixture conditional random field model for indoor image labeling,” Proceedings of ACM the 1st international workshop on Interactive multimedia for consumer electronics., pp. 51–56, 2009. [81] R. Shen, I. Cheng, and A. Basu, “Qoe-based multi-exposure fusion in hierarchical multivariate gaussian crf,” IEEE Transactions on Image Processing., vol. 22, no. 6, pp. 2469–2478, 2013. [82] J. D. Lafferty, A. McCallum, and F. C. N. Pereira, “Conditional random fields: Probabilistic models for segmenting and labeling sequence data,” in Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pp. 282–289, 2001. [83] M. B. Dillencourt, H. Samet, and M. Tamminen, “A general approach to connected-component labeling for arbitrary image representations,” Journal of the ACM (JACM)., vol. 39, no. 2, pp. 253–280, 1992. [84] M.-Y. Liu, O. Tuzel, S. Ramalingam, and R. Chellappa, “Entropy rate superpixel segmentation,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR)., pp. 2097–2104, 2011. [85] S. K. Nayar and Y. Nakagawa, “Shape from focus,” IEEE Transactions on Pattern Analysis and Machine Intelligence., vol. 16, no. 8, pp. 824–831, 1994. [86] K.-S. Choi, J.-S. Lee, and S.-J. Ko, “New autofocusing technique using the frequency selective weighted median filter for video cameras,” IEEE Transactions on Consumer Electronics., vol. 45, no. 3, pp. 820–827, 1999. [87] H. Rue and L. Held, Gaussian Markov Random Fields: Theory and Applications. Chapman and Hall/CRC, Boston, MA, 2005. [88] L. A. Zadeh, “Fuzzy sets,” Information and Control., vol. 8, no. 3, pp. 338–353, 1965. [89] Photo(shop) Contests. http://www.pxleyes.com/photography-contest/19726. [90] Imagefusion. http://www.imagefusion.org. [91] Computational Imaging Group at Xiamen University. http://www.quxiaobo.org/index\_software.html. [92] G. Qu, D. Zhang, and P. Yan, “Information measure for performance of image fusion,” Electronics Letters., vol. 38, no. 7, pp. 313–315, 2002. [93] C. Xydeas and V. Petrović, “Objective image fusion performance measure,” Electronics Letters., vol. 36, no. 4, pp. 308–309, 2000. [94] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising with block-matching and 3d filtering,” Electronic Imaging., pp. 354–365, 2006.
|