|
[1]S. Li and M. C. Lee, “An Efficient Spatiotemporal Attention Model and Its Application to Shot Matching,” IEEE Trans. on Circuits and Systems for Video Technology, Vol.17, No.10, pp. 1383-1387, Oct. 2007. [2]Y. F. Ma, L. Lu, H. J. Zhang, and M. Li, “A User Attention Model for Video Summarization,” Proc. ACM Multimedia, pp.533-541, Dec. 2002. [3]Y. Zhai and M. Shah, “Visual Attention Detection in Video Sequences Using Spatiotemporal Cues,” Proc. ACM Multimedia, pp.815-824, Oct. 2006. [4]L. Laptev and T. Lindeberg, “Space-Time Interest Points,” Proc. IEEE International Conference on Computer Vision, pp.432-439, Oct. 2003. [5]L. Itti, C. Koch, and E. Niebur, “A Model of Saliency-Based Visual Attention for Rapid Scene Analysis,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.20, No.11, pp.1254-1259, 1998. [6]C. Harris and M. Stephens, “A Combined Corner and Edge Detector,” In Alvey Vision Conference, pp. 147-151, 1988. [7]V. Navalpakkam, and L. Itti, “An Integrated Model of Top-Down and Bottom-Up Attention for Optimizing Detection Speed,” Proc. IEEE CVPR, Vol.2, pp. 2049-2056, 2006. [8]L. Itti, and C. Koch, “Computational Modeling of Visual Attention,” Neuroscience, Vol.2, pp. 1-11, 2001. [9]B. Lucas and T. Kanade, “An Iterative Image Registration Technique with an Application to Stereo Vision,” Proc. International Joint Conference on Artificial Intelligence, pp. 674-679, 1981. [10]W. James. The Principles of Psychology, Harvard Univ. Press, Cambridge, Massachusetts, 1980/1981. [11]C. C. Shih, H. R. Tyan and H. Y. Mark Liao, “Shot Change Detection based on the Reynolds Transport Theorem”, Proc. Second IEEE Pacific Rim Conference on Multimedia, Oct.24-26, Beijing, China, LNCS 2195, pp.819-824, 2001. [12]C. W. Su, H. Y. Mark Liao, H. R.Tyan, K. C. Fan, and L.-H. Chen, “A Motion-Tolerant Dissolve Detection Algorithm” IEEE Trans. on Multimedia, Vol.7, No.6, December 2005. [13]C. Y. Chiu and H. M. Wang, “Time-Series Linear Search for Video Copies Based on Compact Signature Manipulation and Containment Relation Modeling,” IEEE Transactions on Circuits and Systems for Video Technology, No. 5604280 , pp. 1603-1613, 2010. [14]D. Y. Chen “Modelling salient visual dynamics in videos,” Multimedia Tools and Applications, pp. 271–284, 2011. [15]TRECVID 2010 Guidelines, http://www-nlpir.nist.gov/projects/tv2010/tv2010.html#ccd [16]X. Guo and X. Cao “Triangle-Constraint for Finding More Good Features,” International Conference on Pattern Recognition, No. 5597550 , pp. 1393-1396, 2010. [17]H. Bay, T. Tuytelaars, and L. Van Gool, “Surf:speeded up robust features,” In Lecture Notes in Computer Science, pp. 404–417, 2006. [18]M. Brown and D. Lowe, “Recognising panoramas,” IEEE International Conference on Computer Vision, pp. 1218–1227, 2003. [19]C. Harris and M. J. Stephens, “A combined corner and edge detector,” Alvey Vision Conference, volume 20, pp. 147–152, 1988. [20]H. Jiang and S. Yu, “Linear solution to scale and rotation invariant object matching,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 2474–2481, 2009. [21]S. Lee and Y. Liu, “Curved glide-reflection symmetry detection,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 1046–1053, 2009. [22]M. Leordeanu and M. Hebert, “A spectral technique for correspondence problems using pairwise constraints,” IEEE International Conference on Computer Vision, pp. 1482–1489, 2005. [23]D. Lowe, “Distinctive image features from scale-invariant Keypoints,” International Journal of Computer Vision, pp. 91–110, 2004. [24]J. Rabin, J. Delon, and Y. Gousseau, “Circular earth mover’s distance for the comparison of local features,” International Conference on Pattern Recognition, pp. 1–4, 2008. [25]T. Tuytelaars and L. Van Gool, “Matching widely separated views based on affine invariant regions,” International Journal of Computer Vision, pp. 61–85, 2004. [26]S. Zhang, Q. Tian, G. Hua, Q. Huang, and S. Li, “Descriptive visual words and visual phrases for image applications,” Proc. ACM Int’l Conf. Multimedia, pp. 75-84, Beijing, China, Oct.19-24,2009. [27]M. Aharon, M. Elad, and A. M. Bruckstein, “The K-SVD: An algorithm for designing of overcomplete dictionaries for sparse representation,” IEEE Trans. Signal Process., vol. 54, no. 11, pp. 4311–4322, Nov. 2006. [28]S. Mallat and Z. Zhang, “Matching pursuits with time-frequency dictionaries,” IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3397–3415,Dec. 1993. [29]S. Wang, L. Cui, D. Liu, R. Huck, P. Verma, J. J. Sluss, S. Cheng, “Vehicle Identification via Sparse Representation,” Intelligent Transportation Systems, pp. 955-962, 2011. [30]L. W. Kang, C. Y. Hsu, H. W. Chen, C. S. Lu, C. Y. Lin, and S. C. Pei, “Feature-Based Sparse Representation for Image Similarity Assessment,” IEEE Trans. on Multimedia, volume 13, number 5, pp. 1019-1030, October 2011.
|