[1] http://scien.stanford.edu/pages/conferences/mvs/presentations/WorkshopMVSDec2009.pdf
[2] V. Chandrasekhar, G. Takacs, D. Chen, S. Tsai, and B. Girod, "Transform Coding of Feature Descriptors," in VCIP, 2009.
[3] D. Chen, S. Tsai, V. Chandrasekhar, G. Takacs, J. Singh, and B. Girod, "Tree Histogram Coding for Mobile Image Matching," in IEEE Data Compression Conference (DCC), Snowbird, Utah, March 2009.
[4] V. Chandrasekhar, G. Takacs, D. M. Chen, S. S. Tsai, R. Grzeszczuk, and B. Girod, "CHoG: Compressed Histogram of Gradients - A Low Bit Rate Feature Descriptor," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, Florida, June 2009.
[5] K. Grauman and T. Darrell, "The Pyramid Match Kernel: Efficient Learning with Sets of Features," Journal of Machine Learning Research (JMLR), vol. 8, pp. 725-760, Apr. 2007.
[6] K. Grauman, "Matching Sets of Features for Efficient Retrieval and Recognition," Ph.D. Thesis, MIT, 2006.
[7] D. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[8] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded Up Robust Features," in ECCV (1), 2006, pp. 404-417.
[9] D. Nistér and H. Stewénius, "Scalable Recognition with a Vocabulary Tree," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, 2006, pp. 2161-2168.
[10] S. P. Lloyd, "Least Squares Quantization in PCM," Bell Telephone Laboratories Paper, 1957; later published in IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129-137, 1982.
[11] D. Arthur and S. Vassilvitskii, "k-means++: The Advantages of Careful Seeding," in Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2007, pp. 1027-1035.
[12] D. Aloise, A. Deshpande, P. Hansen, and P. Popat, "NP-hardness of Euclidean Sum-of-Squares Clustering," Machine Learning, vol. 75, pp. 245-249, 2009.
[13] S. Dasgupta and Y. Freund, "Random Projection Trees for Vector Quantization," IEEE Transactions on Information Theory, vol. 55, pp. 3229-3242, July 2009.
[14] W. Fernandez de la Vega, M. Karpinski, C. Kenyon, and Y. Rabani, "Approximation Schemes for Clustering Problems," in STOC '03: Proceedings of the Thirty-Fifth Annual ACM Symposium on Theory of Computing, New York, NY, USA, 2003, pp. 50-58. ACM Press.
[15] S. Har-Peled and S. Mazumdar, "On Coresets for k-Means and k-Median Clustering," in STOC '04: Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, New York, NY, USA, 2004, pp. 291-300. ACM Press.
[16] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, 2000. ISBN 0-521-78019-5.
[17] C. J. C. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition," Data Mining and Knowledge Discovery, vol. 2, pp. 121-167, 1998.
[18] Y. B. Dibike, S. Velickov, D. Solomatine, and M. B. Abbott, "Model Induction with Support Vector Machines: Introduction and Applications," Journal of Computing in Civil Engineering, vol. 15, no. 3, pp. 208-216, 2001.
[19] B. E. Boser, I. Guyon, and V. N. Vapnik, "A Training Algorithm for Optimal Margin Classifiers," in Proceedings of the Fifth Annual Workshop on Computational Learning Theory, Pittsburgh, 1992, pp. 144-152. ACM.
[20] http://www.csie.ntu.edu.tw/~cjlin/libsvm/
[21] http://www.chrisevansdev.com/computer-vision-opensurf.html
[22] http://opencv.willowgarage.com/wiki/
[23] http://www.stanford.edu/~darthur/kmpp.zip
[24] http://www.vis.uky.edu/~stewe/ukbench/
[25] http://tahiti.mis.informatik.tu-darmstadt.de/oldmis/Research/Projects/
[26] T. Yeh, J. Lee, and T. Darrell, "Adaptive Vocabulary Forests for Dynamic Indexing and Category Learning," in International Conference on Computer Vision, 2007.