[1] David G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60, pp. 91–110, Nov. 2004.
[2] Herbert Bay, Tinne Tuytelaars, and Luc Van Gool, "SURF: Speeded Up Robust Features," European Conference on Computer Vision (ECCV), pp. 404–417, 2006.
[3] Ashwani Kumar Dubey and Zainul Abdin Jaffery, "Maximally Stable Extremal Region Marking (MSERM) based Railway Track Surface Defect Sensing," IEEE Sensors Journal, vol. 16, pp. 9047–9052, Oct. 2016.
[4] Sang Jun Lee, Jaepil Ban, Hyeyeon Choi, and Sang Woo Kim, "Localization of slab identification numbers using deep learning," 2016 16th International Conference on Control, Automation and Systems (ICCAS), pp. 16–19, Oct. 2016.
[5] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Neural Information Processing Systems (NIPS), Dec. 2012.
[6] D. H. Hubel and T. N. Wiesel, "Receptive fields of single neurones in the cat's striate cortex," Journal of Physiology, pp. 574–591, Oct. 1959.
[7] Warren S. McCulloch and Walter Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity," The Bulletin of Mathematical Biophysics, vol. 5, pp. 115–133, Dec. 1943.
[8] F. Rosenblatt, "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain," Psychological Review, vol. 65, no. 6, pp. 386–408, Feb. 1958.
[9] J. J. Hopfield and D. W. Tank, "'Neural' Computation of Decisions in Optimization Problems," Biological Cybernetics, vol. 52, pp. 141–152, Jul. 1985.
[10] G. E. Hinton, T. J. Sejnowski, and D. H. Ackley, "Boltzmann Machines: Constraint Satisfaction Networks that Learn," Technical Report CMU-CS-84-119, Department of Computer Science, Carnegie Mellon University, May 1984.
[11] Gail A. Carpenter, Stephen Grossberg, and David B. Rosen, "Fuzzy ART: Fast Stable Learning and Categorization of Analog Patterns by an Adaptive Resonance System," Neural Networks, vol. 4, pp. 759–771, 1991.
[12] Yoshua Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, Nov. 2009.
[13] D. H. Hubel and T. N. Wiesel, "Receptive Fields, Binocular Interaction and Functional Architecture in the Cat's Visual Cortex," Journal of Physiology, vol. 160, pp. 106–154, Jan. 1962.
[14] Kunihiko Fukushima and Sei Miyake, "Neocognitron: A New Algorithm for Pattern Recognition Tolerant of Deformations and Shifts in Position," Pattern Recognition, vol. 15, no. 6, pp. 455–469, 1982.
[15] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, pp. 533–536, Oct. 1986.
[16] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, "Caffe: Convolutional Architecture for Fast Feature Embedding," Proceedings of the 22nd ACM International Conference on Multimedia, pp. 675–678, Nov. 2014.
[17] T. Dettmers, "Which GPU to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning," http://timdettmers.com/2017/04/09/which-gpu-for-deep-learning/.
[18] Yangqing Jia and Evan Shelhamer, "Caffe Tutorial," http://caffe.berkeleyvision.org/tutorial/, 2016.
[19] StirMark Benchmark, http://www.petitcolas.net/watermarking/stirmark/.
[20] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich, "Going Deeper with Convolutions," IEEE Conference on Computer Vision and Pattern Recognition (CVPR)