|
[1] W. J. Frawley, G. P. Shapiro and C. J. Matheus, “Knowledge Discovery in Databases: An Overview” AI Magazine, Vol. 13, pp. 57-70. Nov. 1992. [2] J. Ginsberg, M. H. Mohebbi, R. S. Patel, L. Brammer, M. S. Smolinski and L. Brilliant, “Detecting influenza epidemics using search engine query data” Nature , pp. 1012-1014. Feb. 2009. [3] S. B. Kotsiantis, D. Kanellopoulos, and P. E. Pintelas, “Data Preprocessing for Supervised Leaning” International Journal of Computer Science, Vol. 1, No. 12, pp. 4091-4096. 2007. [4] D. M. Strong, Y. W. Lee, and R. Y. Wang, “Data Quality In Context” Communications of The ACM, Vol. 40, No. 5, pp. 103-110. May. 1997. [5] J. Han, J. Pei and M. Kamber, “Classification: Basic Concepts,” in Data mining: concepts and techniques,3th ed.ELSEVIER,2011,ch.8, pp. 327-385. [6] I. Guyon, A. Elisseeff, “An Introduction to Variable and Feature Selection” Journal of machine learning research, pp. 1157-1182. Mar. 2003. [7] R. Kohavi, G. H. John, “Wrappers for feature subset selection” Artificial Intelligence, pp.273-324. May. 1996. [8] Y. Zhai, YS. Ong and I. W. Tsang, “The Emerging "Big Dimensionality"” IEEE Computational Intelligence Magazine, pp. 14-26. Aug. 2014. [9] M. Haghighat , M. Abdel-Mottaleb, and W. Alhalabi, “Discriminant Correlation Analysis: Real-Time Feature Level Fusion for Multimodal Biometric Recognition” IEEE Transactions On Information Forensics And Security, Vol. 11, No. 9, Sep. 2016. [10] P. N. Sabes, M. I. Jordan, “Reinforcement Learning by Probability Matching” Advances in Neural Information Processing Systems, 1995. [11] P. Zhu, W. Zhu, Q. Hu,C. Zhang,W. Zuo, “Subspace clustering guided unsupervised feature selection.” Pattern Recognition, Vol. 66, pp. 364-374. Jun. 2017. [12] JH Holland, “Interim and Prospectus” in Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. Bradford Books, 1992, ch 10, pp. 171-181. [13] C. Cortes, V. Vapnik, “Support-Vector Networks” Machine Learning, pp.273-297. 1995. [14] J.G. Carbonell, R.S. Michalski, T.M. Mitchell, “An overview of machine learning” in Machine Learning: An Artificial Approach, Tioga Publishing Co., 1983, ch 1, pp. 3-20. [15] M. Mohri, “Multi-Class Classification” in Foundations of Machine Learning, MIT press, 2012, ch8, pp.183-207. [16] PC. Chang, CH. Liu, “A TSK type fuzzy rule based system for stock price prediction” Expert Systems with Applications, pp. 35-144. Aug. 2008. [17] G. James, D. Witten, T. Hastie, R. Tibshirani, “Classification” in An introduction to statistical learning: with applications in R,1th ed. Springer, Jun. 2013. ch 4, pp.129-170. [18] A. M. MartõÂnez and A. C. Kak, “PCA versus LDA” IEEE Transactions On Pattern Analysis And Machine Intelligence, Vol. 23, No. 2, Feb. 2001. [19] R. Kohavi, “A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection” International Joint Conference on Articial Intelligence (IJCAI),Vol. 14, No.2, pp. 1137-1145. 1995. [20] H. Ince and T. B. Trafalis, “Kernel principal component analysis and support vector machines for stock price prediction” IIE Transactions, pp. 629–637. Mar. 2007. [21] A. Kalousis, J. Prados, M. Hilario, “Stability of Feature Selection Algorithms: a study on high dimensional spaces” Knowledge and information systems, pp. 95-116. Mar. 2007. [22] M. Dash, H. Liu, “Feature Selection for Classification” Intelligent Data Analysis, Vol. 1,pp. 131-156. 1997. [23] P. M. Narendra,. K. Fukunaga,, “A branch and bound algorithm for feature selection” IEEE Transactions on Computers, pp. 917-922. Sep. 1977. [24] H. Liu, H. Motoda, “Perspectives of Feature Selection” in Feature selection for knowledge discovery and data mining, Springer Science & Business Media, Vol. 454., 2012. ch 2. pp. 17-38. [25] H. Liu and L. Yu, “Toward Integrating Feature Selection Algorithms for Classification and Clustering” IEEE Transactions on knowledge and data engineering, pp. 491-502. 2005. [26] V. Kumar and S. Minz, “Feature Selection: A literature Review” Smart Computing Review, Vol. 4, No. 3, pp. 211-229. Jun. 2014. [27] KJ. Kim, I. Han, “Genetic algorithms approach to feature discretization in artificial neural networks for the prediction of stock price index”, Expert Systems with Applications, pp. 125–132. 2000. [28] Q. Guo, W. Wu, DL. Massart, C. Boucon, S. D. Jong, “Feature selection in principal component analysis of analytical data”, Chemometrics and Intelligent Laboratory Systems, Vol. 61, pp. 123-132. Feb. 2002. [29] B. E. Boser, I. M. Guyon, V. N. Vapnik, “A Training Algorithm for Optimal Margin Classifiers” Proceedings of the fifth annual workshop on Computational learning theory. ACM, pp. 144-152. Jul. 1992. [30] R. Bekkerman, R. El-Yaniv, N. Tishby, Y. Winter, “Distributional Word Clusters vs. Words for Text Categorization” Journal of Machine Learning Research, pp. 1183-1208. 2003. [31] J. R. Quinlan, “Constructing Decision Tree” in C4. 5: programs for machine learning, Elsevier, ch 2, pp. 17-25. 2014. [32] V. Kumar, M. Steinbach, PN. Tan, “Introduction To Data Mining” in Introduction To Data Mining, ch 4, pp.145-205. Mar. 2006. [33] S. Wold, K. Esbensen, P. Geladi, “Principal component analysis” Chemometrics and intelligent laboratory systems, Vol .2, pp. 37-52. Aug. 1987. [34] D. Enke, S. Thawornwong, “The use of data mining and neural networks for forecasting stock market returns”, Expert Systems with Applications, Vol. 29,pp. 927–940. 2005. [35] ST. Li, SC. Kuo, “Knowledge discovery in financial investment for forecasting and trading strategy through wavelet-based SOM networks” Expert Systems with Applications, Vol. 34, pp. 935-951. Feb. 2008. [36] A. Abraham, B. Nath, P Mahanti, “Hybrid intelligent systems for stock market analysis” Computational science-ICCS 2001, pp. 337-345. 2001. [37] W. Siedlecki, J. Sklansky, “A note on genetic algorithms for large-scale feature selection” Pattern recognition letters, pp. 335-347. 1989. [38] CF. Tsai, YC. Hsiao, “Combining multiple feature selection methods for stock prediction: union, intersection, and multi-intersection approaches” Decision Support Systems, Vol.50, pp. 258-269. Aug. 2010. [39] S. Moon, H. Qi, “Hybrid dimensionality reduction method based on support vector machine and independent component analysis” IEEE transactions on neural networks and learning systems, Vol. 23, pp. 749-761. 2012. [40] K. Tumer, J. Gosh, “Linear order statistics combiners for pattern classification, Combining Artificial Neural Networks” Combining Artificial Neural Networks, Ed. Amanda Sharkey, pp 127-162. 1999. [41] F. Herrera, M. Lozano, JL. Verdegay, “Tackling real-coded genetic algorithms: Operators and tools for behavioural analysis” Artificial Intelligence Review, Vol. 12 , pp. 265–319. 1998. [42] A. Venkatachalam, “M-infosift: A Graph-based Approach For Multiclass document Classification” Master Of Science In Computer Science And Engineering, Aug. 2007. [43] JJ. Grefenstette, “Optimization of control parameters of genetic algorithms” IEEE Transactions on systems, man, and cybernetics, Vol.16, pp. 122-128. 1986. [44] L. Yu,, S. Wang, K. K. Lai, “Mining Stock Market Tendency Using GA-Based Support Vector Machines” Internet and Network Economics, pp. 336-345. 2005.
|