|
[1]H. Almunallim and T. G. Dieterich, “Learning with many irrelevant features,” The Proceedings of the Ninth National Conference on Artificial Intelligence, 1991, Vol. 2, pp. 547-552. [2]Q. A. Al-Radaideh, M. N. Sulaiman, M. H. Selamat and H. Ibrahim “Approximate reduct computation by rough sets based attribute weighting,” The Proceedings of the IEEE International Conference on Granular Computing, 2005, Vol. 2, pp. 383-386. [3]A. Ben-Dor and Z. Yakhini, “Clustering gene expression patterns,” Journal of Computational Biology, 1999, Vol. 6, pp. 281-297. [4]A. L. Blum and R. L. Rivest, “Training a 3-node neural networks is NP-complete,” Neural Networks, 1992, Vol. 5, pp. 117-127. [5]A. L. Blum and P. Langley, “Selection of relevant features and examples in machine learning,” Artificial Intelligence, 1997, Vol. 97, pp. 245-271. [6]H. Bozdogan, “Model selection and Akaike’s information criterion: the general theory and its analytical extensions”, Psychometrika, 1987, Vol. 52, No. 3, pp. 345-370. [7]B. G. Buchanan and E. H. Shortliffe, Rule-Based Expert System: The MYCIN Experiments of the Standford Heuristic Programming Projects, Addison-Wesley, MA., 1984. [8]N. Cercone, A. An and C. Chan, “Rule-induction and case-based reasoning: hybrid architectures appear advantageous”, IEEE Transactions on Knowledge and Data Engineering, 1999, Vol. 11, No. 1, 166-174. [9]Y. M. Cheung, “Rival penalization controlled competitive learning for data clustering with unknown cluster number”, The Proceedings of the Ninth International Conference on Neural Information Processing, 2002, Vol. 1, pp. 18-22. [10]R. M. Cole, Clustering with Genetic Algorithms, University of Western Australia, Master Thesis, 1998, pp. 2-3. [11]M. Dash, K. Choi, P. Scheuermann and H. Liu, “Feature selection for clustering – a filter solution,” The Proceedings of the Second International Conference on Data Mining, 2002, pp. 115-122. [12]K. Gao, M. Liu, K. Chen, N. Zhou and J. Chen, “Sampling-based tasks scheduling in dynamic grid environment,” The Proceedings of the Fifth WSEAS International Conference on Simulation, Modeling and Optimization, 2005, pp.25-30. [13]I. Guyon and A. Elisseeff, “An introduction to variable and feature selection,” Journal of Machine Learning Research, 2003, Vol. 3, pp. 1,157-1,182. [14]K. M. Gupta and A. R. Montazemi, “Empirical evaluation of retrieval in case-based reasoning systems using modified cosine matching function”, IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans, 1997, Vol. 27, No. 5, pp. 601-612. [15]M. Hall, “Correlation-based feature selection for discrete and numeric class machine learning,” The Proceedings of the Seventeenth International Conference on Machine Learning, 2000, pp. 359-366. [16]M. A. Hall and G. Holmes, “Benchmarking attribute selection techniques for discrete class data mining,” IEEE Transactions on Knowledge and Data Engineering, 2003, Vol. 15, No. 3, pp. 1,437-1,447. [17]J. Han, X. Hu and T.Y. Lin, “Feature selection based on rough set and information entropy,” The Proceedings of the IEEE International Conference on Granular Computing, 2005, Vol. 1, pp. 153-158. [18]J. Han and M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann, 2006. [19]K. Hu, L. Diao, Y. Lu, and C. Shi, “A heuristic optimal reduct algorithm,“ Lecture Notes in Computer Science, Vol. 1983, Springer, Berlin, 2000, pp. 139-144. [20]L. Kaufman and P. J. Rousseeuw, Finding Groups in Data: An Introduction to Cluster Analysis, John Wiley & Sons, 1990. [21]K. Kira and L. Rendell, “A practical approach to feature selection”, The Proceedings of the Ninth International Conference on Machine Learning, 1992, pp. 249-256. [22]Y. Kodratoff and R. S. Michalski, Machine Learning: An Artificial Intelligence Artificial Intelligence Approach, Vol. 3, Morgan Kaufmann Publishers, San Mateo, CA., 1983. [23]J. Komorowski, L. Polkowski and A. Skowron, “Rough sets: a tutorial”, http:// www.let.uu.nl/esslli/Courses/skowron/skowron.ps. [24]I. Kononenko, “Estimating attributes: analysis and extensions of relief,” The Proceedings of the Seventh European Conference on Machine Learning, 1994, pp. 171-182. [25]Y. Li, S. C. K. Shiu and S. K. Pal, “Combining feature reduction and case selection in building CBR classifiers,” IEEE Transactions on Knowledge and Data Engineering, 2006, Vol. 18, No. 3, pp. 415- 429. [26]H. Liu and R. Setiono, “A probabilistic approach to feature selection: a filter solution,” The Proceedings of the Thirteenth International Conference on Machine Learning, 1996, pp. 319-327. [27]S. P. Lloyd, “Least squares quantization in PCM,” IEEE Transactions on Information Theory, 1982, Vol. 28, pp. 128-137, (original version: Technical Report, Bell Labs, 1957). [28]R. S. Michalski, J. G. Carbonell and T. M. Mitchell, Machine Learning: An Artificial Intelligence Approach, Vol. 1, Morgan Kaufmann Publishers, Los Altos, CA., 1983. [29]R. S. Michalski, J. G. Carbonell and T. M. Mitchell, Machine Learning: An Artificial Intelligence Approach, Vol. 2, Morgan Kaufmann Publishers, Los Altos, CA., 1983. [30]Z. Pawlak, “Rough set,” International Journal of Computer and Information Sciences, 1982, Vol. 11, No. 1, pp. 341-356. [31]Z. Pawlak, “Why rough sets?,” The Proceedings of the Fifth IEEE International Conference on Fuzzy Systems, 1996, Vol. 2, pp. 738-743. [32]P. Pudil, J. Novovicova, and J. Kittler, “Floating search methods in feature selection,” Pattern Recognition Letters, 1994, Vol. 15, pp. 1,119-1,125. [33]G. Riley, Expert Systems - Principles and Programming, Pws-Kent, Boston, 1989. [34]M. Sarkar, B. Yegnanarayana and D. Khemani, “A cluster algorithm using an evolutionary programming-based approach”, Pattern Recognition Letters, 1997, Vol. 18, pp. 975-986 [35]G. Schwarz, “Estimating the dimension of a model”, The Annals of Statistics, 1978, Vol. 6, No. 2, pp. 461-464. [36]K. S. Shin and I. Han, “Case-based reasoning supported by genetic algorithms for corporate bond rating”, Expert Systems with Applications, 1999, Vol. 16, pp. 85-95. [37]A. Skowron and C. Rauszer, “The discernibility matrices and functions in information systems”, Handbook of Application and Advances of the Rough Sets Theory, Kluwer Academic Publishers, Dordrecht, 1992, pp. 331-362. [38]H. Q. Sun, Z. Xiong, “Finding minimal reducts from incomplete information systems,” The Proceedings of the Second International Conference on Machine Learning and Cybernetics, 2003, Vol. 1, pp. 350-354. [39]J. Wroblewski, “Finding minimal reducts using genetic algorithms,” The Proceedings of the Second Annual Join Conference on Information Sciences, 1995, pp. 186-189. [40]L. Xu, A. Krzyiak and E. Oja, “Rival penalized competitive learning for clustering analysis, RBF Net, and Curve Detection”, IEEE Transaction on Neural Networks, 1993, Vol. 4, pp. 636-648. [41]L. Yu and H. Liu, “Efficient feature selection via analysis of relevance and redundancy,” Journal of Machine Learning Research, 2004, Vol. 5, pp. 1,205-1,224. [42]J. Zhang, J. Wang, D. Li, H. He, and J. Sun, “A new heuristic reduct algorithm based on rough sets theory,” Lecture Notes in Computer Science, Vol. 2762, Springer, Berlin, 2003, pp. 247-253. [43]M. Zhang and J. T. Yao, “A rough sets based approach to feature selection,” The Proceedings of the IEEE Annual Meeting of Fuzzy Information, 2004, pp. 434-439.
|