References

[1] Alpaydin, E. (2010). Introduction to Machine Learning. Cambridge, MA: MIT Press.
[2] Chawla, N. V., Japkowicz, N., & Kotcz, A. (2004). Editorial: Special issue on learning from imbalanced data sets. ACM SIGKDD Explorations Newsletter, 6(1), 1-6.
[3] Drummond, C., & Holte, R. C. (2003). C4.5, class imbalance, and cost sensitivity: Why under-sampling beats over-sampling. In Workshop on Learning from Imbalanced Datasets II.
[4] Liu, X.-Y., Wu, J., & Zhou, Z.-H. (2009). Exploratory undersampling for class-imbalance learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 39(2), 539-550.
[5] Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321-357.
[6] Chang, C.-C., & Lin, C.-J. (2011). LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3), 27.
[7] Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5-32.
[8] Murphy, K. P. (2006). Naive Bayes classifiers. University of British Columbia.
[9] Tan, S. (2005). Neighbor-weighted k-nearest neighbor for unbalanced text corpus. Expert Systems with Applications, 28(4), 667-671.
[10] Liu, T.-Y. (2009). EasyEnsemble and feature selection for imbalance data sets. In 2009 International Joint Conference on Bioinformatics, Systems Biology and Intelligent Computing (IJCBS'09). IEEE.
[11] 张琦, 吴斌, & 王柏. (2006). An overview of training methods for imbalanced data (in Chinese). 计算机科学 (Computer Science), 32(10), 181-186.
[12] Guyon, I., & Elisseeff, A. (2003). An introduction to variable and feature selection. Journal of Machine Learning Research, 3, 1157-1182.
[13] Domingos, P. (1999). MetaCost: A general method for making classifiers cost-sensitive. In Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 155-164).
[14] Dietterich, T. G. (2000). Ensemble methods in machine learning. In Multiple Classifier Systems (pp. 1-15). Springer Berlin Heidelberg.
[15] Drummond, C., & Holte, R. C. (2003). C4.5, class imbalance, and cost sensitivity: Why under-sampling beats over-sampling. In Workshop on Learning from Imbalanced Datasets II (Vol. 11).
[16] Thanathamathee, P., & Lursinsap, C. (2013). Handling imbalanced data sets with synthetic boundary data generation using bootstrap re-sampling and AdaBoost techniques. Pattern Recognition Letters, 34(12), 1339-1347.
[17] Park, S. H., & Ha, Y. G. (2014). Large imbalance data classification based on MapReduce for traffic accident prediction. In 2014 Eighth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS) (pp. 45-49). IEEE.
[18] Grefenstette, J. J. (1993). Genetic algorithms and machine learning. In Proceedings of the Sixth Annual Conference on Computational Learning Theory.
[19] Akima, H. (1970). A new method of interpolation and smooth curve fitting based on local procedures. Journal of the ACM (JACM), 17(4), 589-602.
[20] Shepard, D. (1968). A two-dimensional interpolation function for irregularly-spaced data. In Proceedings of the 1968 23rd ACM National Conference.
[21] Zeng, Z.-Q., & Gao, J. (2009). Improving SVM classification with imbalance data set. In Neural Information Processing.
[22] Han, H., Wang, W.-Y., & Mao, B.-H. (2005). Borderline-SMOTE: A new over-sampling method in imbalanced data sets learning. In Advances in Intelligent Computing (pp. 878-887). Springer.
[23] Lewis, D., & Gale, W. (1994). Training text classifiers by uncertainty sampling.
[24] Kubat, M., Holte, R. C., & Matwin, S. (1998). Machine learning for the detection of oil spills in satellite radar images. Machine Learning, 30(2-3), 195-215.
[25] Tan, P., Steinbach, M., & Kumar, V. (2006). Introduction to Data Mining. Boston: Pearson Addison Wesley.