Agresti, A. (2003). Categorical data analysis (Vol. 482). John Wiley & Sons.
Bartlett, M. S., Littlewort, G., Frank, M., Lainscsek, C., Fasel, I., & Movellan, J. (2005, June). Recognizing facial expression: Machine learning and application to spontaneous behavior. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on (Vol. 2, pp. 568-573). IEEE.
Belleflamme, P., Lambert, T., & Schwienbacher, A. (2014). Crowdfunding: Tapping the right crowd. Journal of Business Venturing, 29(5), 585-609.
Bolton, R. N. (1998). A dynamic model of the duration of the customer's relationship with a continuous service provider: The role of satisfaction. Marketing Science, 17(1), 45-65.
Boser, B. E., Guyon, I. M., & Vapnik, V. N. (1992, July). A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory (pp. 144-152). ACM.
Breiman, L. (1997). Arcing the edge. Technical Report 486, Statistics Department, University of California at Berkeley.
Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. (1984). Classification and regression trees.
Chen, T., & Guestrin, C. (2016, August). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 785-794). ACM.
Fader, P. S., Hardie, B. G., & Lee, K. L. (2005). "Counting your customers" the easy way: An alternative to the Pareto/NBD model. Marketing Science, 24(2), 275-284.
Fader, P. S., Hardie, B. G., & Lee, K. L. (2005). RFM and CLV: Using iso-value curves for customer base analysis. Journal of Marketing Research, 42(4), 415-430.
Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. Annals of Statistics, 1189-1232.
Galindo, J., & Tamayo, P. (2000). Credit risk assessment using statistical and machine learning: Basic methodology and risk modeling applications. Computational Economics, 15(1), 107-143.
Huang, C. Y. (2012). To model, or not to model: Forecasting for customer prioritization. International Journal of Forecasting, 28(2), 497-506.
Kourou, K., Exarchos, T. P., Exarchos, K. P., Karamouzis, M. V., & Fotiadis, D. I. (2015). Machine learning applications in cancer prognosis and prediction. Computational and Structural Biotechnology Journal, 13, 8-17.
Powers, D. M. (2011). Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation.
Samuel, A. L. (1959). Some studies in machine learning using the game of checkers. IBM Journal of Research and Development.
Taylor, J., King, R. D., Altmann, T., & Fiehn, O. (2002). Application of metabolomics to plant genotype discrimination using statistics and machine learning. Bioinformatics, 18(suppl_2), S241-S248.
Vapnik, V. N., & Chervonenkis, A. Y. (1971). On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability & Its Applications, 16(2), 264-280.
Wübben, M., & Wangenheim, F. V. (2008). Instant customer base analysis: Managerial heuristics often "get it right". Journal of Marketing, 72(3), 82-93.