[1] G. Ke, Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, Q. Ye, and T.-Y. Liu, "LightGBM: A Highly Efficient Gradient Boosting Decision Tree," Advances in Neural Information Processing Systems, vol. 30, pp. 3149-3157, 2017.
[2] A. V. Dorogush, V. Ershov, and A. Gulin, "CatBoost: gradient boosting with categorical features support," NIPS 2017, pp. 1-7, 2017.
[3] J. Friedman, "Greedy function approximation: a gradient boosting machine," Annals of Statistics, 29(5), pp. 1189-1232, 2001.
[4] J. Friedman, "Stochastic gradient boosting," Computational Statistics & Data Analysis, 38(4), pp. 367-378, 2002.
[5] T. Chen and C. Guestrin, "XGBoost: A scalable tree boosting system," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785-794, ACM, 2016.
[6] T. Duan, A. Avati, D. Y. Ding, S. Basu, A. Y. Ng, and A. Schuler, "NGBoost: Natural gradient boosting for probabilistic prediction," arXiv preprint arXiv:1910.03225, 2019.
[7] S. Tyree, K. Q. Weinberger, K. Agrawal, and J. Paykin, "Parallel boosted regression trees for web search ranking," in Proceedings of the 20th International Conference on World Wide Web, pp. 387-396, ACM, 2011.
[8] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, et al., "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, 12(Oct), pp. 2825-2830, 2011.
[9] G. Ridgeway, "Generalized boosted models: A guide to the gbm package," 2007. Retrieved from https://cran.r-project.org/web/packages/gbm/vignettes/gbm.pdf
[10] E. Al Daoud, "Comparison between XGBoost, LightGBM and CatBoost Using a Home Credit Dataset," International Journal of Computer and Information Engineering, 13(1), pp. 6-10, 2019.
[11] K. Potdar, T. Pardawala, and C. Pai, "A Comparative Study of Categorical Variable Encoding Techniques for Neural Network Classifiers," International Journal of Computer Applications, 2017.
[12] R. E. Bellman, Dynamic Programming, Princeton University Press, 1957.
[13] R. Longadge and S. Dongre, "Class Imbalance Problem in Data Mining: Review," arXiv preprint arXiv:1305.1707, 2013.
[14] C. X. Ling, J. Huang, and H. Zhang, "AUC: a statistically consistent and more discriminating measure than accuracy," in Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI), 2003.
[15] Microsoft, "Advanced Topics," LightGBM documentation, April 2020. https://lightgbm.readthedocs.io/en/latest/Advanced-Topics.html
[16] F. E. Harrell Jr. and T. Cason (1994). Titanic: Machine Learning from Disaster. Retrieved March 20, 2020 from https://www.kaggle.com/c/titanic/data
[17] Kaggle (April 2015). Titanic: Machine Learning from Disaster. Retrieved March 2020 from https://www.kaggle.com/c/titanic/data
[18] Kaggle (August 2019). Categorical Feature Encoding Challenge. Retrieved March 2020 from https://www.kaggle.com/c/cat-in-the-dat/data
[19] S. Moro, P. Cortez, and P. Rita, "A Data-Driven Approach to Predict the Success of Bank Telemarketing," Decision Support Systems, Elsevier, 62, pp. 22-31, June 2014. Retrieved March 2020 from https://www.kaggle.com/c/bank-marketing-uci/data
[20] E-Sun Bank (玉山銀行). Credit Card Fraud Detection Challenge, September 2019. Retrieved September 2019 from https://tbrain.trendmicro.com.tw/Competitions/Details/10