
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: Li-Yu Shao (邵立瑜)
Title: LightGBM與CatBoost在類別資料集下之效能探討
Title (English): A Study on Performance of LightGBM and CatBoost under categorical datasets
Advisor: Ming-Huang Chiang (蔣明晃)
Committee Members: Woo-Tsong Lin (林我聰), Ren-Jieh Kuo (郭人介)
Oral Defense Date: 2020-06-23
Degree: Master's
Institution: National Taiwan University
Department: Graduate Institute of Business Administration
Discipline: Business and Management; General Business
Document Type: Academic thesis
Publication Year: 2020
Graduation Academic Year: 108 (2019-2020)
Language: English
Pages: 35
Keywords (Chinese): 梯度提升決策樹演算法、LightGBM、CatBoost、大數據、資料探勘
Keywords (English): Gradient Boosting, LightGBM, CatBoost, Big Data, Data Mining
DOI: 10.6342/NTU202001258
Usage statistics: Cited by: 0 · Views: 564 · Downloads: 0 · Bookmarks: 0
Abstract (translated from the Chinese): For today's small-to-medium-sized datasets, gradient boosting decision tree (GBDT) algorithms are widely applied in industry, academia, and competitions. This thesis compares the two most commonly used GBDT packages, LightGBM and CatBoost, and identifies the causes of the performance difference between the two algorithms. To make the comparison fair and consistent, we designed an experiment based on the characteristics of typical real-world datasets and selected datasets that satisfy its constraints. The experimental results show that CatBoost indeed predicts better on datasets with more categorical columns, whereas LightGBM tends to rely on numerical columns for prediction. In training time, LightGBM is consistently faster than CatBoost.
Abstract (English): On small-to-medium-sized datasets, Gradient Boosting Decision Tree (GBDT) methods have proven effective in industry, academia, and competitions. This thesis compares the two most widely used GBDT packages, LightGBM and CatBoost, and investigates the reasons behind their performance differences. To make the comparison fair and consistent, we designed an experiment based on the characteristics of typical real-world datasets and selected raw datasets that fit its constraints. The experimental results indicate that CatBoost does perform better when the dataset has more categorical columns, while LightGBM tends to rely on numerical columns for prediction. In training speed, LightGBM was faster than CatBoost in every case.
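The categorical-handling difference the abstract describes stems from CatBoost's built-in "ordered target statistics" encoding, which LightGBM does not perform. Below is a minimal pure-Python sketch of that idea, not the thesis's actual code and not CatBoost's API; the function name and the `prior`/`smoothing` values are illustrative choices. Each row's category is replaced by a smoothed target mean computed only from rows seen earlier, which avoids leaking a row's own label into its encoding.

```python
def ordered_target_encode(categories, targets, prior=0.5, smoothing=1.0):
    """Encode one categorical column against a binary target using
    ordered (prefix-only) target statistics, CatBoost-style."""
    sums, counts = {}, {}   # running per-category target sum and count
    encoded = []
    for cat, y in zip(categories, targets):
        s = sums.get(cat, 0.0)
        n = counts.get(cat, 0)
        # Smoothed mean over ONLY previously seen rows of this category;
        # the prior dominates when the category has few prior occurrences.
        encoded.append((s + smoothing * prior) / (n + smoothing))
        sums[cat] = s + y
        counts[cat] = n + 1
    return encoded

cats = ["a", "b", "a", "a", "b"]
ys   = [1,   0,   1,   0,   1]
print(ordered_target_encode(cats, ys))
# → [0.5, 0.5, 0.75, 0.8333333333333334, 0.25]
```

In production CatBoost averages such statistics over several random permutations of the rows; a single fixed order, as here, would make early rows systematically noisier.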
Oral Defense Committee Certification #
Acknowledgements i
Chinese Abstract ii
ABSTRACT iii
CONTENTS iv
LIST OF FIGURES vii
LIST OF TABLES viii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Objective 2
1.3 Organization of thesis 2
1.4 Limitations 3
Chapter 2 Related Work 4
2.1 Boosting Methods 4
2.2 Categorical Encoding 6
Chapter 3 Research Methodology 8
3.1 Research flow 8
3.2 Experimental Design and Performance metrics 9
3.2.1 Control Variables 9
3.2.2 Evaluation Metrics 10
3.3 Datasets 11
3.3.1 Titanic: Machine Learning from Disaster[17] 11
3.3.2 Cat in the Dat: Categorical Feature Encoding Challenge[18] 12
3.3.3 Bank Marketing UCI[19] 13
3.3.4 E-Sun Bank Fraud Detection[20] 14
3.3.5 Data Preprocessing 16
3.3.6 Hyperparameters 17
Chapter 4 Results of our experimental design 19
4.1 Titanic: Machine Learning from Disaster 19
4.1.1 LightGBM (Self-made train(0.75)/test(0.25) on initial training sets) 19
4.1.2 CatBoost (Self-made train(0.75)/test(0.25) on initial training sets) 20
4.2 Cat in the Dat: Categorical Feature Encoding Challenge 21
4.2.1 LightGBM(Kaggle) 21
4.2.2 CatBoost(Kaggle) 22
4.3 Bank Marketing UCI 22
4.3.1 LightGBM(Kaggle) 23
4.3.2 CatBoost(Kaggle) 23
4.4 E-Sun Bank Fraud Detection 24
4.4.1 LightGBM(Self-made train(0.75)/test(0.25) on initial training sets) 25
4.4.2 CatBoost(Self-made train(0.75)/test(0.25) on initial training sets) 25
4.5 Summary (AUC & Private Score) 26
4.5.1 Training speed 26
4.5.2 Performance 27
Chapter 5 Conclusion 30
5.1.1 Summary: 30
5.1.2 Contribution: 31
5.1.3 Limits: 31
5.1.4 Future studies: 32
REFERENCE 33
[1] G. Ke, Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, Q. Ye, and T.-Y. Liu, "LightGBM: A Highly Efficient Gradient Boosting Decision Tree," Advances in Neural Information Processing Systems, vol. 30, pp. 3149-3157, 2017.
[2] A. V. Dorogush, V. Ershov, and A. Gulin, "CatBoost: gradient boosting with categorical features support," NIPS, pp. 1-7, 2017.
[3] J. Friedman, "Greedy function approximation: a gradient boosting machine," Annals of Statistics, 29(5): pp. 1189-1232, 2001.
[4] J. Friedman, "Stochastic gradient boosting," Computational Statistics & Data Analysis, 38(4): pp. 367-378, 2002.
[5] T. Chen and C. Guestrin, "XGBoost: A scalable tree boosting system," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785-794, ACM, 2016.
[6] T. Duan, A. Avati, D. Y. Ding, S. Basu, A. Y. Ng, and A. Schuler, "NGBoost: Natural gradient boosting for probabilistic prediction," arXiv preprint arXiv:1910.03225, 2019.
[7] S. Tyree, K. Q. Weinberger, K. Agrawal, and J. Paykin, "Parallel boosted regression trees for web search ranking," in Proceedings of the 20th International Conference on World Wide Web, pp. 387-396, ACM, 2011.
[8] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, et al., "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, 12(Oct): pp. 2825-2830, 2011.
[9] G. Ridgeway, "Generalized boosted models: A guide to the gbm package," 2007. Retrieved from https://cran.r-project.org/web/packages/gbm/vignettes/gbm.pdf
[10] E. A. Daoud, "Comparison between XGBoost, LightGBM and CatBoost Using a Home Credit Dataset," International Journal of Computer and Information Engineering, 13(1), pp. 6-10, 2019.
[11] K. Potdar, T. Pardawala, and C. Pai, "A Comparative Study of Categorical Variable Encoding Techniques for Neural Network Classifiers," International Journal of Computer Applications, 2017.
[12] R. E. Bellman, Dynamic Programming, Princeton University Press, 1957.
[13] R. Longadge and S. Dongre, "Class Imbalance Problem in Data Mining: Review," arXiv preprint arXiv:1305.1707, 2013.
[14] C. X. Ling, J. Huang, and H. Zhang, "AUC: a statistically consistent and more discriminating measure than accuracy," in Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI), 2003.
[15] Microsoft, "Advanced Topics," LightGBM documentation, April 2020. https://lightgbm.readthedocs.io/en/latest/Advanced-Topics.html
[16] F. E. Harrell Jr. and T. Cason (1994). Titanic: Machine Learning from Disaster. Retrieved March 20, 2020 from https://www.kaggle.com/c/titanic/data
[17] Kaggle (April 2015). Titanic: Machine Learning from Disaster. Retrieved March 2020 from https://www.kaggle.com/c/titanic/data
[18] Kaggle (August 2019). Categorical Feature Encoding Challenge. Retrieved March 2020 from https://www.kaggle.com/c/cat-in-the-dat/data
[19] S. Moro, P. Cortez, and P. Rita, "A Data-Driven Approach to Predict the Success of Bank Telemarketing," Decision Support Systems, Elsevier, 62: pp. 22-31, June 2014. Retrieved March 2020 from https://www.kaggle.com/c/bank-marketing-uci/data
[20] E-Sun Bank (玉山銀行). Credit Card Fraud Detection Challenge, September 2019. Retrieved September 2019 from https://tbrain.trendmicro.com.tw/Competitions/Details/10