
臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 吳旻樺
Author (English): Min-Hua Wu
Title (Chinese): 應用平滑樣條及支持向量機迴歸於銀行客戶之信用風險預測和分群
Title (English): Application of Smoothing Spline and Support Vector Machine Regression for Prediction and Clustering of Consumer Credit Risk
Advisors: 蔣明晃, 郭瑞祥
Advisors (English): Ming-Huang Chiang, Ruey-Shan Guo
Degree: Master's
Institution: National Taiwan University (國立臺灣大學)
Department: Graduate Institute of Business Administration (商學研究所)
Discipline: Business and Management
Field: General Business
Thesis Type: Academic thesis
Publication Year: 2007
Academic Year of Graduation: 96
Language: English
Pages: 69
Keywords (Chinese): 平滑樣條, 支持向量機迴歸, 分群, 信用風險
Keywords (English): Smoothing Spline, Support Vector Machine Regression, Clustering, Consumer Credit Risk
Statistics:
  • Cited: 5
  • Views: 284
  • Downloads: 0
  • Bookmarked: 1
The purpose of this research is to predict the credit risk of banking customers and to segment customers accordingly. Credit risk management matters in two ways: it maintains a bank's credit rating and averts the failure risk that a cash shortage would trigger, and, more proactively, it improves the efficiency of capital allocation so as to maximize profit. Accurately estimating credit risk is therefore the central issue of this research. This research also emphasizes the importance of segmenting customers by credit risk, because the quality of the segmentation affects the credit risk estimate for the portfolio as a whole; at the same time, banks can apply different marketing strategies to different customer segments to increase profit.
To predict customer credit risk, this research proposes two nonlinear curve-fitting models, the smoothing spline and support vector machine regression (SVR), to identify the patterns in two time series for each customer group, the monthly probability of default and the exposure, and to use those patterns to predict the default probability and exposure of new customers. Both methods yield smaller prediction errors than traditional polynomial regression, and SVR in turn predicts better than the smoothing spline. The difference stems mainly from how the two methods measure error: the smoothing spline minimizes squared error, whereas SVR uses absolute error and tolerates deviations within a small band.
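As an illustration of the two curve-fitting approaches the abstract describes, the sketch below fits a noisy monthly series with a cubic smoothing spline (SciPy) and with ε-insensitive SVR (scikit-learn), and compares both against a cubic polynomial baseline. The data and parameter values are invented for illustration; the thesis's actual parameter-selection procedures are those of Sections 3.3.1 and 3.3.2.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.svm import SVR

# Synthetic "monthly default probability" series (illustrative only).
rng = np.random.default_rng(0)
months = np.arange(1, 37, dtype=float)           # months on book
pd_true = 0.02 + 0.01 * np.exp(-months / 12.0)   # hypothetical vintage curve
pd_obs = pd_true + rng.normal(0.0, 0.002, months.size)

# Smoothing spline: the parameter s trades squared-error fit against smoothness;
# here it is set to roughly the expected total squared noise.
spline = UnivariateSpline(months, pd_obs, k=3, s=months.size * 0.002**2)
pd_spline = spline(months)

# SVR with an RBF kernel: the epsilon-insensitive loss ignores residuals
# smaller than epsilon, the "small band" the abstract mentions.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.001, gamma=0.1)
svr.fit(months.reshape(-1, 1), pd_obs)
pd_svr = svr.predict(months.reshape(-1, 1))

# Degree-3 polynomial baseline, as in the comparison with polynomial regression.
pd_poly = np.polyval(np.polyfit(months, pd_obs, 3), months)

for name, fit in [("spline", pd_spline), ("svr", pd_svr), ("poly", pd_poly)]:
    print(name, np.mean((fit - pd_obs) ** 2))
```

The same fitted objects can then be evaluated on future months to produce the out-of-sample predictions the thesis compares.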
For customer segmentation, traditional methods can cluster on only a single time series. This research combines k-means clustering with the two curve-fitting models above, so that the fitted monthly default-probability and exposure curves can be clustered jointly. Because clustering performance is measured here by squared error, the smoothing spline clusters slightly better than SVR.
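The joint-clustering idea can be sketched by evaluating each customer's fitted PD and exposure curves on a common monthly grid, scaling the two series so their very different magnitudes carry comparable weight, concatenating them into one feature vector per customer, and running k-means on those vectors. All data and parameter choices below are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_customers, n_months = 200, 24

# Hypothetical fitted curves: one row per customer, one column per month.
pd_curves = 0.02 + 0.01 * rng.random((n_customers, 1)) * np.exp(
    -np.arange(n_months) / 12.0
)
ead_curves = 1000.0 * (1.0 + rng.random((n_customers, 1))) * np.ones((1, n_months))

# Scale each series so PD (hundredths) and exposure (thousands) contribute
# equally, then concatenate the two curves into a single feature vector.
features = np.hstack([
    StandardScaler().fit_transform(pd_curves),
    StandardScaler().fit_transform(ead_curves),
])

# Standard k-means on the joint curve representation.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
print(np.bincount(km.labels_))
```

Because k-means minimizes within-cluster squared distances, a fit that itself minimizes squared error (the smoothing spline) aligns naturally with this criterion, which is the intuition behind the clustering result reported above.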
This research focuses on forecasting and clustering retail-banking customers by their credit risk metrics, namely probability of default (PD) and exposure at default (EAD). Forecasting PD and exposure is critical in credit risk management: credit losses must be estimated correctly for banks to allocate funds efficiently, avoid running out of cash, and keep a bank's credit rating at its target level.
The importance of customer segmentation is also emphasized. Segmentation affects the credit risk measurement of the portfolio, and banks can increase profit by applying different marketing strategies to different customer segments.
Two nonlinear curve-fitting models, the smoothing spline and support vector machine regression (SVR), are used to identify patterns of customer behavior; customers are then clustered by those patterns.
We show that both the smoothing spline and SVR capture the patterns of PD and EAD curves better than polynomial regression does. When predicting a new vintage, however, SVR outperforms the smoothing spline because it tolerates a small deviation. We also modify the k-means clustering method to cluster customers on the fitted PD and EAD curves jointly, and find that the smoothing spline clusters better than SVR.
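The abstract attributes SVR's edge in prediction to its tolerance of small deviations, in contrast to the spline's squared-error criterion. A minimal sketch of the two loss functions (residual and epsilon values are illustrative):

```python
import numpy as np

def squared_loss(residual):
    """Penalty used in the smoothing-spline fitting criterion."""
    return residual ** 2

def eps_insensitive_loss(residual, eps=0.001):
    """SVR penalty: residuals inside the epsilon tube cost nothing."""
    return np.maximum(np.abs(residual) - eps, 0.0)

residuals = np.array([-0.003, -0.0005, 0.0, 0.0008, 0.004])
print(squared_loss(residuals))
print(eps_insensitive_loss(residuals))
```

Residuals inside the tube (here, the middle three values) incur zero SVR loss, so small noise does not pull the fitted curve around, while the squared loss penalizes every deviation.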
Abstract ii
Chinese Abstract iii
List of Figures vi
List of Tables viii
1 Introduction 1
1.1 Motivation 2
1.2 Objectives 4
1.3 Thesis Framework 5
2 Literature Review 6
2.1 Credit Risk Management in Retail Banking 6
2.1.1 Credit Card Customers Management 7
2.2 Univariate Curve Fitting Models 9
2.2.1 Splines 9
2.2.2 Support Vector Machine Regression 12
2.2.3 Summary 15
2.3 Univariate Time Series Clustering Models 15
3 Research Methods 17
3.1 Problem Statement and Research Framework 17
3.2 Assumptions 19
3.3 Curve Fitting 20
3.3.1 Parameter Selection of Smoothing Spline 20
3.3.2 Parameter Selection of SVR 21
3.4 Prediction 22
3.5 Scaling 22
3.6 Clustering 23
3.7 Performance Measurement 25
4 Results and Discussion 26
4.1 Data 26
4.2 Assumption Check 31
4.3 Cross Validation 35
4.4 Prediction Results and Discussion 38
4.5 Clustering Results and Discussion 41
5 Conclusion 43
A The Prediction Results 45
B The Clustering Results 63
Reference 68
[1] Vladimir Cherkassky and Yunqian Ma. Practical selection of SVM parameters and noise estimation for SVM regression. Neural Networks, 2004.
[2] Abel Elizalde. Credit risk model I: Default correlation in intensity models. MSc in Financial Mathematics, King's College London, 2003.
[3] Abel Elizalde. Credit risk model II: Structural models. MSc in Financial Mathematics, King's College London, 2005.
[4] David J. Hand. Modelling consumer credit risk. IMA Journal of Management Mathematics, 2001.
[5] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer, 2001.
[6] Rob J. Hyndman, Maxwell L. King, Ivet Pitrun, and Baki Billah. Local linear forecasts using cubic smoothing splines. Australian and New Zealand Journal of Statistics, 2005.
[7] Gareth M. James and Catherine A. Sugar. Clustering for sparsely sampled functional data. Journal of the American Statistical Association, 2003.
[8] Chih-Jen Lin. A guide to support vector machines. 2004.
[9] P. Ma, C. I. Castillo-Davis, W. Zhong, and J. S. Liu. A data-driven clustering method for time course gene expression data. Nucleic Acids Research, 2006.
[10] Christopher Marrison. The Fundamentals of Risk Measurement. McGraw-Hill, 2002.
[11] Michael P. Perrone and Scott D. Connell. K-means clustering for hidden Markov models. Proceedings of IWFHR VII, 2000.
[12] Lawrence R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 1989.
[13] Alex J. Smola and Bernhard Schölkopf. A tutorial on support vector regression. Statistics and Computing, 2004.
[14] L. C. Thomas, R. W. Oliver, and D. J. Hand. A survey of the issues in consumer credit modelling research. Journal of the Operational Research Society, 2005.