Author: Jui-Chih Tseng (曾瑞智)
Title: 應用資料探勘技術建構整合型目標客戶選擇模式
Title (English): A Hybrid Data Mining Approach to Construct the Target Customers Choice Reference Model
Advisors: Huei-Huang Chen, Shih-Chih Chen
Committee Members: Huei-Huang Chen, Shih-Chih Chen
Oral Defense Date: 2013-07-05
Degree: Master's
University: Tatung University (大同大學)
Department: Department of Information Management (資訊經營學系)
Discipline: Business and Management
Field: General Business
Document Type: Academic thesis
Year of Publication: 2013
Graduation Academic Year: 101
Language: Chinese
Pages: 62
Keywords (Chinese): 類神經網路、K-Means演算法、支援向量機、資料探勘
Keywords (English): Neural Network, Data Mining, Support Vector Machine, K-Means
Usage statistics:
  • Cited by: 14
  • Views: 637
  • Downloads: 201
  • Bookmarked: 0
Marketing is the most common business activity of every enterprise. Marketing that increases customer loyalty, and in turn uncovers potential customers for greater profit, is very important. To maximize marketing returns under limited resources, the ability to accurately select target customers is highly valuable to an enterprise. It is therefore necessary to build a target customer selection model that is fast, objective, and improves accuracy.
Using data mining techniques to identify target customers is a common approach, but past studies focused only on finding a classification model with high accuracy. Different data mining methods suit different situations, and no single classifier always produces the best classification results. This study therefore applies data mining techniques to propose an integrated target customer selection model, combining Support Vector Machines (SVM), Neural Networks (NN), and the K-Means clustering algorithm into a two-phase analysis model, aiming to improve accuracy while reducing Type I and Type II errors at the same time. A case analysis shows that accuracy improved considerably and that both Type I and Type II errors decreased.
Marketing, the prevailing commercial activity of enterprises, is an important strategy for increasing customer loyalty and discovering potential customers for greater profit. To maximize profit with limited resources, it pays for enterprises to choose the right target customers. Therefore, it is necessary to build an efficient, objective, and accurate target customer choice model.
Using data mining techniques to find target customers is a traditional approach. However, past research mainly focused on finding a high-accuracy classifier, yet different classifiers perform differently in different situations. This study therefore proposes an integrated target customer choice model, combining a support vector machine, a neural network, and the K-Means algorithm into a two-phase analysis model. The model is expected to enhance classification accuracy and reduce Type I and Type II errors at the same time. The research results indicate that the integrated model is effective in simultaneously enhancing classification accuracy and reducing Type I and Type II errors.
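The abstract does not spell out how the three techniques are combined in the two-phase model, so the following is only an illustrative sketch of one plausible hybrid in the spirit described: phase one trains SVM and NN classifiers, and phase two uses K-Means cluster majority labels to break ties where the two classifiers disagree. The dataset and all parameter choices are hypothetical, not taken from the thesis.

```python
# Illustrative sketch (NOT the thesis's exact procedure) of a two-phase
# hybrid classifier: SVM + NN in phase one, K-Means tie-breaking in
# phase two. Synthetic data stands in for the case-study dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.cluster import KMeans

# Hypothetical two-class customer dataset.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Phase 1: train the two single classifiers.
svm = SVC().fit(X_tr, y_tr)
nn = MLPClassifier(max_iter=1000, random_state=0).fit(X_tr, y_tr)

# Phase 2: map each K-Means cluster to its majority class on training data.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_tr)
cluster_to_class = {c: int(np.bincount(y_tr[km.labels_ == c]).argmax())
                    for c in range(2)}

p_svm, p_nn = svm.predict(X_te), nn.predict(X_te)
p_km = np.array([cluster_to_class[c] for c in km.predict(X_te)])

# Keep the agreed label; let the cluster label decide where SVM and NN disagree.
hybrid = np.where(p_svm == p_nn, p_svm, p_km)
acc = (hybrid == y_te).mean()
```

The agreement-based combination is one common way to merge classifiers; the thesis may well weight or sequence the models differently, and Type I/II error rates would be read off the resulting confusion matrix.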
Abstract i
Chinese Abstract ii
Acknowledgements iii
Table of Contents iv
List of Tables vi
List of Figures vii
Chapter 1 Introduction 1
1.1 Research Background and Motivation 1
1.2 Research Objectives 3
1.3 Research Process 4
Chapter 2 Literature Review 6
2.1 Data Mining 6
2.1.1 Definition of Data Mining 6
2.1.2 Functions of Data Mining 7
2.1.3 Data Mining Process 9
2.1.4 Data Mining Tools 10
2.2 Support Vector Machine 11
2.2.1 Basic Principles 11
2.2.2 Kernel Functions 13
2.3 Neural Network 14
2.3.1 Basic Principles 14
2.3.2 Backpropagation Network 16
2.4 Decision Tree C5.0 18
2.4.1 Basic Principles 18
2.4.2 Boosting 20
2.5 K-Means Algorithm 21
2.5.1 Basic Principles 21
Chapter 3 Research Method 23
3.1 Single Classification Models 25
3.1.1 Support Vector Machine (SVM) 25
3.1.2 Neural Network (NN) 27
3.1.3 Decision Tree C5.0 30
3.2 Integrated Classification Model 31
Chapter 4 Case Analysis 34
4.1 Single Classification Models 34
4.1.1 Support Vector Machine 35
4.1.2 Neural Network 37
4.1.3 Decision Tree C5.0 40
4.1.4 Analysis of Single Classification Models 41
4.2 Integrated Classification Model 42
4.2.1 Analysis of the Integrated Classification Model 47
Chapter 5 Conclusions and Suggestions 49
5.1 Research Conclusions 49
5.2 Research Suggestions 50
5.3 Research Limitations 50
References 51