National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 古祐嘉
Author (English): Yu-Jia Gu
Title: 適應性K最近鄰演算法
Title (English): Adaptive K-Nearest Neighbor Algorithm
Advisor: 林志麟
Degree: Master's
Institution: Yuan Ze University
Department: Department of Information Management
Discipline: Computing
Academic field: General computing
Document type: Academic thesis
Publication year: 2009
Graduation academic year: 97 (ROC calendar, 2008-09)
Language: Chinese
Pages: 32
Keywords (Chinese): K最近鄰演算法, 區域的K最近鄰演算法, 模糊C平均分群演算法, 網格, 密度
Keywords (English): KNN, Local KNN, Fuzzy C-means, Grid, Density
Usage statistics:
  • Cited by: 1
  • Views: 925
  • Rating:
  • Downloads: 0
  • Bookmarked: 0
Abstract (Chinese, translated):
The traditional K-nearest-neighbor (KNN) classification algorithm uses a single fixed value of K: a test record is assigned to the class that wins the vote among its K nearest neighbors. However, related studies have shown that a variable K can improve KNN's classification performance. This study therefore incorporates the concepts of Local KNN and Fuzzy C-means membership degrees into the KNN classifier, so that each individual test record uses a K value better suited to itself, thereby improving the overall classification result.
Abstract (English):
The K-nearest-neighbor algorithm traditionally predicts the class of a record from the decision of the record's K nearest neighbors, for a fixed value of K. However, recent studies have shown that using different K values for different records can improve prediction accuracy. This study integrates the Fuzzy C-means algorithm to help determine a proper K value for each record in a local KNN algorithm. Performance results show that the proposed method outperforms the traditional KNN in terms of prediction accuracy.
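The abstracts describe the method only at a high level: cluster structure obtained from Fuzzy C-means is used to choose a per-record K before an ordinary nearest-neighbor vote. The Python sketch below illustrates that general idea under stated assumptions; the function names fuzzy_c_means and adaptive_knn_predict, the parameters k_min, k_max, and n_clusters, and the rule mapping membership degrees to K (ambiguous points get a larger K) are hypothetical choices for illustration, not the procedure from the thesis, which also involves Local KNN and grid density and is not reproduced in this record.

# Minimal, self-contained sketch (NumPy only) of choosing a per-query K for
# KNN classification from Fuzzy C-means membership degrees. The membership-to-K
# rule and all parameter names are hypothetical, not the thesis's actual method.
import numpy as np


def fuzzy_c_means(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    """Standard Fuzzy C-means: returns cluster centers and the n-by-c membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]            # membership-weighted centers
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = dist ** (-2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)                  # standard FCM membership update
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U


def adaptive_knn_predict(X_train, y_train, X_test, k_min=3, k_max=15, n_clusters=3):
    """Classify each test point with a K chosen from its Fuzzy C-means membership degrees."""
    centers, _ = fuzzy_c_means(X_train, n_clusters)
    # Membership of each test point in each cluster (FCM formula with m = 2).
    d_c = np.linalg.norm(X_test[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    U_test = d_c ** -2.0
    U_test /= U_test.sum(axis=1, keepdims=True)

    preds = []
    for x, u in zip(X_test, U_test):
        # Hypothetical rule: clear cluster membership -> small K, ambiguous membership -> large K.
        ambiguity = 1.0 - u.max()
        k = int(round(k_min + ambiguity * (k_max - k_min)))
        k = max(1, min(k, len(X_train)))
        neighbors = y_train[np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]]
        labels, counts = np.unique(neighbors, return_counts=True)
        preds.append(labels[np.argmax(counts)])                    # majority vote among the K neighbors
    return np.array(preds)

Tying K to how ambiguous a point's strongest cluster membership is, is only one plausible reading of "a K value better suited to each record"; the thesis's own Local KNN and grid-density procedure may differ.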
Title page ........................ i
Oral defense committee approval ... ii
Authorization letter .............. iii
Abstract (Chinese) ................ iv
Abstract (English) ................ v
Acknowledgements .................. vi
Table of contents ................. vii
List of tables .................... ix
List of figures ................... x
Chapter 1  Introduction ........... 1
  1.1  Research background ........ 1
  1.2  Research motivation ........ 1
  1.3  Research objectives ........ 2
  1.4  Thesis organization ........ 2
Chapter 2  Literature review ...... 3
  2.1  Data mining ................ 3
  2.2  Unsupervised and supervised learning ... 4
  2.3  K-means algorithm .......... 5
  2.4  Fuzzy C-means clustering algorithm ..... 5
  2.5  Grid and density ........... 8
  2.6  K-nearest neighbor algorithm ........... 11
  2.7  Local KNN algorithm ........ 12
  2.8  Classification performance evaluation .. 13
Chapter 3  Research method ........ 15
  3.1  Research method ............ 15
  3.2  Research procedure ......... 23
Chapter 4  Experimental results ... 24
  4.1  Experimental data .......... 24
  4.2  Experimental design ........ 24
  4.3  Experimental results ....... 24
Chapter 5  Conclusions and future work ....... 29
  5.1  Conclusions ................ 29
  5.2  Future work ................ 30
References ........................ 31