Researcher: 韓明峰
Researcher (English): Ming-Feng Han
Thesis title: 具有不確定參數C之模糊支持向量機
Thesis title (English): Fuzzy Support Vector Machines with the Uncertainty of Parameter C
Advisor: 鍾鴻源
Advisor (English): Hung-Yuan Chung
Degree: Master's
Institution: 國立中央大學 (National Central University)
Department: 電機工程研究所 (Graduate Institute of Electrical Engineering)
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Document type: Academic thesis
Year of publication: 2008
Graduating academic year: 96 (2007-2008)
Language: English
Number of pages: 60
Keywords (Chinese): 模糊理論; 支持向量機; 不確定性
Keywords (English): Fuzzy set; Support Vector Machines; Uncertainty
Usage statistics:
  • Cited by: 0
  • Views: 140
  • Downloads: 0
  • Bookmarked: 0
In pattern recognition, one often hopes to uncover the regularities and features hidden in raw, unorganized data in order to support a classifier's decisions. The aim of this thesis is therefore to mine useful features from the fuzziness among training samples so as to improve the performance of Support Vector Machines (SVMs). In the theoretical construction, the training samples are assumed to be drawn from two Gaussian density functions that exhibit class overlap, and a class-overlap distribution with fuzzy character can then be determined from the intersection of the two probability density functions. The fuzzy membership function of each training sample is built from the properties of the support vectors. For example, training samples that fall inside the margin are support vectors that typically occur in the centre of the class-overlap region; because they contribute most to establishing the decision boundary, the membership construction assigns them larger weights. Conversely, support vectors that fall outside the margin typically lie far from the intersection of the overlapping classes; because they are training errors and contribute little to the decision boundary, they receive smaller membership values.
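A minimal one-dimensional sketch of this construction, assuming the two class-conditional densities are known Gaussians (the function name and the toy means and standard deviations below are illustrative, not taken from the thesis): the overlap is read as the pointwise minimum of the two densities, so membership peaks at their intersection and decays for samples farther from it.

```python
import numpy as np
from scipy.stats import norm

def overlap_membership(x, mu_a, sd_a, mu_b, sd_b):
    """Fuzzy membership from the intersection of two class densities.

    The overlap is the pointwise minimum of the two Gaussian pdfs, so
    membership is largest at the density intersection (the overlap
    centre) and decays for samples farther from it.
    """
    overlap = np.minimum(norm.pdf(x, mu_a, sd_a),
                         norm.pdf(x, mu_b, sd_b))
    return overlap / overlap.max()  # scale memberships into (0, 1]

# Toy example: two overlapping classes centred at -1 and +1
x = np.linspace(-4.0, 4.0, 9)
s = overlap_membership(x, mu_a=-1.0, sd_a=1.0, mu_b=1.0, sd_b=1.0)
```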


In practice, in the design of the SVM objective function, this thesis combines the membership function with the parameter C to define a fuzzy-penalizing parameter, so that the differing contribution of each training sample balances the margin width against the training error, yielding a new and efficient Fuzzy Support Vector Machine (FSVM). To verify the classifier, four real-world classification problems from the UCI repository are solved. Experiment 1 compares the method with the traditional SVM; the results show that the FSVM performs better and is a worthwhile classifier. Experiment 2 compares different membership functions; the results confirm that the membership construction presented here is the more feasible and more objective method.
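The resulting objective follows the standard fuzzy-SVM form of Lin and Wang [4]; as a sketch, with membership s_i ∈ (0, 1] attached to each training pair (x_i, y_i):

```latex
\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}} \;
  \frac{1}{2}\lVert \mathbf{w} \rVert^{2} + C \sum_{i=1}^{N} s_{i}\, \xi_{i}
\qquad \text{s.t.} \quad
  y_{i}\bigl(\mathbf{w}^{\top}\phi(\mathbf{x}_{i}) + b\bigr) \ge 1 - \xi_{i},
  \quad \xi_{i} \ge 0,\; i = 1, \dots, N
```

Here s_i C plays the role of the fuzzy-penalizing parameter: a sample with small membership incurs its training error at low cost, while a sample near the overlap centre (large s_i) is penalized almost as strongly as in the ordinary SVM with parameter C.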
In typical pattern recognition applications, there is usually only vague and general knowledge about the situation, and an optimal classifier is hard to develop when the decision function lacks sufficient knowledge. The aim of our experiments is to extract features through an appropriate transformation of the training data set. In this thesis, we assume that the training samples are drawn from Gaussian distributions and that the data set is imprecise, for example because the classes overlap. The overlap can be represented by fuzzy sets, so a fuzzy membership can be created from the properties of the class overlap. For example, training data close to the decision boundary can be treated as Support Vectors (SVs) in the centre of the class overlap and given a higher degree of fuzzy membership, because these points contribute more to the decision boundary. Conversely, training data farther from the decision boundary can be treated as SVs outside the margin and given a lower degree of fuzzy membership. In Support Vector Machines (SVMs), we define a fuzzy-penalizing parameter to balance the margin width against the training error.

Finally, a powerful learning classifier is obtained: Fuzzy Support Vector Machines with the Uncertainty of Parameter C (FSVMs-UPC). To verify this classifier, Experiment 1 compares the proposed method with the traditional SVM on four real-world classification problems from the UCI repository. The results show that the proposed FSVMs-UPC is superior to the traditional SVM in both testing accuracy and stability. Experiment 2 shows that our membership generation method, which concentrates on the class overlap, produces a more feasible and better membership.
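The thesis itself contains no code, but a per-sample penalty of the form s_i·C can be exercised in practice with scikit-learn, whose SVC.fit accepts a sample_weight argument that rescales C for each training sample. A hedged sketch, with synthetic data and random memberships standing in for the thesis's UCI sets and overlap-based memberships:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for a UCI data set: two overlapping Gaussian classes
X = np.vstack([rng.normal(-1.0, 1.0, size=(50, 2)),
               rng.normal(1.0, 1.0, size=(50, 2))])
y = np.hstack([np.zeros(50), np.ones(50)])

# Placeholder memberships s_i in (0, 1]; the thesis derives them from
# the class-overlap construction rather than drawing them at random.
s = rng.uniform(0.1, 1.0, size=len(X))

clf = SVC(C=10.0, kernel="rbf", gamma="scale")
clf.fit(X, y, sample_weight=s)  # effective per-sample penalty is s_i * C
print(f"training accuracy: {clf.score(X, y):.3f}")
```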
Abstract
Contents
List of Figures
List of Tables
Chapter 1 Introduction
  1.1 Background
  1.2 Purpose and Motivation
  1.3 Contribution
  1.4 Organization
Chapter 2 Support Vector Machines
  2.1 Linear Support Vector Machines
    2.1.1 A Separable Case
    2.1.2 A Non-Separable Case
  2.2 Nonlinear Support Vector Machines
    2.2.1 Kernels
    2.2.2 Nonlinear Model
  2.3 Learning Curves
Chapter 3 Fuzzy Support Vector Machines
  3.1 Fuzzy Theory in Training Data
  3.2 Formulation of FSVMs
    3.2.1 FSVM Framework
  3.3 Creation of a Fuzzy Membership
    3.3.1 Fuzzy Membership Focus on One Class
    3.3.2 Fuzzy Membership Focus on Overlapping
Chapter 4 Experimental Results and Discussion
  4.1 Data Sets
  4.2 Experiment 1
    4.2.1 Training Phase
    4.2.2 Testing Phase
  4.3 Experiment 2
    4.3.1 Training Phase
    4.3.2 Testing Phase
Chapter 5 Conclusions and Recommendations
References
Appendix I Data Set
List of Publications
[1] V. N. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.
[2] V. N. Vapnik, "An Overview of Statistical Learning Theory," IEEE Transactions on Neural Networks, Vol. 10, pp. 988-999, 1999.
[3] V. N. Vapnik, Statistical Learning Theory, Wiley, New York, 1998.
[4] C. F. Lin and S. D. Wang, "Fuzzy Support Vector Machines," IEEE Transactions on Neural Networks, Vol. 13, No. 2, 2002.
[5] D. M. J. Tax and R. P. W. Duin, "Characterizing One-Class Datasets," in Proceedings of the 16th Annual Symposium of the Pattern Recognition Association of South Africa, pp. 21-26, 2005.
[6] R. C. Prati, G. E. A. P. A. Batista, and M. C. Monard, "Class Imbalances versus Class Overlapping: An Analysis of a Learning System Behavior," in Mexican International Conference on Artificial Intelligence, pp. 312-321, 2004.
[7] B. Schölkopf, P. Simard, A. Smola, and V. Vapnik, "Prior Knowledge in Support Vector Kernels," in M. Jordan, M. Kearns, and S. Solla, eds., Advances in Neural Information Processing Systems 10, MIT Press, pp. 640-646, 1998.
[8] L. A. Zadeh, "Fuzzy Sets," Information and Control, Vol. 8, pp. 338-353, 1965.
[9] Y. Wang, S. Wang, and K. K. Lai, "A New Fuzzy Support Vector Machine to Evaluate Credit Risk," IEEE Transactions on Fuzzy Systems, Vol. 13, No. 6, 2005.
[10] I. Guyon, N. Matic, and V. N. Vapnik, Discovering Informative Patterns and Data Cleaning, MIT Press, Cambridge, MA, 1996.
[11] X. Zhang, "Using Class-Center Vectors to Build Support Vector Machines," in Proceedings of the IEEE Workshop on Neural Networks for Signal Processing, pp. 3-11, 1999.
[12] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer-Verlag, New York, 2001.
[13] K. K. Lee, S. R. Gunn, C. J. Harris, and P. A. S. Reed, "Classification of Unbalanced Data with Transparent Kernels," in Proceedings of the International Joint Conference on Neural Networks, Vol. 4, p. 2445, 2001.
[14] A. T. Quang, Q.-L. Zhang, and X. Li, "Evolving Support Vector Machine Parameters," in Proceedings of the International Conference on Machine Learning and Cybernetics, Vol. 1, p. 548, 2002.
[15] L. Breiman, Bias, Variance and Arcing Classifiers, Technical Report 460, Statistics Department, University of California, Berkeley, CA, 1996.
[16] P. M. Murphy, UCI Benchmark Repository of Artificial and Real Data Sets, http://www.ics.uci.edu/~mlearn, University of California, Irvine, CA, 1995.
[17] P. Vlachos and M. Meyer, StatLib Biomed Data, http://lib.stat.cmu.edu/, Department of Statistics, Carnegie Mellon University, 1989.