National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Detailed Record

Author: 蔡賢亮
Author (English): Hsien-Leing Tsai
Title: 監督式類神經網路自動建構演算法及應用
Title (English): Automatic Construction Algorithms for Supervised Neural Networks and Applications
Advisor: 李錫智
Advisor (English): Shie-Jue Lee
Degree: Ph.D.
Institution: National Sun Yat-sen University (國立中山大學)
Department: Department of Electrical Engineering (電機工程學系研究所)
Discipline: Engineering
Field of study: Electrical and Information Engineering
Document type: Academic thesis
Year of publication: 2004
Graduation academic year: 92 (2003-2004)
Language: Chinese
Pages: 135
Keywords (Chinese): 動態時間校正法、決策樹、學習法則、資訊熵、模擬退火技術、模糊理論、影像壓縮、類神經網路
Keywords (English): information entropy, image compression, neural networks, dynamic time warping, learning rules, decision tree, simulated annealing method, fuzzy theories
Statistics:
  • Cited by: 5
  • Views: 279
  • Rating:
  • Downloads: 71
  • Saved to personal bibliography lists: 0
Neural networks have been studied for nearly six decades. Although the research stalled at times when bottlenecks were encountered, many useful neural network models and learning rules have been proposed over this period, and they have been applied widely in different fields with good results, successfully solving many problems that traditional algorithms could not solve effectively.

However, when users want to solve a problem with a neural network, they face the question of how large the network should be; that is, they must decide how many hidden layers the network needs and how many hidden neurons each hidden layer should contain. Determining a suitable network is a genuinely difficult and important task, because the size of a neural network strongly affects its efficiency and quality, and only a properly sized network can solve problems efficiently.

Our first research goal is therefore to propose better methods for determining suitable neural networks, and in the course of this research we developed a series of solutions. We first proposed building neural networks from decision trees, which removed the burden of choosing an architecture and also remedied slow learning, but the method handles only two-class problems and builds relatively large networks. We then proposed using information entropy to remove these drawbacks; it easily constructs multi-class neural networks, but it suits only standard-form problems. Finally, we extended that method to sequential-domain and structured-domain problems, so our methods apply to a wide range of applications. We are currently carrying this research into the quantum domain, working on automatic construction algorithms for quantum neural networks.
Research on neural networks has been conducted for six decades. In this period, many neural models and learning rules have been proposed. Furthermore, they have been applied widely and successfully in many fields, solving many problems that traditional algorithms could not solve efficiently.

However, when applying multilayer neural networks, users are confronted with the problem of determining the number of hidden layers and the number of hidden neurons in each hidden layer. It is difficult for users to determine a proper neural network architecture, yet it matters greatly, because the architecture critically influences performance. Problems can be solved efficiently only with a proper architecture.

To overcome this difficulty, several approaches have recently been proposed to generate neural network architectures automatically, but they still have drawbacks. The goal of our research is to find better approaches for automatically determining proper neural network architectures, and this thesis proposes a series of them. First, we propose an approach based on decision trees. It successfully determines network architectures and greatly decreases learning time, but it can deal only with two-class problems and it generates larger architectures. Next, we propose an information-entropy-based approach that overcomes these drawbacks; it easily generates multi-class neural networks for standard-domain problems. Finally, we extend the method to sequential-domain and structured-domain problems, so our approaches can be applied in many settings. Currently, we are working toward quantum neural networks.
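
As a rough illustration of the entropy-based construction idea (a sketch of our own, not the exact procedure of Chapter 3; every function and parameter name below is hypothetical), a candidate hyperplane w.x + b = 0 can be scored by the weighted class entropy of the two half-spaces it induces, H = sum over sides s of (n_s/n) * (-sum over classes k of p_{s,k} * log2 p_{s,k}); the hidden neuron realizing the lowest-entropy split is then added to the network:

import numpy as np

def split_entropy(X, y, w, b):
    """Weighted class entropy of the two half-spaces induced by the
    hyperplane w.x + b = 0; low values mean each side is nearly pure."""
    side = (X @ w + b) >= 0
    n = len(y)
    total = 0.0
    for mask in (side, ~side):
        m = int(mask.sum())
        if m == 0:
            continue
        _, counts = np.unique(y[mask], return_counts=True)
        p = counts / m
        total += (m / n) * float(-(p * np.log2(p)).sum())
    return total

def pick_hidden_neuron(X, y, n_candidates=500, seed=0):
    """Return the candidate hyperplane with minimum split entropy.
    Random search merely stands in for whatever optimizer is used
    to minimize the entropy measure in practice."""
    rng = np.random.default_rng(seed)
    best = (np.inf, None, None)
    for _ in range(n_candidates):
        w = rng.normal(size=X.shape[1])
        b = rng.normal()
        h = split_entropy(X, y, w, b)
        if h < best[0]:
            best = (h, w, b)
    return best  # (entropy, weights, bias)

In the thesis itself, separate entropy functions guide the hidden and output neurons and a difference rule tunes the weights; the sketch above only conveys why minimizing class entropy yields hidden neurons, and hence the architecture, automatically.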

We are also interested in ART neural networks, which are likewise incremental neural models, and we apply them to digital signal processing. We present a character recognition application, a spoken word recognition application, and an image compression application, all of which perform well.
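
The ART models used in Part II share the adaptive resonance loop: an input is compared with the stored prototypes, adopted by the best-matching category only if it passes a vigilance test, and otherwise opens a new category, so the network grows itself to fit the data. A minimal ART-1-style sketch for binary patterns (the vigilance value, the fast-learning intersection update, and all names are illustrative assumptions, not the exact networks of Chapters 5 to 7):

import numpy as np

def art1_cluster(patterns, vigilance=0.8):
    """One-pass ART-1-style clustering of binary patterns.
    New prototypes are created only when no existing category
    resonates, so the architecture is determined automatically."""
    prototypes = []                 # boolean prototype vectors
    assignments = []
    for x in patterns:
        x = np.asarray(x, dtype=bool)
        # visit prototypes from strongest to weakest overlap with x
        order = sorted(range(len(prototypes)),
                       key=lambda j: -int((x & prototypes[j]).sum()))
        chosen = None
        for j in order:
            overlap = int((x & prototypes[j]).sum())
            # vigilance test: the match must cover enough of the input
            if overlap / max(int(x.sum()), 1) >= vigilance:
                prototypes[j] = x & prototypes[j]   # fast learning
                chosen = j
                break
        if chosen is None:          # no resonance: create a category
            prototypes.append(x.copy())
            chosen = len(prototypes) - 1
        assignments.append(chosen)
    return prototypes, assignments

Raising the vigilance forces finer categories (more prototypes); lowering it yields coarser ones. The same grow-on-mismatch behavior is what makes the character, spoken word, and image compression networks of Chapters 5 to 7 incremental.
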
Chinese Abstract 2
Abstract 3

Chapter 1  Motivation and Introduction
1.1 Introduction to Neural Networks 10
1.2 Research Motivation 11
1.3 Research Goals and Results 14
1.4 Organization of the Thesis 15


Part I  Automatic Construction Algorithms for Multi-layer Perceptron Neural Networks

Chapter 2  Constructing Neural Networks from Decision Trees
2.1 Decision Trees 17
2.2 Extracting Logical Descriptions 20
2.3 Constructing Threshold Networks 20
2.3.1 Threshold Logic 21
2.3.2 Computing Threshold Values 22
2.4 Constructing Neural Networks 25
2.4.1 Initializing the Neural Network 25
2.4.2 The Final Neural Network 28
2.5 Experiments 28
2.6 Discussion and Conclusions 31

Chapter 3  Constructing Multi-class Neural Networks with Information Entropy
3.1 Finding Hyperplanes with Information Entropy Measures 34
3.1.1 Entropy Functions for Hidden Neurons 35
3.1.2 Entropy Functions for Output Neurons 37
3.1.3 Discussion of the Entropy Measures 40
3.2 The Difference Rule 41
3.2.1 The Difference Rule for Hidden Neurons 41
3.2.2 The Difference Rule for Output Neurons 44
3.3 Network Construction Procedure 46
3.4 Experimental Results 47
3.5 Discussion and Conclusions 51

Chapter 4  Entropy-Based Construction of Neural Networks for Structured Patterns
4.1 Generalized Recursive Neurons 54
4.2 Information Entropy Measures 57
4.2.1 Entropy Functions for Hidden Neurons 58
4.2.2 Entropy Functions for Output Neurons 59
4.3 The Generalized Difference Rule 60
4.3.1 Network Architecture 61
4.3.2 The Difference Rule for Hidden Neurons 62
4.3.3 The Difference Rule for Output Neurons 65
4.3.4 An Illustrative Example 67
4.4 Network Construction Procedure 68
4.5 The Improved Algorithm 72
4.6 Experimental Results 74
4.6.1 Experiment 1: Thyroid Classification 74
4.6.2 Experiment 2: Spoken English Word Recognition 75
4.6.3 Experiment 3: Recognition of Similar Chinese Characters 76
4.6.4 Experiment 4: Classification of Chemical Structural Formulas 80
4.6.5 Experiment 5: Simulated Annealing 81
4.6.6 Experiment 6: Recognition of Similar Chinese Characters with Noise 82
4.7 Discussion and Conclusions 83


Part II  Applications of Automatically Constructed ART (Adaptive Resonance Theory) Neural Networks in Signal Processing

Chapter 5  A Feature Recognition Neural Network with Embedded Pattern Fusion for Character Recognition
5.1 Network Architecture 85
5.2 Network Construction and Training Procedure 88
5.3 Recognition Procedure 91
5.4 An Example 92
5.5 Experimental Results 94
5.5.1 Experiments with Screened Training Patterns 94
5.5.2 Experiments with Unscreened Training Patterns 96
5.6 Discussion and Conclusions 99

Chapter 6  An ART Neural Network Architecture for Speech Recognition
6.1 Introduction to Dynamic Time Warping (DTW) Speech Recognition 100
6.2 Our Approach 101
6.2.1 Network Architecture 102
6.2.2 Learning Algorithm 104
6.2.3 Recognition Algorithm 106
6.3 Experimental Results 106
6.4 Discussion and Conclusions 107

Chapter 7  An ART Neural Network Architecture for Image Compression
7.1 Introduction to Image Compression Techniques 109
7.1.1 Non-residual Compression 110
7.1.2 Residual Compression 110
7.1.3 Evaluating Compression Quality 112
7.2 Our Neural Image Compression Algorithm 112
7.2.1 Network Architecture 112
7.2.2 Learning Algorithm 114
7.2.3 Compression Algorithm 115
7.2.4 Decompression Algorithm 116
7.3 Experimental Results 116
7.3.1 Experiment 1: Non-residual Compression 116
7.3.2 Experiment 2: Residual Compression 118
7.3.3 Experiment 3: Compressing Other Images with the Lena Codebook 119
7.4 Discussion and Conclusions 121

Chapter 8  Conclusions and Future Research Directions 123

References 125

Glossary of Terms 134
[1].A. G. Parlos, K. T. Chong and A. F. Atiya. “Application of the recurrent multilayer perceptron in modeling complex process dynamics.”IEEE Transactions on Neural Networks, 5(2):255--266, 1994.
[2].A. Gersho. “On the structure of vector quantizers.” IEEE Transactions on Information Theory, 28(2):157--166, 1982.
[3].A. M. Bianucci and A. Micheli and A. Sperduti and A. Starita. “Application of cascade correlation networks for structures to Chemistry,”Journal of Applied Intelligence, 12(1/2):117--146, 2000.
[4].A. M. Noll. “Cepstrum pitch determination.”J. Acoust. Soc. Amer., 42(2):293--309, 1967.
[5].A. R. Webb, “Functional approximation by feed-forward network: a least square approach to generalization,” IEEE Transactions on Neural Networks, 5(3):363—371, 1994.
[6].A. Sperduti and A. Starita. “Supervised neural networks for classification of structures.”IEEE Transactions on Neural Networks, 8(3):714--735, May, 1997.
[7].A. Sperduti. “Encoding of labeled graphs by labeling RAAM.” In Advances in Neural Information Processing Systems, J. D. Cowan, G. Tesauro, and J. Alspector, Eds. San Mateo, CA: Morgan Kaufmann, 6:1125--1132, 1994.
[8].A. Weigend. “An overfitting and the effective number of hidden units.”In Proceedings of the 1993 Connectionist Models Summer School, pp. 335--342, July 1994.
[9].B. Hussain and M. R. Kabuka. “A novel feature recognition neural network and its application to character recognition.”IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(1):98--106, January 1994.
[10].B. S. Atal and M. R. Schroeder. “Predictive coding of speech signals.”Proceeding of 6th Int. Cong. Acoust., 5(4), 1968.
[11].B. S. Atal and S. L. Hanauer. “Speech analysis and synthesis by linear prediction of the speech wave,”J. Acoust. Soc. Am., 50:637--665, 1971.
[12].C. Sung and D. Wilson.“Percognitron: Neocognitron coupled with perceptron.”Int. Joint Conf. Neural Net, 3:753--758, June 1990.
[13].C.-S. Ouyang, H.-L. Tsai, and S.-J. Lee, "Knowledge Acquisition from Input-Output Data by Fuzzy Neural Systems," Proceedings of IEEE International Conference on Systems, Man, and Cybernetics, pp. 1928-1933, San Diego, California, USA, 1998.
[14].D. DeSieno. “Adding a conscience to competitive learning.”IEEE International Conference on Neural Networks, 1:117--124, 1988.
[15].D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning internal representations by error propagation,” in Rumelhart, D.E. and McClelland, J.L. eds., Parallel Distributed Processing, MIT Press, London, England, 1986.
[16].D. E. Rumelhart, G. E. Hinton, and R. J. Williams. “Learning representations by back-propagating errors.” Nature, 323:533--536, 1986.
[17].D. F. Shanno,“Recent advances in numerical techniques for large-scale optimization,” in Neural Networks for Control, MIT Press, Cambridge, MA, 1990.
[18].D. H. Ackley, G. E. Hinton, and T. J. Sejnowski. “A learning algorithm for Boltzmann machines,” In J. A. Anderson and E. Rosenfeld, editors, Neurocomputing, MIT Press, Cambridge, MA, pp. 638—650, 1985.
[19].D. H. Hubel and T. N. Wiesel. “Receptive fields and functional architecture in two nonstriate visual area of the cat.”Neurophysiology, 28:229--289, 1965.
[20].D. H. Hubel and T. N. Wiesel. “Receptive fields, binocular interaction and functional architecture in cat's visual cortex.” Physiology, 160:106--154, Jan. 1962.
[21].D. L. Ostapko, and S. S. Yau,“Realization of an arbitrary switching function with a two-level network of threshold and parity elements,” IEEE Transactions on Computers, 19:262—269, 1970.
[22].D. W. Patterson, Introduction to Artificial Intelligence and Expert Systems, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1990.
[23].E. Behrman, L. Nash, J. Steck, V. Chandrashekar, and S. Skinner, Simulations of Quantum Neural networks, Information Sciences, 128(3-4):257-269, October 2000.
[24].E. Rich, and K. Knight, Artificial Intelligence, McGraw-Hill, NY, 1991.
[25].F. Rosenblatt, Principles of Neurodynamics, New York: Spartan, 1962.
[26].F. A. W. Lodewyk, and B. Etienne, “Avoiding false local minima by proper initialization of connections,” IEEE Transactions on Neural Network, 3:899—905, 1992.
[27].F. M. Ham and I. Kostanic.“Principles of Neurocomputing for Science and Engineering.”McGraw-Hill International, Singapore, 2000.
[28].G. A. Carpenter and S. Grossberg. “The ART of adaptive pattern recognition by a self-organizing neural network,”Computer, 21:77--88, March 1988.
[29].G. A. Carpenter and S. Grossberg. “The ART of adaptive pattern recognition by a self-organizing neural network.”Computer, pp. 35--41, March, 1988.
[30].G. A. Carpenter, S. Grossberg, and J. H. Reynolds. “ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network.”Neural Networks, 4:565--588, 1991.
[31].G. A. Carpenter, S. Grossberg, N. Markuzon, J. H. Reynold, and D. B. Rosen. ”Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps.”IEEE Transactions on Neural Networks, 3(5):698--713, September 1992.
[32].R. G. Gallager. “Information Theory and Reliable Communication.” Wiley, New York, 1968.
[33].G. E. Hinton, and J. J. Sejnowski,“Learning and relearning in boltzmann machines,” Parallel Distributed Processing, 1:282—317, 1986.
[34].G. Tontini and A. A. de Queiroz. “RBF FUZZY-ARTMAP: a new fuzzy neural network for robust on-line learning and identification of patterns.”In IEEE International Conference on Systems, Man and Cybernetics, 2:1364--1369, 1996.
[35].G. W. Cottrell and P. Munro. “Principal components analysis of images via back propagation.”SPIE Vol. 10011, Visual Communication and Image Processing, pp. 1070--1077, 1988.
[36].G. W. Cottrell, P. Munro and D. Zipser. “Image compression by back propagation: an example of extensional programming.”ICS Report 8702, Institute for Cognitive Science, University of California, San Diego, 1987.
[37].H. H. Chang and H. Yang. “Analysis of stroke structures of handwritten Chinese characters,” IEEE Transactions on Systems, Man, and Cybernetics, 29(1):47--61, 1999.
[38].H. J. Kim, J. W. Jung, and S. K. Kim. “On-line Chinese character recognition using ART-based stroke classification.”Pattern Recognition Letters, 17(12):1311--1322, Oct. 1996.
[39].H. J. Zimmermann. “Fuzzy Set Theory and its Applications.”Kluwer Academic Publishers, second edition, 1991.
[40].H. M. Lee, and C. C. Hsu, “A neural network training algorithm with the topology generation ability for the classification problem,” The International Journal of Neural Networks, 3:3—16, 1992.
[41].H. Sakoe and M. Watari. “Clockwise propagating dp-matching algorithm for word recognition.”Trans. Committee on Speech Research, Acoust. Soc. Jap., pp. S81--65, 1981.
[42].H. Sakoe and S. Chiba. “Dynamic programming algorithm optimization for spoken word recognition.”IEEE Transaction on Acoustic, Speech, Signal processing, 26(1):43--49, 1978.
[43].H. Sakoe and S. Chiba. “Recognition of continuously spoken words based on time-normalization by dynamic programming.”J. Acoust. Soc. Jap., 27(9):483--500, 1971.
[44].H. Szu and R. Hartley, “Fast simulated annealing,”Physics letters A, 122:157--162, 1987.
[45].H.-L Tsai, S.-H Sun, and S.-J. Lee, "Image Compression Using ART-Based Neural Networks," Proceedings of National Computer Symposium, 1:B163-B168, Taichung, Taiwan, 1997.
[46].H.-L. Tsai and S.-J. Lee, "A Fuzzy Feature Recognition Neural Networks for Character Recognition," Proceedings of International Symposium on Artificial Neural Networks, pp. 174-179, Tainan, Taiwan, ROC, 1994.
[47].H.-L. Tsai and S.-J. Lee, "A Neural Network Model for Isolated Word Recognition," Proceedings of IEEE International Conference on Systems, Man, and Cybernetics, 4:4029-4034, Orlando, Florida, USA, 1997.
[48].H.-L. Tsai and S.-J. Lee, "An Improved Neural Algorithm for Automatic Test Pattern Generation," Proceedings of International Symposium on Artificial Neural Networks, pp. A2.07-A2.12, Hsinchu, Taiwan, 1995.
[49].H.-L. Tsai and S.-J. Lee, "Construction of Neural Networks on Structured Domain," Proceedings of the 5th International Conference on Computer Science and Informatics, Atlantic City, NJ, USA, 2000.
[50].H.-L. Tsai and S.-J. Lee, "Construction of Neural Networks on Structured Domains," Proceedings of 9th International Conference on Neural Information Processing, 1:50-54, Singapore, 2002.
[51].H.-L. Tsai and S.-J. Lee, “Entropy-based generation of supervised neural networks for classification of structured patterns,” IEEE Transactions on Neural Networks, 15(2):283--297, 2004.
[52].H.-S. Park and S.-W. Lee. “Off-line recognition of large-set handwritten characters with multiple hidden Markov models.”Pattern Recognition, 29(2):231--244, Feb. 1996.
[53].J. A. Freeman, and D. M. Skapura, Neural Networks, Addison Wesley, 1991.
[54].J. Cheng, U. M. Fayyad, K. B. Irani, and Z. Qian,“Improved decision trees: A generalized version of ID3,” in Proceeding of the Fifth International Conference on Machine Learning, pp. 100—108, 1988.
[55].J. D. Markel and A. H. Gray Jr. “On autocorrelation equations as applied to speech analysis.”IEEE Transactions on Audio and Electroacoustics, AU-21:69—79, 1973.
[56].J. H. Holland. “Adaptive algorithms for discovering and using general patterns in growing knowledge bases.”International Journal of Policy Analysis and Information System, 4:217--240, 1980.
[57].J. Hu, M. K. Brown, and W. Turin. “HMM based on-line handwriting recognition.”IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(10):1039--1044, Oct. 1996.
[58].J. J. Hopfield and D. W. Tank. “Neural computation of decision in optimization problems.”Biological Cybernetics, 52:141--152, 1985.
[59].J. L. McClelland and D. E. Rumehart. “Parallel Distributed Processing (Two Volumes).”MIT Press, Cambridge, MA, 1986.
[60].J. Makhoul. “Stable and efficient lattice methods for linear prediction.”IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-25(5):423--428, 1977.
[61].J. McMurry. “Organic Chemistry, 5th ed.,”Brooks/Cole, 2000.
[62].J. P. Nadal. “New algorithms for feedforward networks,”In Theumann and Koberle, editors, Neural Networks and SPIN Classes, pp. 80--88. World Scientific, 1989.
[63].J. R. Quinlan. “Induction of decision trees,”Machine Learning, 1(1):81--106, 1986.
[64].J. W. Shavlik, and G. G. Towell, “An approach to combining explanation-based and neural learning algorithms,” Connection Science, 1(3):828—838, 1989.
[65].J.-W. Lin, S.-J. Lee and H.-T. Yang.“A stroke-based neuro-fuzzy system for handwritten Chinese character recognition.”Applied Artificial Intelligence, 15(6):561--586, 2001.
[66].K. Binder. “Monte Carlo methods in statistical physics,”Springer, New York, 1978.
[67].K. Fukushima and N. Wake. “Handwritten alphanumeric character recognition by the neocognitron.”IEEE Transaction on Neural Networks, 2(3), May 1991.
[68].K. Fukushima and S. Miyake. “Neocognitron: A neural network model for a mechanism of visual pattern recognition.”IEEE Transactions on Systems, Man and Cybernetics, SMC-13(5):826--834, Sept. 1983.
[69].K. Fukushima and S. Miyake. “Neocognitron: A new algorithm for pattern recognition tolerant of deformations and shifts in positions.”Pattern Recognition, 15(6):455--469, 1982.
[70].K. J. Cios and N. Liu. “A machine learning method for generation of a neural network architecture: A continuous ID3 algorithm,”IEEE Transactions on Neural Networks, 3:280--290, 1992.
[71].K. J. Lang and G. E. Hinton. “The development of the time-delay neural network architecture for speech recognition.”Technical Report CMU-CS-88-152, Carnegie Mellon University, Pittsburgh, PA. 1988.
[72].K. Lang and M. J. Witbrock. “Learning to tell two spirals apart,”In Proceedings of Connectionist Models Summer School, pp. 52--59, 1988.
[73].K. P. Chan and Y. S. Cheung. “Fuzzy-attribute graph with application to Chinese character recognition,”IEEE Transactions on Systems, Man, and Cybernetics, 22:153--160, 1992.
[74].K. S. Fu. “Syntactic Methods in Pattern Recognition,”New York: Academic Press, 1974.
[75].K. S. Fu. Syntactical Pattern Recognition and Applications,”Englewood Cliffs, NJ: Prentice-Hall 1982.
[76].K. Tutschku. “Recurrent multilayer perceptrons for identification and control: The road to application.”Research Report Series, University of Würzburg, Germany, 1995.
[77].L. B. Almedida. “A learning rule for asynchronous perceptrons with feedback in a combinatorial environment,”In Proc. IEEE Int. Conf. Neural Networks, pp. 609--618, New York, 1987.
[78].L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. “Classification and Regression Trees,”Wadsworth, CA, 1984.
[79].L. Fu, Neural Networks in Computer Intelligence, McGraw-Hill, NY, 1994.
[80].L. Wessels, E. Barnard, and E. ven Rooyen, “The physical correlates of local minima,” in Proc. Int. Neural Networks Conf., Paris, France, pp. 985, 1990.
[81].L. Wessels, E. Barnard, and E. ven Rooyen. “The physical correlates of local minima,”In Proc. Int. Neural Networks Conf., Paris, France, pp. 985--992, July 1990.
[82].L. Zadeh. “Fuzzy sets.”Information and Control, 8:338--353, 1965.
[83].M. A. Abidi, S. Yasuki and P. B. Crilly. “Image compression using hybrid neural networks combining the auto-associative multi-layer perceptron and the self-organizing feature map.”IEEE Transactions on Consumer Electronics, 40(4):796--811, November, 1994.
[84].M. Bichsel and P. Seitz. “Minimum class entropy: A maximum information approach to layered networks,”Neural Networks, 2:133--141, 1989.
[85].M. Bichsel, and P. Seitz, “Minimum class entropy: A maximum information approach to layered networks,” Neural Networks, 2:133—141, 1989.
[86].M. Mohamed and P. Gader. “Handwritten word recognition using segmentation-free hidden Markov modeling and segmentation-based dynamic programming techniques.”IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(5):548--553, May 1996.
[87].M. Mougeot, R. Azencott, and B. Angeniol. “Image compression with back propagation: improvement of the visual restoration different cost functions.”Neural Networks, 4:467--476, 1991.
[88].M. Smith. “Neural Networks for Statistical Modeling.”Boston: International Thomson Computer Press, 1996.
[89].M. V. Altaisky, “Quantum neural network,” Technical report, http://arxiv.org/PS_cache/quant-ph/pdf/0107/010712.pdf, 2001.
[90].M.-T. Jone, “Construction of Neural Networks for Supervised Learning,” Master's thesis, National Sun Yat-Sen University, Taiwan, 1994.
[91].N. J. Nilsson, “Learning Machines: Foundations of Trainable Pattern Classifying Systems,”McGraw-Hill, New York, 1965.
[92].N. M. Nasrabadi and R. A. Feng. “Vector quantization of images based upon Kohonen self-organizing feature map.” IEEE International Conference on Neural Networks, 1:101--108, 1988.
[93].N. Markuzon, J. H. Reynold, G. A. Carpenter, S. Grossberg and D. B. Rosen. “Fuzzy artmap: A neural network architecture for incremental supervised learning of analog multidimensional maps.”IEEE Transactions on Neural Networks, 3(5):698--713, September 1992.
[94].N. Sonehara, M. Kawato, S. Miyake, and K. Nakane. “Image data compression using a neural network model.”Proceedings of IJCNN, Washington D.C., pp. 35--41, 1989.
[95].P. Clark, and T. Niblett, “The CN2 induction algorithm,” Machine Learning, 3:261—284, 1989.
[96].P. D. Wasserman, “A combined backpropagation/Cauchy machine network,”Journal of Neural Network Computing, pp. 34-40, Winter 1990.
[97].P. H. Winston, Artificial Intelligence, Addison-Wesley, Reading, MA, 1992.
[98].P. J. M. Laarhoven and E. H. L. Aarts. “Simulated annealing: theory and applications,”D. Reidel Publishing Company, 1987.
[99].R. P. Lippmann, “An introduction to computing with neural nets,” IEEE Acoustics, Speech and Signal Processing Magazine, 4(2):4--22, 1987.
[100].P. M. Murphy and D. W. Aha, “UCI repository of machine learning databases,”Dept. Inform. Comput. Sci., Univ. Calif., Irvine.
[101].P. W. Frey and D. J. Slate. “Letter recognition using Holland-type adaptive classifiers.”Machine Learning, 6:161--182, 1991.
[102].P. W. Shor, "Scheme for Reducing Decoherence in Quantum Computer Memory", Physical Review A, 52:2493-2496, 1995.
[103].R. A. Fisher,“The use of multiple measurements in taxonomic problems,” Annual Eugenics, 7:179—188, 1936.
[104].R. Bellman. “Dynamic programming.”Princeton Univ. Press, New Jersey, 1957.
[105].R. C. Gonzalez and M. G. Thomason. “Syntactical Pattern Recognition,” Reading, MA: Addison Wesley, 1978.
[106].R. Hecht-Nielsen, “Neurocomputing,” Addison Wesley, 1990.
[107].R. Hecht-Nielsen. “Counterpropagation networks.”IEEE International Conference on Neural Networks, 2:19--32, 1987.
[108].R. J. Williams and D. Zipser. “A learning algorithm for continually running fully recurrent neural networks.”Neural Computation, 1:270--280, 1989.
[109].R. J. Williams and J. Peng. “An efficient gradient-based algorithm for on-line training of recurrent network trajectories.”Neural Computation, 2:490--501, 1990.
[110].R. M. Goodman, C. M. Higgins, and J. W. Miller, “Rule-based neural networks for classification and probability estimation,” Neural Computation, 4:781—804, 1992.
[111].R. P. Feynman, Quantum Mechanical Computers, Found. Phys. 16:507-531, 1986.
[112].S. E. Fahlman and C. Lebiere. “The cascade-correlation learning architecture,”Technical Report CMU-CS-90-100, School of Computer Science, Carnegie Mellon University, 1990.
[113].S. E. Fahlman and C. Lebiere. “The cascade-correlation learning architecture,”In Advances in Neural Information Processing Systems, D. S. Toouretzky, Ed., San Mateo, CA: Morgan Kaufmann, 2:524--532, 1990.
[114].S. E. Fahlman, “The cascade-correlation architecture,” Technical Report CMU-CS-91-100, Carnegie Mellon University, Pittsburgh, PA., 1991.
[115].S. E. Fahlman, and C. Lebiere, “The cascade-correlation learning architecture,” Technical Report, Department of Computer Science, Carneige-Mellon University, Pittsburgh, PA, 1990.
[116].S. Ghosh, D. Basu, and A. K. Choudhury, “Multigate synthesis of general boolean functions by threshold logic elements,” IEEE Transactions on Computers, c-18:451—456, 1969.
[117].S. Grossberg G. A. Carpenter and J. H. Reynolds. “Artmap: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network.”Neural Networks, 4:565--588, 1991.
[118].S. Gupta and R. Zia, Quantum Neural Networks, Technical report, http://www.arxiv.org/PS_cache/quant-ph/pdf/0201/0201144.pdf, 2002.
[119].S. Lee and J. C.-J. Pan. “Unconstrained handwritten numeral recognition based on radial basis competitive and cooperative networks with spatio-temporal feature representation.”IEEE Transactions on Neural Networks, 7(2):455--474, Mar. 1996.
[120].S. W. Lee. “Off-line recognition of totally unconstrained handwritten numerals using multilayer cluster neural network.”IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(6):648--652, June 1996.
[121].S. Y. Kung and J. N. Hwang. “An algebraic projection analysis for optimal hidden units size and learning rate in back-propagation learning,”In Proc. IEEE Int. Conf. Neural Networks, 1:363--370, San Diego, July 1988.
[122].S. Y. Kung. Digital Neural Networks, Prentice Hall, International, Inc., 1993.
[123].S.-J. Lee and C.-L. Ho. “An ART-based construction of RBF networks.”IEEE Transactions on Neural Networks, 13(6):1308--1321, 2002.
[124].S.-J. Lee and H.-L. Tsai, "Pattern Fusion in Feature Recognition Neural Networks for Handwritten Character Recognition," IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, 28(4):612-617, 1998.
[125].S.-J. Lee and M.-T. Jone. “An extended procedure of constructing neural networks for supervised dichotomy.”IEEE Transactions on Systems, Man, and Cybernetics -- Part B: Cybernetics, 26(4):660--665, 1996.
[126].S.-J. Lee, M.-T. Jone, and H.-L. Tsai, "Constructing Neural Networks for Multi-Class Discretization Based on Information Entropy," IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, 29(3):445--453, 1999.
[127].S.-J. Lee, M.-T. Jone, and H.-L. Tsai, "Construction of Neural Networks from Decision Trees," Journal of Information Science and Engineering, 11(3):391-415, 1995.
[128].T. Kohonen. “Adaptive, associative, and self-organizing functions in neural Computing.”Applied Optics, 26(33):4910--4918, 1987.
[129].T. Menneer, Quantum Artificial Neural Networks, PhD thesis, University of Exeter, 1998.
[130].T. Pellizzari, S. A. Gardiner, J. I. Cirac, and P. Zoller. "Decoherence, Continuous Observation, and Quantum Computing: A Cavity QED Model," Physical Review Letters, 75:3788--3791, 1995.
[131].U. M. Fayyad, and K. B. Irani,“On the handling of continuous-valued attributes in decision tree generation,” Machine Learning, 8:87—102, 1992.
[132].W. Hastings. “Monte Carlo sampling methods using Markov chains and their application.”Biometrika, 57:97--109, 1970.
[133].W. S. Sarle. “Stopped training and other remedies for overfitting.”In Proceedings of the 27th Symposium on the Interface of Computing Science and Statistics, pp. 352--360, July 1995.
[134].X. H. Yu, “Can backpropagation error surface not have local minima,” IEEE Transactions on Neural Networks, 3(6):1019—1021, 1992.
[135].Y. H. Pao. “Adaptive Pattern Recognition and Neural Networks.”Addison-Wesley, Reading, MA, 1989.
[136].Y. L. Dae, M. K. Byung and S. C. Hyung. “A self-organized RBF network combined with ART II,”In Proceedings of IEEE International Joint Conference on Neural Networks, 3:1963--1968, 1999.
[137].Y. Li, D. Lopresti, G. Nagy, and A. Tomkins. “Validation of image defects models for optical character recognition.”IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(2):99--108, June 1996.
[138].Y. Linde, A. Buzo and R. M. Gray. “An algorithm for vector quantizer design.”IEEE Transactions on Commun., 28:84--95, 1980.
[139].Y.T. Hsu, “Learning in noisy domain with a generalized version space,” Master's thesis, National Chiao Tung University, Taiwan, 1992.
[140].Z. Chi, M. Suters, and H. Yan. “Handwritten digit recognition using combined ID3-derived fuzzy rule and Markov chains.”Pattern Recognition, 29(11):1821--1834, Nov. 1996.
[141].Z. Kohavi, Threshold Logic, McGraw-Hill Book Company, 1983.