National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 劉浩萱
Author (English): Liu, Hao-Hsuan
Title: AlphaZero演算法結合快贏策略或迫著空間實現於五子棋
Title (English): AlphaZero Algorithm Combined with Quick Win or Threat Space for Gomoku
Advisor: 林順喜
Advisor (English): Lin, Shun-Shii
Degree: Master's
Institution: National Taiwan Normal University
Department: Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2020
Graduation academic year: 108 (2019/20)
Language: Chinese
Pages: 59
Keywords (Chinese): AlphaZero, neural network, quick-win strategy, threat-space search
Keywords (English): AlphaZero, Neural Network, Quick Win, Threat-space Search
Statistics:
  • Cited by: 1
  • Views: 139
  • Downloads: 24
  • Bookmarks: 0
AlphaZero is a general reinforcement learning algorithm that, given no human knowledge beyond the game rules, achieves excellent results after training. To enable this architecture to learn the winning information needed for Gomoku early in training, this study presents the Quick Win strategy and threat-space search.
The Quick Win strategy aims to let the neural network learn the value of winning quickly: when several moves share the same win rate, it prefers the move that leads to victory fastest. Threat-space search searches the board for threats, so that the information needed to produce threatening moves can be learned by the neural network, shortening the training time.
This study uses four experimental settings, namely linear distance weighting, exponential distance weighting, combining threat search with distance weighting, and combining threat search with Monte Carlo tree search, to observe whether an AI model based on the AlphaZero design effectively improves its playing strength by choosing faster winning moves or by learning to form threats.
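The quick-win preference described above can be pictured as a simple tie-breaking rule over candidate moves. The sketch below is an illustrative reconstruction, not the thesis's actual code; the `MoveEval` structure and its fields (`win_rate`, `moves_to_win`) are assumed names for the quantities the text mentions:

```python
from dataclasses import dataclass

@dataclass
class MoveEval:
    move: tuple        # (row, col) on the Gomoku board
    win_rate: float    # estimated probability of winning after this move
    moves_to_win: int  # predicted number of plies until the win

def pick_quick_win(evals, eps=1e-3):
    """Pick the highest-win-rate move; break near-ties toward the fastest win."""
    best_rate = max(e.win_rate for e in evals)
    # moves whose win rates are (near-)equal to the best one
    tied = [e for e in evals if best_rate - e.win_rate <= eps]
    return min(tied, key=lambda e: e.moves_to_win)
```

With two near-equal moves, the one promising the quicker win is chosen; a clearly better move still wins outright regardless of its length.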
AlphaZero is a generic reinforcement learning algorithm that achieves superior results after training, given no domain knowledge except the game rules. To obtain similar results and let the neural network learn the winning information of Gomoku at the beginning of training, this thesis presents the Quick Win and threat-space search methods.
The Quick Win method aims to let the neural network learn how to win faster by choosing the fastest winning move when the available moves show the same winning probability. The threat-space search method searches for the threats arising from every move, letting the neural network learn how to create threats and thereby shorten the training period.
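As a rough illustration of what searching for threats involves in Gomoku, the sketch below finds the simplest kind of threat: an empty square where a player would complete five in a row on the next move. This is an assumption-laden reconstruction, not the thesis's implementation; the board encoding (0 = empty, 1 / -1 = the two players) is an assumption, and the thesis's threat categories are richer than this single case:

```python
# Find the empty squares that win immediately for `player` on an n x n
# Gomoku board encoded as 0 = empty, 1 / -1 = stones of the two players.
def winning_squares(board, player):
    """Return the set of empty squares that complete five in a row."""
    n = len(board)
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    wins = set()
    for r in range(n):
        for c in range(n):
            if board[r][c] != 0:
                continue  # only empty squares can be played
            for dr, dc in directions:
                # count contiguous `player` stones through (r, c)
                count = 1
                for sign in (1, -1):
                    rr, cc = r + sign * dr, c + sign * dc
                    while 0 <= rr < n and 0 <= cc < n and board[rr][cc] == player:
                        count += 1
                        rr, cc = rr + sign * dr, cc + sign * dc
                if count >= 5:
                    wins.add((r, c))
                    break  # one winning line is enough for this square
    return wins
```

A full threat-space search would extend this idea to fours, open threes, and sequences of forcing moves, but the scan above captures the pattern-matching core.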
In this thesis, we present four kinds of experiments applied to Gomoku: linear distance weight, exponential distance weight, combining threat-space search with distance weight, and combining threat-space search with Monte Carlo tree search. We observe whether the implementations based on the AlphaZero algorithm effectively enhance playing strength by choosing a faster winning move or a threatening move during the game.
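The linear and exponential distance weights can be pictured as scaling each position's value target by its distance from the end of the self-play game, so that positions close to a win carry a stronger training signal. The formulas below are a hedged sketch under that reading, not the thesis's exact definitions; `z`, `game_len`, and `gamma` are assumed parameter names:

```python
# Sketch of two ways to weight the self-play outcome z (+1 win / -1 loss)
# by each position's distance from the final position of the game.
def linear_weight(dist, total):
    # decays linearly from 1 at the final position toward 0 at the start
    return max(0.0, 1.0 - dist / total)

def exponential_weight(dist, gamma=0.95):
    # decays geometrically with distance from the final position
    return gamma ** dist

def value_targets(z, game_len, gamma=0.95):
    """Return (linear, exponential) weighted value targets per position."""
    lin = [z * linear_weight(game_len - 1 - t, game_len) for t in range(game_len)]
    exp = [z * exponential_weight(game_len - 1 - t, gamma) for t in range(game_len)]
    return lin, exp
```

Either weighting leaves the final position's target at the full outcome z while attenuating earlier positions, which is one way a network could come to prefer shorter wins.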
Chapter 1  Introduction
1.1 Research background
1.2 Research objectives
1.3 Significance of the research

Chapter 2  Literature Review
2.1 Neural networks
2.2 AlphaZero
2.3 Gomoku
2.4 Quick Win
2.5 Threat-space search

Chapter 3  Methods
3.1 Alpha-zero-general
3.2 Linearly increasing distance weight
3.3 Exponentially increasing distance weight
3.4 Combining threat-space search with distance weight
3.5 Combining threat-space search with Monte Carlo tree search

Chapter 4  Experiments and Results
4.1 Results of linearly increasing distance weight
4.2 Results of exponentially increasing distance weight
4.3 Results of combining threat-space search with distance weight
4.4 Results of combining threat-space search with Monte Carlo tree search

Chapter 5  Conclusions and Future Work

References
Appendix 1: Alpha-zero-general classes and methods