Graduate Student: 蔡明倫
Graduate Student (English): Ming Lan Tsai
Title: 基於適應性學習之目標演化於智慧型代理人
Title (English): Goal Evolution based on Adaptive Q-learning for Intelligent Agent
Advisors: 許見章、郭忠義
Advisors (English): Chien-Chang Hsu, Jong-Yih Kuo
Degree: Master's
Institution: 輔仁大學 (Fu Jen Catholic University)
Department: Department of Computer Science and Information Engineering (資訊工程學系)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document Type: Academic thesis
Year of Publication: 2006
Graduation Academic Year: 94 (2005-06)
Language: Chinese
Pages: 44
Keywords (Chinese): 智慧型代理人、適應性Q學習、BDI模型
Keywords (English): Intelligent Agent, adaptive Q-learning, BDI model
Cited by: 0
Views: 131
Downloads: 0
Bookmarks: 0
Abstract:
This thesis presents an adaptive learning approach to the goal evolution of intelligent agents. When an agent is initially created, it has some goals and only a few capabilities, each composed of one or more actions; the agent performs these actions to satisfy its goals and must strive to adapt with only its limited capabilities. Reinforcement learning is used to evolve the agent's goals, and An Abstract Agent Programming Language (3APL) is introduced to build the agent's mental states. We propose using reinforcement learning to refine the top-level goals. A robot soccer game illustrates the approach: we show how a refinement of a soccer player's mental state is derived from the goals evolved by reinforcement learning.
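The full text of the thesis is not part of this record, so the following is only a minimal sketch of the kind of tabular Q-learning update the abstract describes. The soccer actions (kick, dribble, pass), the epsilon-greedy policy, and the visit-count-based adaptive learning rate are illustrative assumptions, not the author's implementation.

import random
from collections import defaultdict

ACTIONS = ["kick", "dribble", "pass"]  # hypothetical soccer-agent actions
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration rate

q_table = defaultdict(float)  # (state, action) -> estimated value
visits = defaultdict(int)    # (state, action) -> number of updates so far

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning step. The learning rate decays with the visit
    count (alpha = 1 / (1 + visits)), a common 'adaptive' choice."""
    visits[(state, action)] += 1
    alpha = 1.0 / (1.0 + visits[(state, action)])
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += alpha * (
        reward + GAMMA * best_next - q_table[(state, action)]
    )

In the thesis's setting, values learned this way would presumably bias which top-level goal the 3APL deliberation cycle pursues next; that mapping is likewise an assumption based on the abstract, not a detail given in this record.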
Chapter 1 Introduction
1.1 MOTIVATION
1.2 OBJECTIVE
1.3 ORGANIZATION
Chapter 2 Related Work
2.1 ROBOT SOCCER LEARNING
2.2 ROBOT SOCCER STRATEGY
Chapter 3 Agent Evolution
3.1 AGENT EVOLUTION MODEL
3.1.1 Agent Domain Knowledge
3.1.2 Agent Rule Bases
3.2 AGENT EVOLUTION PROCESS
Chapter 4 Case Study
4.1 SYSTEM DESIGN
4.1.1 SYSTEM ENVIRONMENT
4.1.1.1 Hardware Specification and Configuration
4.1.1.2 Software Specification and Configuration
4.1.1.3 Operational Environment
4.2 CASE STUDY
Chapter 5 Experiment Results
5.1 EXPERIMENT
5.2 DISCUSSION
Chapter 6 Conclusion
References
[1] A. Bonarini, “Evolutionary learning, reinforcement learning, and fuzzy rules for knowledge acquisition in agent-based systems”, Proceedings of the IEEE, 2001, Vol. 89, Issue 9, pp. 1334-1346.
[2] B. van Riemsdijk, M. Dastani, F. Dignum, and J.-J. Ch. Meyer, “Dynamics of Declarative Goals in Agent Programming”, Proceedings of the Workshop on Declarative Agent Languages and Technologies (DALT), New York, 2004.
[3] C. Castillo, M. Lurgi, and I. Martinez, “Chimps: an evolutionary reinforcement learning approach for soccer agents”, IEEE International Conference on Systems, Man and Cybernetics, 2003, Vol. 1, pp. 60-65.
[4] E. Alonso, M. D’Inverno, D. Kudenko, M. Luck, and J. Noble, “Learning in Multi-Agent Systems”, The Knowledge Engineering Review, 2001, Vol. 16, No. 3, pp. 277-284.
[5] M. Dastani, F. Dignum, and J.-J. Meyer, “Autonomy and Agent Deliberation”, Proceedings of the First International Workshop on Computational Autonomy - Potential, Risks, Solutions, Melbourne, 2003.
[6] M. Dastani, B. van Riemsdijk, F. Dignum, and J.-J. Meyer, “A Programming Language for Cognitive Agents: Goal Directed 3APL”, Proceedings of the First Workshop on Programming Multiagent Systems: Languages, Frameworks, Techniques, and Tools, Melbourne, 2003.
[7] M. Dastani and L. van der Torre, “Programming BOID Agents: a deliberation language for conflicts between mental attitudes and plans”, in N. R. Jennings, C. Sierra, L. Sonenberg, and M. Tambe (eds.), Proceedings of the Third International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS'04), ACM, 2004, pp. 706-713.
[8] M. Dastani, J. Hulstijn, F. Dignum, and J.-J. Ch. Meyer, “Issues in Multiagent System Development”, in N. R. Jennings, C. Sierra, L. Sonenberg, and M. Tambe (eds.), Proceedings of the Third International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS'04), ACM, 2004, pp. 922-929.
[9] M. D'Inverno, K. Hindriks, and M. Luck, “A Formal Architecture for the 3APL Agent Programming Language”, in ZB2000, Lecture Notes in Computer Science, Springer, 2000, pp. 168-187.
[10] E. Gelenbe, E. Seref, and Z. Xu, “Simulation with learning agents”, Proceedings of the IEEE, 2001, Vol. 89, Issue 2, pp. 148-157.
[11] J. Hulstijn, F. de Boer, M. Dastani, F. Dignum, M. Kroese, and J.-J. Meyer, “Agent-based Programming in 3APL”, presented at the ICS Researchday, Conferentiecentrum Woudschoten, The Netherlands, 2003.
[12] K. S. Hwang, S. W. Tan, and C. C. Chen, “Cooperative strategy based on adaptive Q-learning for robot soccer systems”, IEEE Transactions on Fuzzy Systems, 2004, Vol. 12, Issue 4, pp. 569-576.
[13] S. Kinoshita and Y. Yamamoto, “Team 11monkeys Description”, in Coradeschi et al., editors, RoboCup-99: Team Descriptions, 1999, pp. 154-156.
[14] J. Y. Kuo, “A document-driven agent-based approach for business processes management”, Information and Software Technology, 2004, Vol. 46, pp. 373-382.
[15] J. Y. Kuo, S. J. Lee, C. L. Wu, N. L. Hsueh, and J. Lee, “Evolutionary Agents for Intelligent Transport Systems”, International Journal of Fuzzy Systems, 2005, Vol. 7, No. 2, pp. 85-93.
[16] Y. Maeda, “Modified Q-learning method with fuzzy state division and adaptive rewards”, Proceedings of the IEEE World Congress on Computational Intelligence, FUZZ-IEEE 2002, Vol. 2, pp. 1556-1561.
[17] T. Nakashima, M. Takatani, M. Udo, and H. Ishibuchi, “An evolutionary approach for strategy learning in RoboCup soccer systems”, IEEE International Conference on Systems, Man and Cybernetics, 2004, Vol. 2, pp. 2023-2028.
[18] S. Shen, G. M. P. O'Hare, and R. Collier, “Decision-making of BDI agents, a fuzzy approach”, The Fourth International Conference on Computer and Information Technology, 2004, pp. 1022-1027.
[19] M. Wooldridge and N. Jennings, “Agent theories, architectures and languages: a survey”, Lecture Notes in Artificial Intelligence, Vol. 890, 1995, pp. 1-39.
[20] C. J. C. H. Watkins, “Automatic learning of efficient behaviour”, First IEE International Conference on Artificial Neural Networks, 1989, No. 313, pp. 395-398.
[21] T. Yamaguchi and R. Marukawa, “Interactive Multiagent Reinforcement Learning with Motivation Rules”, Proceedings of the 4th International Conference on Computational Intelligence and Multimedia Applications, 2001, pp. 128-132.
[22] J. Y. Kuo, M. L. Tsai, and N. L. Hsueh, “Goal Evolution based on Adaptive Q-learning for Intelligent Agent”, IEEE International Conference on Systems, Man and Cybernetics, Taipei, Taiwan, 2006.
[23] M. Yoshinaga, Y. Nakamura, and E. Suzuki, “Mini-Car-Soccer as a Testbed for Granular Computing”, IEEE International Conference on Granular Computing, 2005, Vol. 1, pp. 92-97.
[24] Y. Sato and T. Kanno, “Event-driven hybrid learning classifier systems for online soccer games”, The 2005 IEEE Congress on Evolutionary Computation, 2005, Vol. 3, pp. 2091-2098.
[25] K. Wickramaratna, M. Chen, S. C. Chen, and M. L. Shyu, “Neural network based framework for goal event detection in soccer videos”, Seventh IEEE International Symposium on Multimedia, 2005.
[26] S. Hirano and S. Tsumoto, “Grouping of soccer game records by multiscale comparison technique and rough clustering”, Fifth International Conference on Hybrid Intelligent Systems, 2005.
[27] D. Barrios-Aranibar and P. J. Alsina, “Recognizing behaviors patterns in a micro robot soccer game”, Fifth International Conference on Hybrid Intelligent Systems, 2005.
[28] B. R. Liu, Y. Xie, Y. M. Yang, Y. M. Xia, and Z. Z. Qiu, “A Self-Localization Method with Monocular Vision for Autonomous Soccer Robot”, IEEE International Conference on Industrial Technology (ICIT 2005), 2005, pp. 888-892.