
National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文知識加值系統)


Detailed Record

Author: 劉俊源 (Chun-Yuan Liu)
Title (Chinese): 應用於幼兒機器人之適應性交配全域最佳引導式人工蜂群最佳化及Q-Learning為基礎之行為規劃演算法
Title (English): Adaptive Crossover Gbest-guided Artificial Bee Colony Optimization and Q-learning Behavior Planning Algorithm for Toddler-sized Humanoid Robot
Advisor: 李祖聖 (Tzuu-Hseng (Steve) Li)
Degree: Master's
Institution: National Cheng Kung University (國立成功大學)
Department: Department of Electrical Engineering
Discipline: Engineering
Field: Electrical and Information Engineering
Thesis Type: Academic thesis
Year of Publication: 2019
Academic Year of Graduation: 107 (2018–2019)
Language: English
Pages: 82
Keywords (Chinese): 人工蜂群演算法; 人形機器人; 基因演算法; 增強式學習法
Keywords (English): Artificial Bee Colony Algorithm; Humanoid Robot; Genetic Algorithm; Q-learning
Metrics:
  • Times cited: 0
  • Views: 116
  • Rating: (none)
  • Downloads: 0
  • Bookmarked: 1
This thesis explores a toddler's developmental process across distinct phases. It first studies the physical development of the toddler's body, then the process by which motions are learned, and finally how a toddler explores and chains known motions to progress from crawling to standing and walking. A humanoid robot is used to mimic this entire developmental process. For the body-development problem, the Gbest-guided artificial bee colony algorithm is employed to solve the mechanism optimization problem, and the crossover concept from genetic algorithms is incorporated into it to improve design efficiency. The designed and implemented toddler-sized humanoid robot is named Louis. With the hardware structure established, the next step addresses motion learning: the same Gbest-guided artificial bee colony algorithm is adopted to optimize the four motions a toddler uses most, namely crawling, squatting, standing up, and walking. Once the robot possesses these four motions, the toddler's exploration problem is transformed into a behavior planning diagram, which is solved with a Q-learning scheme. The effectiveness of the proposed methodology is first verified in the Webots simulation software and then tested on the physical robot. The experimental results demonstrate that the proposed methods can solve the mechanism optimization problem and successfully enable the robot to autonomously complete behavior planning from crawling to standing in the real world.
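The abstract's optimization backbone is the Gbest-guided artificial bee colony (GABC) search equation of Zhu and Kwong [11], which biases the standard ABC neighbor perturbation toward the global best food source; the thesis additionally layers an adaptive GA-style crossover on top, which is not shown here. A minimal Python sketch of the GABC candidate update alone (function and parameter names are illustrative, not taken from the thesis):

```python
import random

def gbest_guided_update(x, neighbor, gbest, dim_count, c=1.5):
    """One GABC candidate solution: perturb a single random dimension
    of food source x toward a random neighbor and the global best.

    v_ij = x_ij + phi * (x_ij - x_kj) + psi * (gbest_j - x_ij),
    with phi drawn from [-1, 1] and psi from [0, c] (Zhu & Kwong, 2010).
    """
    j = random.randrange(dim_count)      # dimension to perturb
    phi = random.uniform(-1.0, 1.0)      # standard ABC random factor
    psi = random.uniform(0.0, c)         # attraction toward gbest
    v = list(x)
    v[j] = x[j] + phi * (x[j] - neighbor[j]) + psi * (gbest[j] - x[j])
    return v
```

In a full ABC loop, the candidate `v` would replace `x` only if its fitness improves (greedy selection), and sources that fail to improve for a set number of trials are abandoned by scout bees.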
Abstract I
Acknowledgement III
Contents IV
List of Figures VII
List of Tables IX

Chapter 1 Introduction
1.1 Motivation 1
1.2 Related Work 2
1.3 Thesis Organization 4
Chapter 2 Hardware and Control System of Humanoid Robot
2.1 Introduction 6
2.2 The Configuration of Louis 7
2.2.1 Mechanism 7
2.2.2 Control System 10
2.3 Hardware Specifications 11
2.3.1 Materials 11
2.3.2 Actuators 12
2.3.3 Micro Control Unit and Circuit Board 13
2.3.4 Computer 17
2.3.5 Camera 20
2.3.6 Li-ion Battery 20
2.4 Summary 21

Chapter 3 Adaptive Crossover Gbest-Guided Artificial Bee Colony Algorithm
3.1 Introduction 22
3.2 Artificial Bee Colony Algorithm 23
3.3 Gbest-Guided Artificial Bee Colony Algorithm 24
3.4 Adaptive Crossover Gbest-Guided Artificial Bee Colony Algorithm 25
3.5 Mechanism Optimization for Toddler-sized Humanoid Robot 30
3.5.1 Motion “Crawling” 31
3.5.2 Motion “Squatting” 33
3.5.3 Motion “Standing Up” 34
3.5.4 Motion “Walking” 35
3.5.5 Establish Toddler-sized Humanoid Robot 37
3.6 Motion Optimization for Toddler-sized Humanoid Robot 37
3.6.1 Motion “Crawling” 38
3.6.2 Motion “Squatting” 39
3.6.3 Motion “Standing Up” 40
3.6.4 Motion “Walking” 41
3.7 Summary 41
Chapter 4 Q-learning Behavior Planning Algorithm
4.1 Introduction 43
4.2 Q-learning 44
4.3 Behavior Planning Diagram 46
4.4 Variation of Behavior Planning Diagram 49
4.4.1 Behavior Planning Diagram with Wisdom 49
4.4.2 Behavior Planning Diagram with Obstacle 49
4.5 Summary 50
Chapter 5 Simulations and Experimental Results
5.1 Introduction 51
5.2 Introduction of Simulation Environment 52
5.3 Result of Mechanism Optimization 53
5.4 Result of Motion Optimization 61
5.5 Result of Behavior Planning Diagram 66
5.6 Toddler-sized Robot Louis 70
5.7 Behavior Planning in Real World 70
5.8 Summary 75
Chapter 6 Conclusions and Future Work
6.1 Conclusions 77
6.2 Future Work 78
References 80
[1] “DARwIn-OP,” [Online]. Available: http://support.robotis.com/en/product/darwin-op.htm.
[2] “HRP-4,” [Online]. Available: http://global.kawada.jp/mechatronics/hrp4.html.
[3] L. F. Wu, Y. T. Ye, Y. F. Ho, P. H. Kuo, and T.-H. S. Li, “Design and implementation of teen-sized humanoid robot David Junior,” in Proc. of 2016 Int. Conf. Adv. Robot. Intell. Syst. (ARIS 2016), 2017.
[4] D. E. Goldberg and J. H. Holland, “Genetic algorithms and machine learning,” Mach. Learn., vol. 3, no. 2–3, pp. 95–99, 1988.
[5] B. Chopard and M. Tomassini, “Particle swarm optimization,” Nat. Comput. Ser., pp. 97–102, 2018.
[6] R. Eberhart and Y. Shi, “Particle swarm optimization: Developments, applications and resources,” in Proc. of the 2001 Congress on Evolutionary Computation, pp. 81–86, 2001.
[7] R. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in Proc. Sixth Int. Symp. Micro Mach. Hum. Sci. (MHS’95), pp. 39–43, 1995.
[8] M. Dorigo, V. Maniezzo, and A. Colorni, “Ant system: Optimization by a colony of cooperating agents,” IEEE Trans. Syst., Man, Cybern. B: Cybern., vol. 26, no. 1, pp. 29–41, 1996.
[9] D. Karaboga and B. Basturk, “A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm,” J. Glob. Optim., vol. 39, no. 3, pp. 459–471, 2007.
[10] D. Karaboga and B. Basturk, “Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems,” Found. Fuzzy Log. Soft Comput., pp. 789–798, 2007.
[11] G. Zhu and S. Kwong, “Gbest-guided artificial bee colony algorithm for numerical function optimization,” Appl. Math. Comput., vol. 217, no. 7, pp. 3166–3173, 2010.
[12] K. Liu, C. Wang, and S. Liu, “Artificial bee colony algorithm combined with previous successful search experience,” IEEE Access, vol. 7, pp. 34318–34332, 2019.
[13] Y. Wang, J. You, J. Hang, C. Li, and L. Cheng, “An improved artificial bee colony (ABC) algorithm with advanced search ability,” in Proc. 2018 IEEE 8th Int. Conf. Electron. Inf. Emerg. Commun. (ICEIEC 2018), pp. 91–94, 2018.
[14] H. Gao, Y. Shi, C. M. Pun, and S. Kwong, “An improved artificial bee colony algorithm with its application,” IEEE Trans. Ind. Informat., vol. 15, no. 4, pp. 1853–1865, 2019.
[15] F. Dahan, H. Mathkour, and M. Arafah, “Two-step artificial bee colony algorithm enhancement for QoS-aware web service selection problem,” IEEE Access, vol. 7, pp. 21787–21794, 2019.
[16] C. Fan, Q. Fu, G. Long, and Q. Xing, “Hybrid artificial bee colony algorithm with variable neighborhood search and memory mechanism,” J. Syst. Eng. Electron., vol. 29, no. 2, pp. 405–414, 2018.
[17] L. Dos Santos Coelho and P. Alotto, “Gaussian artificial bee colony algorithm approach applied to Loney’s solenoid benchmark problem,” IEEE Trans. Magn., vol. 47, no. 5, pp. 1326–1329, 2011.
[18] X. Zhang, X. Zhang, and L. Wang, “Antenna design by an adaptive variable differential artificial bee colony algorithm,” IEEE Trans. Magn., vol. 54, no. 3, 2018.
[19] C. J. C. H. Watkins and P. Dayan, “Q-learning,” Mach. Learn., vol. 8, no. 3–4, pp. 279–292, 1992.
[20] A. Konar, I. G. Chakraborty, S. J. Singh, L. C. Jain, and A. K. Nagar, “A deterministic improved Q-learning for path planning of a mobile robot,” IEEE Trans. Syst., Man, Cybern. A: Syst. Humans, vol. 43, no. 5, pp. 1141–1153, 2013.
[21] X. Gao, Y. Fang, and Y. Wu, “Fuzzy Q learning algorithm for dual-aircraft path planning to cooperatively detect targets by passive radars,” J. Syst. Eng. Electron., vol. 24, no. 5, pp. 800–810, 2013.
[22] P. Rakshit et al., “Realization of an adaptive memetic algorithm using differential evolution and Q-learning: A case study in multirobot path planning,” IEEE Trans. Syst., Man, Cybern. A: Syst. Humans, vol. 43, no. 4, pp. 814–831, 2013.
[23] S. Yoon and K. J. Kim, “Deep Q networks for visual fighting game AI,” in Proc. 2017 IEEE Conf. Computational Intelligence and Games (CIG 2017), pp. 306–308, 2017.
[24] C. Holmgard, A. Liapis, J. Togelius, and G. N. Yannakakis, “Evolving personas for player decision modeling,” in Proc. of IEEE Conf. Computational Intelligence and Games (CIG), 2014.
[25] P. G. Patel, N. Carver, and S. Rahimi, “Tuning computer gaming agents using Q-learning,” in Proc. of 2011 Federated Conf. Computer Science and Information Systems (FedCSIS), pp. 581–588, 2011.
[26] T.-H. S. Li, P. H. Kuo, Y. F. Ho, M. C. Kao, and L. H. Tai, “A biped gait learning algorithm for humanoid robots based on environmental impact assessed artificial bee colony,” IEEE Access, vol. 3, pp. 13–26, 2015.
[27] T. Kishi et al., “Development of a humorous humanoid robot capable of quick-and-wide arm motion,” IEEE Robot. Autom. Lett., vol. 1, no. 2, pp. 1081–1088, 2016.
[28] S. H. Hyon, D. Suewaka, Y. Torii, and N. Oku, “Design and experimental evaluation of a fast torque-controlled hydraulic humanoid robot,” IEEE/ASME Trans. Mechatronics, vol. 22, no. 2, pp. 623–634, 2017.
[29] Y. Asano, K. Okada, and M. Inaba, “Design principles of a human mimetic humanoid: Humanoid platform to study human intelligence and internal body system,” Sci. Robot., vol. 2, no. 13, p. eaaq0899, Dec. 2017.
[30] W. L. Xu, J. S. Pap, and J. Bronlund, “Design of a biologically inspired parallel robot for foods chewing,” IEEE Trans. Ind. Electron., vol. 55, no. 2, pp. 832–841, 2008.
[31] M. Sreenivasa, P. Souères, and J. P. Laumond, “Walking to grasp: Modeling of human movements as invariants and an application to humanoid robotics,” IEEE Trans. Syst., Man, Cybern. A: Syst. Humans, vol. 42, no. 4, pp. 880–893, 2012.
[32] C. Della Santina et al., “Learning from humans how to grasp: A data-driven architecture for autonomous grasping with anthropomorphic soft hands,” IEEE Robot. Autom. Lett., vol. 4, no. 2, pp. 1533–1540, 2019.
[33] J.-J. Aucouturier, “Cheek to chip: Dancing robots and AI’s future,” IEEE Intell. Syst., vol. 23, no. 2, pp. 74–84, 2008.
[34] J. Or, “Computer simulations of a humanoid robot capable of walking like fashion models,” IEEE Trans. Syst., Man, Cybern. C: Appl. Rev., vol. 42, no. 2, pp. 241–248, 2012.
[35] “FIRA RoboWorld Cup official website,” [Online]. Available: http://www.firaworldcup.org/VisitorPages/default.aspx?itemid=3.
[36] “RoboCup Federation official website,” [Online]. Available: https://www.robocup.org/.
[37] “RoboCup Humanoid League Rules 2018,” [Online]. Available: http://www.robocuphumanoid.org/wp-content/uploads/RCHL-2018-Rules-Proposal_changesMarked_final.pdf.
[38] “Axiomtek,” [Online]. Available: http://www.axiomtek.com.tw/.
[39] “STM32-F103ZET6,” [Online]. Available: https://www.st.com/en/microcontrollers-microprocessors/stm32f103.html.
[40] “ROBOTIS,” [Online]. Available: http://www.robotis.us/.
[41] “Arduino Mega,” [Online]. Available: https://www.arduino.cc/en/Main/Products.
[42] “D435i,” [Online]. Available: https://www.intelrealsense.com/depth-camera-d435i/.
[43] “YUANTAI EA CO.,” [Online]. Available: http://www.eayuta.com/en/about.php.