1. 曲建仲, How Do Machines Learn and Improve? The Core Technologies of Artificial Intelligence and Its Future (機器是如何學習與進步?人工智慧的核心技術與未來). 科學月刊 (Science Monthly), 2019.
2. Wagner, S., Reinforcement Learning and Supervised Learning: A Brief Comparison. 2018.
3. Horn, B., Robot Vision. 1986.
4. Kumar, R., et al., Object detection and recognition for a pick and place robot, in Asia-Pacific World Congress on Computer Science and Engineering. 2014.
5. Wikipedia, Template matching.
6. Araújo, S. and H. Kim, Ciratefi: An RST-invariant template matching with extension to color images. Vol. 18, 2011. p. 75-90.
7. Silver, D., et al., Mastering the game of Go with deep neural networks and tree search. Nature, 2016. 529: p. 484.
8. Kober, J. and J. Peters, Reinforcement Learning in Robotics: A Survey, in Learning Motor Skills: From Algorithms to Robot Experiments, J. Kober and J. Peters, Editors. 2014, Springer International Publishing: Cham. p. 9-67.
9. Pane, Y.P., et al., Reinforcement learning based compensation methods for robot manipulators. Engineering Applications of Artificial Intelligence, 2019. 78: p. 236-247.
10. Nagendra, S., et al., Comparison of reinforcement learning algorithms applied to the cart-pole problem, in 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI). 2017.
11. Kober, J. and J. Peters, Policy search for motor primitives in robotics. Machine Learning, 2011. 84(1): p. 171-203.
12. Craig, J.J., Introduction to Robotics: Mechanics and Control. 1989: Addison-Wesley Longman Publishing Co., Inc. 450.
13. Yoshida, S., T. Kanno, and K. Kawashima, Surgical Robot With Variable Remote Center of Motion Mechanism Using Flexible Structure. Journal of Mechanisms and Robotics, 2018. 10(3): p. 031011.
14. Aghakhani, N., et al., Task control with remote center of motion constraint for minimally invasive robotic surgery, in 2013 IEEE International Conference on Robotics and Automation. 2013.
15. Corke, P., Robotics, Vision and Control: Fundamental Algorithms in MATLAB. 2013: Springer Publishing Company, Incorporated. 594.