[1] A. V. Ivanov and A. A. Petrovsky, “First-order Markov Property of the Auditory Spiking Neuron Model Response,” in Proc. Signal Processing Conference, Florence, Italy, 4-8 Sept. 2006.
[2] K. Ito, Y. Imoto, H. Taguchi, and A. Gofuku, “A Study of Reinforcement Learning with Knowledge Sharing,” in Proc. IEEE Int. Conf. on Robotics and Biomimetics (ROBIO), pp. 175-179, Hong Kong, China, 22-26 Aug. 2004.
[3] Z. Jin, W. Y. Liu, and J. Jin, “State-Clusters Shared Cooperative Multi-Agent Reinforcement Learning,” in Proc. Asian Control Conference (ASCC), pp. 129-135, 27-29 Aug. 2009.
[4] M. N. Ahmadabadi and M. Asadpour, “Expertness Based Cooperative Q-Learning,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 32, no. 1, pp. 66-76, Feb. 2002.
[5] B. N. Araabi, S. Mastoureshgh, and M. N. Ahmadabadi, “A Study on Expertise of Agents and Its Effects on Cooperative Q-Learning,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 37, no. 2, pp. 1083-1094, Apr. 2007.
[6] A. Anuntapat, A. Thammano, and O. Wongwirat, “Searching Optimization Route by Using Pareto Solution with Ant Algorithm for Mobile Robot in Rough Terrain Environment,” in Proc. Int. Conf. on Control, Automation, Robotics and Vision (ICARCV), Phuket, Thailand, 13-15 Nov. 2016.
[7] J. Li, J. Cheng, Y. Zhao, F. Yang, Y. Huang, H. Chen, and R. Zhao, “A Comparison of General-Purpose Distributed Systems for Data Processing,” in Proc. IEEE Int. Conf. on Big Data, pp. 378-383, Washington, D.C., USA, 5-8 Dec. 2016.
[8] K. Ito, A. Gofuku, Y. Imoto, and M. Takeshita, “A study of reinforcement learning with knowledge sharing for distributed autonomous system,” in Proc. IEEE Int. Symp. on Computational Intelligence in Robotics and Automation (CIRA), pp. 1120-1125, Kobe, Japan, 16-20 July 2003.
[9] J. Pinto, P. Jain, and T. Kumar, “Hadoop distributed computing clusters for fault prediction,” in Proc. Int. Computer Science and Engineering Conference (ICSEC), Chiang Mai, Thailand, 14-17 Dec. 2016.
[10] T. Tateyama, S. Kawata, and Y. Shimomura, “Parallel Reinforcement Learning Systems using Exploration Agents and Dyna-Q Algorithm,” in Proc. SICE Annu. Conf., pp. 2774-2778, Takamatsu, Japan, 17-20 Sept. 2007.
[11] M. Hussin, Y. C. Lee, and A. Y. Zomaya, “Efficient Energy Management using Adaptive Reinforcement Learning-based Scheduling in Large-Scale Distributed Systems,” in Proc. Int. Conf. on Parallel Processing (ICPP), pp. 385-393, Taipei, Taiwan, 13-16 Sept. 2011.
[12] H. Karaoğuz and H. Bozma, “Merging Appearance-Based Spatial Knowledge in Multirobot Systems,” in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pp. 5107-5112, Daejeon, Korea, 9-14 Oct. 2016.
[13] K. S. Hwang, W. C. Jiang, and Y. J. Chen, “Model Learning and Knowledge Sharing for a Multiagent System with Dyna-Q Learning,” IEEE Trans. Cybern., vol. 45, no. 5, pp. 964-976, May 2015.
[14] K. S. Hwang, W. C. Jiang, Y. J. Chen, and W. H. Wang, “Reinforcement Learning with Model Sharing for Multi-Agent Systems,” in Proc. Int. Conf. on System Science and Engineering (ICSSE), pp. 293-296, Budapest, Hungary, 4-6 July 2013.
[15] A. Lazarowska, “Parameters Influence on the Performance of an Ant Algorithm for Safe Ship Trajectory Planning,” in Proc. IEEE Int. Conf. on Cybernetics (CYBCONF), Gdynia, Poland, 24-26 June 2015.
[16] X. Huang, H. Zhou, and W. Wu, “Hadoop Job Scheduling Based on Mixed Ant-Genetic Algorithm,” in Proc. Int. Conf. on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), Xi'an, China, 17-19 Sept. 2015.