1. Alderfer, Clayton P. (1969), “An Empirical Test of a New Theory of Human Needs,” Organizational Behavior and Human Performance, 4(2), pp. 142-175.
2. Adlam, Timothy D. and Orpwood, Roger D. (2004), “Taking the Gloucester Smart House from the Laboratory to the Living Room,” The 2nd International Workshop on Ubiquitous Computing for Pervasive Healthcare Applications (UbiHealth 2004).
3. Barto, A. G., Sutton, R. S., and Anderson, C. W. (1983), “Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems,” IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, pp. 834-846.
4. Bratman, Michael E. (1987), “Intention, Plans, and Practical Reason,” Harvard University Press, Cambridge, MA.
5. Bromiley, Philip (1985), “Planning Systems in Large Organizations: Garbage Can Approach with Applications to Defense PPBS,” in Ambiguity and Command: Organizational Perspectives on Military Decision Making, pp. 120-139.
6. Busetta, P., Ronnquist, R., Hodgson, A., and Lucas, A. (1999), “JACK Intelligent Agents - Components for Intelligent Agents in Java,” AgentLink News Letter, vol. 2, Jan 1999, www.agent-software.com.au
7. Chang, Wei-Lun and Yuan, Soe-Tsyr (2005), “Ambient iCare e-Services for Quality Aging: Framework and Roadmap,” 7th International IEEE Conference on E-Commerce Technology 2005, July 19-22, Munich, Germany.
8. Clark, David L. (1980), “New Perspectives on Planning in Educational Organizations,” Far West Laboratory for Educational Research and Development.
9. Cohen, M. D., March, J. G., and Olsen, J. P. (1972), “A Garbage Can Model of Organizational Choice,” Administrative Science Quarterly, 17, pp. 1-25.
10. Crites, R. H. and Barto, A. G. (1996), “Improving Elevator Performance Using Reinforcement Learning,” in Touretzky, D. S., Mozer, M. C., and Hasselmo, M. E., eds., Advances in Neural Information Processing Systems, vol. 8, pp. 1017-1023, The MIT Press.
11. Glorennec, P. Y. (2000), “Reinforcement Learning: An Overview,” ESIT 2000, Aachen, Germany.
12. Isbell, Charles Lee and Shelton, Christian R. (2001), “A Social Reinforcement Learning Agent,” Proceedings of the Fifth International Conference on Autonomous Agents.
13. Kaelbling, L. P., Littman, M. L., and Moore, A. W. (1996), “Reinforcement Learning: A Survey,” Journal of Artificial Intelligence Research, 4, pp. 237-285.
14. Kingdon, John W. (1984), “Agendas, Alternatives, and Public Policies,” New York: Harper Collins.
15. Kingdon, John W. (1995), “Agendas, Alternatives, and Public Policies, 2nd ed.,” New York: Harper Collins.
16. Kinny, D., Georgeff, Michael P., and Rao, A. (1996), “A Methodology and Modelling Technique for Systems of BDI Agents,” Proceedings of the Seventh European Workshop on Modelling Autonomous Agents in a Multi-Agent World.
17. Levitt, Barbara and Nass, Clifford (1989), “The Lid on the Garbage Can: Institutional Constraints on Decision Making in the Technical Core of College-Text Publishers,” Administrative Science Quarterly, Jun 1989, pp. 190-207.
18. Lin, Dongqing, Wiggen, Thomas P., and Jo, Chang-Hyun (2003), “A Restaurant Finder Using Belief-Desire-Intention Agent Model and Java Technology,” Computers and Their Applications 2003, pp. 404-407.
19. Lipson, Michael (2004), “A Garbage Can Model of UN Peacekeeping,” paper prepared for presentation at the annual meeting of the Canadian Political Science Association, Winnipeg, Manitoba, June 3-5, 2004.
20. Mahadevan, S. (1996), “Average Reward Reinforcement Learning: Foundations, Algorithms, and Empirical Results,” Machine Learning, 22, pp. 159-195.
21. Maslow, A. H. (1968), “Toward a Psychology of Being, 2nd ed.,” New York: Van Nostrand Reinhold.
22. Romelaer, Pierre and Huault, Isabelle (2002), “International Career Management: The Relevance of the Garbage-Can Model,” University Paris IX Dauphine, Laboratory CREPA, working paper no. 80, June 2002.
23. Rao, Anand S. and Georgeff, Michael P. (1995), “BDI Agents: From Theory to Practice,” Proceedings of the First International Conference on Multi-Agent Systems (ICMAS-95), USA.
24. Schwartz, A. (1993), “A Reinforcement Learning Method for Maximizing Undiscounted Rewards,” Proceedings of the Tenth International Conference on Machine Learning.
25. Seo, J. W. and Park, K. S. (2004), “The Development of a Ubiquitous Health House in South Korea,” The 6th International Conference on Ubiquitous Computing, Nottingham, UK.
26. Simon, H. A. (1981), “The Sciences of the Artificial,” MIT Press.
27. Singh, Satinder P. (1994), “Reinforcement Learning Algorithms for Average-Payoff Markovian Decision Processes,” Proceedings of the Twelfth National Conference on Artificial Intelligence, pp. 202-207.
28. Sproull, Lee S. (1978), “Organizing an Anarchy: Belief, Bureaucracy, and Politics in the National Institute of Education,” University of Illinois Press.
29. Sutton, Richard S. (1996), “Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding,” Advances in Neural Information Processing Systems 8, pp. 1038-1044, MIT Press.
30. Sutton, Richard S. and Barto, Andrew G. (1998), “Reinforcement Learning: An Introduction,” MIT Press, Cambridge, MA.
31. Tadepalli, P. and Ok, D. (1994), “H-learning: A Reinforcement Learning Method for Optimizing Undiscounted Average Reward,” Technical Report 94-30-1, Dept. of Computer Science, Oregon State University.
32. Tadepalli, P. and Ok, D. (1996), “Auto-Exploratory Average Reward Reinforcement Learning,” Proceedings of AAAI-96.
33. Takahashi, K. (1993), “Decision Theory in Organizations,” Tokyo: Asakura Shoten (in Japanese).
34. Takahashi, K. (1997), “A Single Garbage Can Model and the Degree of Anarchy in Japanese Firms,” Human Relations, vol. 50, Jan 1997, pp. 91-108.
35. Watkins, C. J. C. H. and Dayan, P. (1992), “Q-learning,” Machine Learning, 8, pp. 279-292.