[1] Nazanin Mehrasa et al. Deep Learning of Player Trajectory Representations for Team Activity Analysis. In MIT Sloan Sports Analytics Conference, 2018.
[2] Andrew C. Miller and Luke Bornn. Possession Sketches: Mapping NBA Strategies. In MIT Sloan Sports Analytics Conference, 2017.
[3] Daniel Cervone et al. NBA Court Realty. In MIT Sloan Sports Analytics Conference, 2016.
[4] Daniel Cervone et al. A Multiresolution Stochastic Process Model for Predicting Basketball Possession Outcomes. Journal of the American Statistical Association, Vol. 111, No. 514, 2016.
[5] C.-Y. Chen et al. Generating Defensive Plays in Basketball Games. In ACM International Conference on Multimedia, 2018.
[6] John Hollinger. Pro Basketball Prospectus. University of Nebraska Press, 2003.
[7] Dean Oliver. Basketball on Paper: Rules and Tools for Performance Analysis. University of Nebraska Press, 2004.
[8] Alexander Franks, Andrew Miller, Luke Bornn, and Kirk Goldsberry. Counterpoints: Advanced Defensive Metrics for NBA Basketball. In MIT Sloan Sports Analytics Conference, 2015.
[9] Peter Beshai. Buckets: Basketball Shot Visualization. University of British Columbia, Dec. 2014.
[10] Kuan-Chieh Wang and Richard Zemel. Classifying NBA Offensive Plays Using Neural Networks. In MIT Sloan Sports Analytics Conference, 2016.
[11] Ching-Hang Chen, Tyng-Luh Liu, Yu-Shuen Wang, Hung-Kuo Chu, Nick C. Tang, and Hong-Yuan Mark Liao. Spatio-Temporal Learning of Basketball Offensive Strategies. In ACM International Conference on Multimedia, 2015, pp. 1123–1126.
[12] Andrew C. Miller and Luke Bornn. Possession Sketches: Mapping NBA Strategies. In MIT Sloan Sports Analytics Conference, 2017.
[13] Mark Harmon, Patrick Lucey, and Diego Klabjan. Predicting Shot Making in Basketball Learnt from Adversarial Multiagent Trajectories. arXiv preprint arXiv:1609.04849, 2016.
[14] Rajiv Shah and Rob Romijnders. Applying Deep Learning to Basketball Trajectories. arXiv preprint arXiv:1608.03793, 2016.
[15] Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, and David Silver. Emergence of Locomotion Behaviours in Rich Environments. arXiv preprint arXiv:1707.02286, 2017.
[16] Tejas D. Kulkarni, Karthik R. Narasimhan, Ardavan Saeedi, and Joshua B. Tenenbaum. Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation. In Advances in Neural Information Processing Systems 29 (NIPS), 2016.
[17] M. A. McDowell, C. D. Fryar, and C. L. Ogden. Anthropometric Reference Data for Children and Adults: United States, 1988–1994. Vital and Health Statistics, Series 11, No. 249, pp. 1–68, Apr. 2009.
[18] S. Zheng, Y. Yue, and J. Hobbs. Generating Long-term Trajectories Using Deep Hierarchical Networks. In Advances in Neural Information Processing Systems, 2016, pp. 1543–1551.
[19] Thomas Seidl, Aditya Cherukumudi, Andrew Hartnett, Peter Carr, and Patrick Lucey. Bhostgusters: Realtime Interactive Play Sketching with Synthesized NBA Defenses. 2018.
[20] Hsin-Ying Hsieh, Chieh-Yu Chen, Yu-Shuen Wang, and Jung-Hong Chuang. BasketballGAN: Generating Basketball Play Simulation Through Sketching. arXiv preprint arXiv:1909.07088, 2019.
[21] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with Deep Reinforcement Learning. In NIPS Deep Learning Workshop, 2013.
[22] V. R. Konda and J. N. Tsitsiklis. Actor-Critic Algorithms. In Advances in Neural Information Processing Systems, 2000, pp. 1008–1014.
[23] Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous Methods for Deep Reinforcement Learning. In International Conference on Machine Learning, 2016, pp. 1928–1937.
[24] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal Policy Optimization Algorithms. arXiv preprint arXiv:1707.06347, 2017.
[25] Rachit Dubey, Pulkit Agrawal, Deepak Pathak, Thomas L. Griffiths, and Alexei A. Efros. Investigating Human Priors for Playing Video Games. arXiv preprint arXiv:1802.10217, 2018.
[26] X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang. Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning. In NIPS, 2014, pp. 3338–3346.
[27] J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust Region Policy Optimization. In ICML, 2015, pp. 1889–1897.
[28] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images. In NIPS, 2015, pp. 2728–2736.
[29] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-End Training of Deep Visuomotor Policies. arXiv preprint arXiv:1504.00702, 2015.
[30] N. Heess, G. Wayne, D. Silver, T. Lillicrap, T. Erez, and Y. Tassa. Learning Continuous Control Policies by Stochastic Value Gradients. In NIPS, 2015, pp. 2926–2934.
[31] S. Zheng, Y. Yue, and J. Hobbs. Generating Long-term Trajectories Using Deep Hierarchical Networks. In Advances in Neural Information Processing Systems, 2016, pp. 1543–1551.
[32] Danijar Hafner, James Davidson, and Vincent Vanhoucke. TensorFlow Agents: Efficient Batched Reinforcement Learning in TensorFlow. arXiv preprint arXiv:1709.02878, 2017.
[33] Xavier Glorot and Yoshua Bengio. Understanding the Difficulty of Training Deep Feedforward Neural Networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2010, pp. 249–256.
[34] R. J. Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8(3–4):229–256, 1992.
[35] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel. Trust Region Policy Optimization. CoRR, abs/1502.05477, 2015.