[1] BARBIERI, F., ANKE, L.E., BALLESTEROS, M., SOLER, J., and SAGGION, H., 2017. Towards the understanding of gaming audiences by modeling Twitch emotes. In Proceedings of the 3rd Workshop on Noisy User-generated Text, 11-20.
[2] BODLA, N., SINGH, B., CHELLAPPA, R., and DAVIS, L.S., 2017. Soft-NMS -- improving object detection with one line of code. In Proceedings of the IEEE International Conference on Computer Vision, 5561-5569.
[3] CABA HEILBRON, F., CARLOS NIEBLES, J., and GHANEM, B., 2016. Fast temporal activity proposals for efficient detection of human actions in untrimmed videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1914-1923.
[4] CABA HEILBRON, F., ESCORCIA, V., GHANEM, B., and CARLOS NIEBLES, J., 2015. ActivityNet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 961-970.
[5] CHAO, Y.-W., VIJAYANARASIMHAN, S., SEYBOLD, B., ROSS, D.A., DENG, J., and SUKTHANKAR, R., 2018. Rethinking the Faster R-CNN architecture for temporal action localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1130-1139.
[6] DEVLIN, J., CHANG, M.-W., LEE, K., and TOUTANOVA, K., 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
[7] DIBA, A., SHARMA, V., and VAN GOOL, L., 2017. Deep temporal linear encoding networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2329-2338.
[8] ESCORCIA, V., HEILBRON, F.C., NIEBLES, J.C., and GHANEM, B., 2016. DAPs: Deep action proposals for action understanding. In European Conference on Computer Vision, Springer, 768-784.
[9] ESULI, A. and SEBASTIANI, F., 2006. SentiWordNet: A publicly available lexical resource for opinion mining. In LREC, 417-422.
[10] FU, C.-Y., LEE, J., BANSAL, M., and BERG, A.C., 2017. Video highlight prediction using audience chat reactions. arXiv preprint arXiv:1707.08559.
[11] GAO, J., CHEN, K., and NEVATIA, R., 2018. CTAP: Complementary temporal action proposal generation. In Proceedings of the European Conference on Computer Vision (ECCV), 68-83.
[12] GAO, J., YANG, Z., CHEN, K., SUN, C., and NEVATIA, R., 2017. TURN TAP: Temporal unit regression network for temporal action proposals. In Proceedings of the IEEE International Conference on Computer Vision, 3628-3636.
[13] HU, M. and LIU, B., 2004. Mining opinion features in customer reviews. In AAAI, 755-760.
[14] JIAO, Y., LI, Z., HUANG, S., YANG, X., LIU, B., and ZHANG, T., 2018. Three-dimensional attention-based deep ranking model for video highlight detection. IEEE Transactions on Multimedia 20, 10, 2693-2705.
[15] JUHLIN, O., ENGSTRÖM, A., and REPONEN, E., 2010. Mobile broadcasting: the whats and hows of live video as a social medium. In Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services, ACM, 35-44.
[16] KHAN, A., SOHAIL, A., ZAHOORA, U., and QURESHI, A.S., 2019. A survey of the recent architectures of deep convolutional neural networks. arXiv preprint arXiv:1901.06032.
[17] LIN, T., ZHAO, X., SU, H., WANG, C., and YANG, M., 2018. BSN: Boundary sensitive network for temporal action proposal generation. In Proceedings of the European Conference on Computer Vision (ECCV), 3-19.
[18] LIPTON, Z.C., BERKOWITZ, J., and ELKAN, C., 2015. A critical review of recurrent neural networks for sequence learning. arXiv preprint arXiv:1506.00019.
[19] MAINIERI, B.O., BRAGA, P.H.C., DA SILVA, L.A., and OMAR, N. Text mining of audience opinion in eSports events.
[20] METTES, P., VAN GEMERT, J.C., CAPPALLO, S., MENSINK, T., and SNOEK, C.G., 2015. Bag-of-fragments: Selecting and encoding video fragments for event detection and recounting. In Proceedings of the 5th ACM International Conference on Multimedia Retrieval, ACM, 427-434.
[21] MIKOLOV, T., SUTSKEVER, I., CHEN, K., CORRADO, G.S., and DEAN, J., 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, 3111-3119.
[22] OTSUKA, I., NAKANE, K., DIVAKARAN, A., HATANAKA, K., and OGAWA, M., 2005. A highlight scene detection and video summarization system using audio feature for a personal video recorder. IEEE Transactions on Consumer Electronics 51, 1, 112-116.
[23] PAK, A. and PAROUBEK, P., 2010. Twitter as a corpus for sentiment analysis and opinion mining. In LREC, 1320-1326.
[24] PANG, B. and LEE, L., 2008. Opinion mining and sentiment analysis. Foundations and Trends® in Information Retrieval 2, 1-2, 1-135.
[25] PENG, X., ZOU, C., QIAO, Y., and PENG, Q., 2014. Action recognition with stacked Fisher vectors. In European Conference on Computer Vision, Springer, 581-595.
[26] PRENSKY, M., 2001. Digital natives, digital immigrants part 1. On the Horizon 9, 5, 1-6.
[27] REN, S., HE, K., GIRSHICK, R., and SUN, J., 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, 91-99.
[28] ROCHAN, M., YE, L., and WANG, Y., 2018. Video summarization using fully convolutional sequence networks. In Proceedings of the European Conference on Computer Vision (ECCV), 347-363.
[29] RUI, Y., GUPTA, A., and ACERO, A., 2000. Automatically extracting highlights for TV baseball programs. In Proceedings of the Eighth ACM International Conference on Multimedia, ACM, 105-115.
[30] SHOU, Z., WANG, D., and CHANG, S.-F., 2016. Temporal action localization in untrimmed videos via multi-stage CNNs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1049-1058.
[31] SIMONYAN, K. and ZISSERMAN, A., 2014. Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems, 568-576.
[32] SIMONYAN, K. and ZISSERMAN, A., 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
[33] TRAN, D., BOURDEV, L., FERGUS, R., TORRESANI, L., and PALURI, M., 2015. Learning spatiotemporal features with 3D convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, 4489-4497.
[34] VASWANI, A., SHAZEER, N., PARMAR, N., USZKOREIT, J., JONES, L., GOMEZ, A.N., KAISER, Ł., and POLOSUKHIN, I., 2017. Attention is all you need. In Advances in Neural Information Processing Systems, 5998-6008.
[35] WANG, H. and SCHMID, C., 2013. Action recognition with improved trajectories. In Proceedings of the IEEE International Conference on Computer Vision, 3551-3558.
[36] WANG, L., XIONG, Y., WANG, Z., QIAO, Y., LIN, D., TANG, X., and VAN GOOL, L., 2016. Temporal segment networks: Towards good practices for deep action recognition. In European Conference on Computer Vision, Springer, 20-36.
[37] XIONG, Y., ZHAO, Y., WANG, L., LIN, D., and TANG, X., 2017. A pursuit of temporal accuracy in general activity detection. arXiv preprint arXiv:1703.02716.
[38] XU, C., WANG, J., WAN, K., LI, Y., and DUAN, L., 2006. Live sports event detection based on broadcast video and web-casting text. In Proceedings of the 14th ACM International Conference on Multimedia, ACM, 221-230.
[39] XU, H., DAS, A., and SAENKO, K., 2017. R-C3D: Region convolutional 3D network for temporal activity detection. In Proceedings of the IEEE International Conference on Computer Vision, 5783-5792.
[40] YAO, T., MEI, T., and RUI, Y., 2016. Highlight detection with pairwise deep ranking for first-person video summarization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 982-990.
[41] YU, Y., LEE, S., NA, J., KANG, J., and KIM, G., 2018. A deep ranking model for spatio-temporal highlight detection from a 360° video. In Thirty-Second AAAI Conference on Artificial Intelligence.
[42] ZHANG, L., WANG, S., and LIU, B., 2018. Deep learning for sentiment analysis: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 8, 4, e1253.
[43] ZHAO, J., LIU, K., and XU, L., 2016. Sentiment analysis: Mining opinions, sentiments, and emotions. MIT Press.
[44] ZHU, W., HU, J., SUN, G., CAO, X., and QIAO, Y., 2016. A key volume mining deep framework for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1991-1999.