[1] Von Ahn, L. (2006). Games with a purpose. Computer, 39(6): 92-94.
[2] Von Ahn, L., Maurer, B., McMillen, C., Abraham, D., & Blum, M. (2008). reCAPTCHA: Human-based character recognition via web security measures. Science, 321(5895): 1465-1468.
[3] Von Ahn, L., & Dabbish, L. (2004, April). Labeling images with a computer game. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 319-326). ACM.
[4] Von Ahn, L., Liu, R., & Blum, M. (2006, April). Peekaboom: A game for locating objects in images. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 55-64). ACM.
[5] Hu, X., & Downie, J. S. (2007, September). Exploring mood metadata: Relationships with genre, artist and usage metadata. In ISMIR (pp. 67-72).
[6] Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6): 1161-1178.
[7] Ho, C. J., Chang, T. H., Lee, J. C., Hsu, J. Y. J., & Chen, K. T. (2009, June). KissKissBan: A competitive human computation game for image annotation. In Proceedings of the ACM SIGKDD Workshop on Human Computation (pp. 11-14). ACM.
[8] Mandel, M. I., & Ellis, D. P. (2008). A web-based game for collecting music metadata. Journal of New Music Research, 37(2): 151-165.
[9] Turnbull, D., Liu, R., Barrington, L., & Lanckriet, G. R. (2007, September). A game-based approach for collecting semantic annotations of music. In ISMIR (Vol. 7, pp. 535-538).
[10] Law, E. L., Von Ahn, L., Dannenberg, R. B., & Crawford, M. (2007, September). TagATune: A game for music and sound annotation. In ISMIR (Vol. 3, p. 2).
[11] Kim, Y. E., Schmidt, E. M., & Emelle, L. (2008, September). MoodSwings: A collaborative game for music mood label collection. In ISMIR (Vol. 8, pp. 231-236).
[12] Scharl, A., & Weichselbraun, A. (2008). An automated approach to investigating the online media coverage of US presidential elections. Journal of Information Technology & Politics, 5(1): 121-132.
[13] Aras, H., Krause, M., Haller, A., & Malaka, R. (2010, July). Webpardy: Harvesting QA by HC. In Proceedings of the ACM SIGKDD Workshop on Human Computation (pp. 49-52). ACM.
[14] Yang, Y. H., Lin, Y. C., Cheng, H. T., Liao, I. B., Ho, Y. C., & Chen, H. H. (2008). Toward multi-modal music emotion classification. In Advances in Multimedia Information Processing - PCM 2008 (pp. 70-79). Springer Berlin Heidelberg.
[15] Mehrabian, A., & Russell, J. A. (1974). An Approach to Environmental Psychology. The MIT Press.
[16] Bradley, M. M., & Lang, P. J. (1999). Affective norms for English words (ANEW): Instruction manual and affective ratings (pp. 1-45). Technical Report C-1, The Center for Research in Psychophysiology, University of Florida.
[17] Hu, Y., Chen, X., & Yang, D. (2009, August). Lyric-based song emotion detection with affective lexicon and fuzzy clustering method. In ISMIR (pp. 123-128).
[18] Hu, X., Downie, J. S., & Ehmann, A. F. (2009). Lyric text mining in music mood classification. In ISMIR (pp. 411-416).
[19] Van Zaanen, M., & Kanters, P. (2010, August). Automatic mood classification using TF*IDF based on lyrics. In ISMIR (pp. 75-80).
[20] Lu, Q., Chen, X., Yang, D., & Wang, J. (2010). Boosting for multi-modal music emotion classification. In ISMIR (pp. 105-110).
[21] Ku, L. W., Liang, Y. T., & Chen, H. H. (2006, March). Opinion extraction, summarization and tracking in news and blog corpora. In AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs.