[1] F.-Y. Cherng, W.-C. Lin, J.-T. King, and Y.-C. Lee, "An EEG-based approach for evaluating graphic icons from the perspective of semantic distance," in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 2016, pp. 4378–4389.
[2] F.-Y. Cherng, Y.-C. Lee, J.-T. King, and W.-C. Lin, "Measuring the influences of musical parameters on cognitive and behavioral responses to audio notifications using EEG and large-scale online studies," in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 2019, p. 409.
[3] Y.-C. Lee, W.-C. Lin, J.-T. King, L.-W. Ko, Y.-T. Huang, and F.-Y. Cherng, "An EEG-based approach for evaluating audio notifications under ambient sounds," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2014, pp. 3817–3826.
[4] T. O. Zander, C. Kothe, S. Jatzev, and M. Gaertner, "Enhancing human-computer interaction with input from active and passive brain-computer interfaces," in Brain-Computer Interfaces. Springer, 2010, pp. 181–199.
[5] B. F. Yuksel, K. B. Oleson, L. Harrison, E. M. Peck, D. Afergan, R. Chang, and R. J. Jacob, "Learn piano with BACh: An adaptive learning interface that adjusts task difficulty based on brain state," in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 2016, pp. 5372–5384.
[6] C. T. Vi, I. Jamil, D. Coyle, and S. Subramanian, "Error related negativity in observing interactive tasks," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2014, pp. 3787–3796.
[7] C. C. Yang, H. Chen, and K. Hong, "Visualization of large category map for internet browsing," Decision Support Systems, vol. 35, no. 1, pp. 89–102, 2003.
[8] G. Murphy and C. M. Greene, "Perceptual load induces inattentional blindness in drivers," Applied Cognitive Psychology, vol. 30, no. 3, pp. 479–483, 2016.
[9] S. J. Isherwood, S. J. McDougall, and M. B. Curry, "Icon identification in context: The changing role of icon characteristics with user experience," Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 49, no. 3, pp. 465–476, 2007.
[10] V. Setlur and J. D. Mackinlay, "Automatic generation of semantic icon encodings for visualizations," in Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2014, pp. 541–550.
[11] D. Warnock, M. McGee-Lennon, and S. Brewster, "Multiple notification modalities and older users," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2013, pp. 1091–1094.
[12] M. Liljedahl and J. Fagerlönn, "Methods for sound design: A review and implications for research and practice," in Proceedings of the 5th Audio Mostly Conference: A Conference on Interaction with Sound. ACM, 2010, p. 2.
[13] C. Frauenberger and T. Stockman, "Auditory display design: An investigation of a design pattern approach," International Journal of Human-Computer Studies, vol. 67, no. 11, pp. 907–922, 2009.
[14] S.-C. Huang, R. G. Bias, and D. Schnyer, "How are icons processed by the brain? Neuroimaging measures of four types of visual stimuli used in information systems," Journal of the Association for Information Science and Technology, vol. 66, no. 4, pp. 702–720, 2015.
[15] A. A. Ghosh, T. E. Lockhart, and J. Liu, "Aging effect on detectability, criticality and urgency under various auditory conditions," Transportation Research Part F: Traffic Psychology and Behaviour, vol. 31, pp. 25–35, 2015.
[16] S. Garzonis, S. Jones, T. Jay, and E. O'Neill, "Auditory icon and earcon mobile service notifications: Intuitiveness, learnability, memorability and preference," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2009, pp. 1513–1522.
[17] S. Garzonis, "Mobile service awareness via auditory notifications," Ph.D. dissertation, University of Bath, 2010.
[18] J. Edworthy, S. Loxley, and I. Dennis, "Improving auditory warning design: Relationship between warning sound parameters and perceived urgency," Human Factors, vol. 33, no. 2, pp. 205–231, 1991.
[19] C. L. Krumhansl, "Rhythm and pitch in music cognition," Psychological Bulletin, vol. 126, no. 1, p. 159, 2000.
[20] G. Kramer, B. Walker, T. Bonebright, P. Cook, J. H. Flowers, N. Miner, and J. Neuhoff, "Sonification report: Status of the field and research agenda," Faculty Publications, Department of Psychology, 2010.
[21] M. Liljedahl and N. Papworth, "Using sound to enhance users' experiences of mobile applications," in Proceedings of the 7th Audio Mostly Conference: A Conference on Interaction with Sound. ACM, 2012, pp. 24–31.
[22] S. A. Brewster, P. C. Wright, and A. D. N. Edwards, "Experimentally derived guidelines for the creation of earcons," in Adjunct Proceedings of the British Computer Society Conference on Human-Computer Interaction, 1995, pp. 155–159.
[23] R. Jung, "Non-intrusive audio notification in emotion classified background music," in Proceedings of Meetings on Acoustics, vol. 9, no. 1. Acoustical Society of America, 2015, p. 050001.
[24] G. Kreutz, U. Ott, D. Teichmann, P. Osawa, and D. Vaitl, "Using music to induce emotions: Influences of musical preference and absorption," Psychology of Music, 2007.
[25] G. Husain, W. F. Thompson, and E. G. Schellenberg, "Effects of musical tempo and mode on arousal, mood, and spatial abilities," Music Perception: An Interdisciplinary Journal, vol. 20, no. 2, pp. 151–171, 2002.
[26] T. Komatsu, S. Yamada, K. Kobayashi, K. Funakoshi, and M. Nakano, "Artificial subtle expressions: Intuitive notification methodology of artifacts," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2010, pp. 1941–1944.
[27] B. C. Moore, An Introduction to the Psychology of Hearing. Brill, 2012.
[28] E. B. Slawinski and J. F. MacNeil, "Age, music, and driving performance: Detection of external warning sounds in vehicles," Psychomusicology: A Journal of Research in Music Cognition, vol. 18, no. 1-2, p. 123, 2002.
[29] S. Koelsch, T. Grossmann, T. C. Gunter, A. Hahne, E. Schröger, and A. D. Friederici, "Children processing music: Electric brain responses reveal musical competence and gender differences," Journal of Cognitive Neuroscience, vol. 15, no. 5, pp. 683–693, 2003.
[30] W. W. Gaver, "How do we hear in the world? Explorations in ecological acoustics," Ecological Psychology, vol. 5, no. 4, pp. 285–313, 1993.
[31] D. J. Levitin, This Is Your Brain on Music: Understanding a Human Obsession. Atlantic Books Ltd, 2011.
[32] E. M. M. Peck, B. F. Yuksel, A. Ottley, R. J. Jacob, and R. Chang, "Using fNIRS brain sensing to evaluate information visualization interfaces," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2013, pp. 473–482.
[33] Y.-P. Lim and P. C. Woods, "Experimental color in computer icons," in Visual Information Communication. Springer, 2010, pp. 149–158.
[34] E. M. Palmer, C. M. Brown, C. F. Bates, P. J. Kellman, and T. C. Clausner, "Perceptual cues and imagined viewpoints modulate visual search in air traffic control displays," in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 53, no. 17. SAGE Publications, 2009, pp. 1111–1115.
[35] C. Harrison, G. Hsieh, K. D. Willis, J. Forlizzi, and S. E. Hudson, "Kineticons: Using iconographic motion in graphical user interface design," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2011, pp. 1999–2008.
[36] S. J. McDougall, M. B. Curry, and O. de Bruijn, "Measuring symbol and icon characteristics: Norms for concreteness, complexity, meaningfulness, familiarity, and semantic distance for 239 symbols," Behavior Research Methods, Instruments, & Computers, vol. 31, no. 3, pp. 487–519, 1999.
[37] V. Setlur, C. Albrecht-Buehler, A. A. Gooch, S. Rossoff, and B. Gooch, "Semanticons: Visual metaphors as file icons," in Computer Graphics Forum, vol. 24, 2005, pp. 647–656.
[38] R. Leung, J. McGrenere, and P. Graf, "Age-related differences in the initial usability of mobile device icons," Behaviour & Information Technology, vol. 30, no. 5, pp. 629–642, 2011.
[39] K. Rızvanoğlu and Ö. Öztürk, "Cross-cultural understanding of the dual structure of metaphorical icons: An explorative study with French and Turkish users on an e-learning site," in Internationalization, Design and Global Development, ser. Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2009, pp. 89–98.
[40] T. Davies and A. Beeharee, "The case of the missed icon: Change blindness on mobile devices," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2012, pp. 1451–1460.
[41] S. McDougall and S. Isherwood, "What's in a name? The role of graphics, functions, and their interrelationships in icon identification," Behavior Research Methods, vol. 41, no. 2, pp. 325–336, 2009.
[42] R. Leung, "Improving the learnability of mobile device applications for older adults," in CHI '09 Extended Abstracts on Human Factors in Computing Systems. ACM, 2009, pp. 3125–3128.
[43] E. Solovey, P. Schermerhorn, M. Scheutz, A. Sassaroli, S. Fantini, and R. Jacob, "Brainput: Enhancing interactive systems with streaming fNIRS brain input," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2012, pp. 2193–2202.
[44] T. O. Zander and C. Kothe, "Towards passive brain-computer interfaces: Applying brain-computer interface technology to human-machine systems in general," Journal of Neural Engineering, vol. 8, no. 2, p. 025005, 2011.
[45] R. Näätänen, P. Paavilainen, T. Rinne, and K. Alho, "The mismatch negativity (MMN) in basic research of central auditory processing: A review," Clinical Neurophysiology, vol. 118, no. 12, pp. 2544–2590, 2007.
[46] R. K. Mehta and R. Parasuraman, "Neuroergonomics: A review of applications to physical and cognitive work," Frontiers in Human Neuroscience, vol. 7, 2013.
[47] K. Lukanov, H. A. Maior, and M. L. Wilson, "Using fNIRS in usability testing: Understanding the effect of web form layout on mental workload," in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 2016, pp. 4011–4016.
[48] J. L. Burt, D. S. Bartolome, D. W. Burdette, and J. R. Comstock Jr., "A psychophysiological evaluation of the perceived urgency of auditory warning signals," Ergonomics, vol. 38, no. 11, pp. 2327–2340, 1995.
[49] C. Glatz, S. S. Krupenia, H. H. Bülthoff, and L. L. Chuang, "Use the right sound for the right job: Verbal commands and auditory icons for a task-management system favor different information processes in the brain," in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 2018, p. 472.
[50] D. L. Schacter, Psychology, 2nd ed. New York, NY: Worth Publishers, 2011.
[51] A. Johnson and R. W. Proctor, Attention: Theory and Practice. Sage Publications, 2004.
[52] C. Escera, K. Alho, E. Schröger, and I. Winkler, "Involuntary attention and distractibility as evaluated with event-related brain potentials," Audiology and Neurotology, vol. 5, no. 3-4, pp. 151–166, 2000.
[53] J. Polich, "Updating P300: An integrative theory of P3a and P3b," Clinical Neurophysiology, vol. 118, no. 10, pp. 2128–2148, 2007.
[54] J. Ward, The Student's Guide to Cognitive Neuroscience. Psychology Press, 2015.
[55] G. W. Humphreys, C. J. Price, and M. J. Riddoch, "From objects to names: A cognitive neuroscience approach," Psychological Research, vol. 62, no. 2-3, pp. 118–130, 1999.
[56] Y.-Y. Yeh, D.-S. Lee, and Y.-H. Ko, "Color combination and exposure time on legibility and EEG response of icon presented on visual display terminal," Displays, vol. 34, no. 1, pp. 33–38, 2013.
[57] T. Althoff, E. Horvitz, R. W. White, and J. Zeitzer, "Harnessing the web for population-scale physiological sensing: A case study of sleep and performance," in Proceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2017, pp. 113–122.
[58] K. Reinecke and K. Z. Gajos, "LabintheWild: Conducting large-scale online experiments with uncompensated samples," in Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing. ACM, 2015, pp. 1364–1378.
[59] M. Eitz, J. Hays, and M. Alexa, "How do humans sketch objects?" ACM Transactions on Graphics, vol. 31, no. 4, art. 44, 2012.
[60] P. Sangkloy, N. Burnell, C. Ham, and J. Hays, "The Sketchy database: Learning to retrieve badly drawn bunnies," ACM Transactions on Graphics (TOG), vol. 35, no. 4, p. 119, 2016.
[61] A. Jahanian, P. Isola, and D. Wei, "Mining visual evolution in 21 years of web design," in Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, 2017, pp. 2676–2682.
[62] S. Dey, K. Karahalios, and W.-T. Fu, "Understanding the effects of endorsements in scientific crowdfunding," in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 2017, pp. 2376–2381.
[63] K. Reinecke, T. Yeh, L. Miratrix, R. Mardiko, Y. Zhao, J. Liu, and K. Z. Gajos, "Predicting users' first impressions of website aesthetics with a quantification of perceived visual complexity and colorfulness," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2013, pp. 2049–2058.
[64] L. Fridman, B. Reimer, B. Mehler, and W. T. Freeman, "Cognitive load estimation in the wild," in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 2018, p. 652.
[65] R. D. Patterson, Guidelines for Auditory Warning Systems on Civil Aircraft. Civil Aviation Authority, 1982.
[66] G. Leplaitre and I. McGregor, "How to tackle auditory interface aesthetics? Discussion and case study," 2004.
[67] R.-C. Ye, C.-T. Lin, and C.-F. Huang, "Exploring EEG spectral dynamics of music-induced emotions," 2010.
[68] S. Makeig, T.-P. Jung, D. G. Ghahremani, and T. J. Sejnowski, "Independent component analysis of simulated ERP data," Institute for Neural Computation, University of California, Technical Report INC-9606, 1996.
[69] C. Amezcua, M. A. Guevara, and J. Ramos-Loyo, "Effects of musical tempi on visual attention ERPs," International Journal of Neuroscience, vol. 115, no. 2, pp. 193–206, 2005.
[70] Z. Fu, G. Lu, K. M. Ting, and D. Zhang, "A survey of audio-based music classification and annotation," IEEE Transactions on Multimedia, vol. 13, no. 2, pp. 303–319, 2011.
[71] J. Heer and M. Bostock, "Crowdsourcing graphical perception: Using Mechanical Turk to assess visualization design," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2010, pp. 203–212.
[72] E. Van Eyken, G. Van Camp, and L. Van Laer, "The complexity of age-related hearing impairment: Contributing environmental and genetic factors," Audiology and Neurotology, vol. 12, no. 6, pp. 345–358, 2007.
[73] C. L. Baldwin, Auditory Cognition and Human Performance: Research and Applications. CRC Press, 2016.
[74] N. J. Salkind, Encyclopedia of Research Design. Sage, 2010, vol. 1.
[75] A. Bulling and T. O. Zander, "Cognition-aware computing," IEEE Pervasive Computing, vol. 13, no. 3, pp. 80–83, 2014.
[76] C. Vi and S. Subramanian, "Detecting error-related negativity for interaction design," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2012, pp. 493–502.
[77] D. Afergan, E. M. Peck, E. T. Solovey, A. Jenkins, S. W. Hincks, E. T. Brown, R. Chang, and R. J. Jacob, "Dynamic difficulty using brain metrics of workload," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2014, pp. 3797–3806.
[78] R. J. Jacob, "Phylter: A system for modulating notifications in wearables using physiological sensing," in Foundations of Augmented Cognition: 9th International Conference, AC 2015, Held as Part of HCI International 2015, Los Angeles, CA, USA, August 2-7, 2015, Proceedings, vol. 9183. Springer, 2015, p. 167.
[79] M. F. Pike, H. A. Maior, M. Porcheron, S. C. Sharples, and M. L. Wilson, "Measuring the effect of think aloud protocols on workload using fNIRS," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2014, pp. 3807–3816.
[80] S. J. McDougall, M. B. Curry, and O. de Bruijn, "The effects of visual information on users' mental models: An evaluation of Pathfinder analysis as a measure of icon usability," International Journal of Cognitive Ergonomics, vol. 5, no. 1, pp. 59–84, 2001.
[81] B. A. Taylor, D. M. Roberts, and C. L. Baldwin, "The role of age-related neural timing variability in speech processing," in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 55, no. 1. SAGE Publications, 2011, pp. 162–166.
[82] S. G. Goto, Y. Ando, C. Huang, A. Yee, and R. S. Lewis, "Cultural differences in the visual processing of meaning: Detecting incongruities between background and foreground objects using the N400," Social Cognitive and Affective Neuroscience, vol. 5, no. 2-3, pp. 242–253, 2010.
[83] Q. Ma, X. Wang, S. Dai, and L. Shu, "Event-related potential N270 correlates of brand extension," Neuroreport, vol. 18, no. 10, pp. 1031–1034, 2007.
[84] Y. Wang, J. Kong, X. Tang, D. Zhuang, and S. Li, "Event-related potential N270 is elicited by mental conflict processing in human brain," Neuroscience Letters, vol. 293, no. 1, pp. 17–20, 2000.
[85] J. R. Folstein and C. Van Petten, "Influence of cognitive control and mismatch on the N2 component of the ERP: A review," Psychophysiology, vol. 45, no. 1, pp. 152–170, 2008.
[86] C.-J. Chou, H.-W. Huang, C.-L. Lee, and C.-Y. Lee, "Effects of semantic constraint and cloze probability on Chinese classifier-noun agreement," Journal of Neurolinguistics, vol. 31, pp. 42–54, 2014.
[87] C. Herbert, B. M. Herbert, T. Ethofer, and P. Pauli, "His or mine? The time course of self-other discrimination in emotion processing," Social Neuroscience, vol. 6, no. 3, pp. 277–288, 2011.
[88] Y. C. Wu and S. Coulson, "Iconic gestures prime related concepts: An ERP study," Psychonomic Bulletin & Review, vol. 14, no. 1, pp. 57–63, 2007.
[89] M. Kutas and K. D. Federmeier, "Thirty years and counting: Finding meaning in the N400 component of the event-related brain potential (ERP)," Annual Review of Psychology, vol. 62, p. 621, 2011.
[90] Y. N. Yum, P. J. Holcomb, and J. Grainger, "Words and pictures: An electrophysiological investigation of domain specific processing in native Chinese and English speakers," Neuropsychologia, vol. 49, no. 7, pp. 1910–1922, 2011.
[91] S. G. Goto, A. Yee, K. Lowenberg, and R. S. Lewis, "Cultural differences in sensitivity to social context: Detecting affective incongruity using the N400," Social Neuroscience, vol. 8, no. 1, pp. 63–74, 2013.
[92] S. Fondevila, M. Martín-Loeches, L. Jiménez-Ortega, P. Casado, A. Sel, A. Fernández-Hernández, and W. Sommer, "The sacred and the absurd: An electrophysiological study of counterintuitive ideas (at sentence level)," Social Neuroscience, vol. 7, no. 5, pp. 445–457, 2012.
[93] R. M. Yerkes and J. D. Dodson, "The relation of strength of stimulus to rapidity of habit-formation," Journal of Comparative Neurology and Psychology, vol. 18, pp. 459–482, 1908.
[94] I. Burmistrov, T. Zlokazova, A. Izmalkova, and A. Leonova, "Flat design vs traditional design: Comparative experimental study," in Human-Computer Interaction – INTERACT 2015. Springer, 2015, pp. 106–114.
[95] N. Dell, V. Vaidyanathan, I. Medhi, E. Cutrell, and W. Thies, "'Yours is better!': Participant response bias in HCI," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2012, pp. 1321–1330.
[96] M. Böhmer and A. Krüger, "A study on icon arrangement by smartphone users," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2013, pp. 2137–2146.
[97] T. F. Liu, M. Craft, J. Situ, E. Yumer, R. Mech, and R. Kumar, "Learning design semantics for mobile apps," in Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology. ACM, 2018, pp. 569–579.
[98] D. J. Ketchen and C. L. Shook, "The application of cluster analysis in strategic management research: An analysis and critique," Strategic Management Journal, vol. 17, no. 6, pp. 441–458, 1996.
[99] M. Lagunas, E. Garces, and D. Gutierrez, "Learning icons appearance similarity," Multimedia Tools and Applications, pp. 1–19, 2018.
[100] Z. Wu, T. Kim, Q. Li, and X. Ma, "Understanding and modeling user-perceived brand personality from mobile application UIs," in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 2019, p. 213.
[101] A. Swearngin and Y. Li, "Modeling mobile interface tappability using crowdsourcing and deep learning," in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 2019, p. 75.
[102] L. F. Laursen, Y. Koyama, H.-T. Chen, E. Garces, D. Gutierrez, R. Harper, and T. Igarashi, "Icon set selection via human computation," in Pacific Graphics Short Papers. Goslar, Germany, 2016.
[103] J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, and R. Shah, "Signature verification using a 'Siamese' time delay neural network," in Advances in Neural Information Processing Systems, 1994, pp. 737–744.
[104] S. Chopra, R. Hadsell, and Y. LeCun, "Learning a similarity metric discriminatively, with application to face verification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2005, pp. 539–546.
[105] V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010, pp. 807–814.
[106] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[107] L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," Journal of Machine Learning Research, vol. 9, pp. 2579–2605, 2008.
[108] B. Jin, M. V. O. Segovia, and S. Süsstrunk, "Image aesthetic predictors based on weighted CNNs," in 2016 IEEE International Conference on Image Processing (ICIP). IEEE, 2016, pp. 2291–2295.
[109] F. Babiloni and L. Astolfi, "Social neuroscience and hyperscanning techniques: Past, present and future," Neuroscience & Biobehavioral Reviews, vol. 44, pp. 76–93, 2014.
[110] Y.-K. Ou and Y.-C. Liu, "Effects of sign design features and training on comprehension of traffic signs in Taiwanese and Vietnamese user groups," International Journal of Industrial Ergonomics, vol. 42, no. 1, pp. 1–7, 2012.
[111] F. Schroff, D. Kalenichenko, and J. Philbin, "FaceNet: A unified embedding for face recognition and clustering," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 815–823.