National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Detailed Record

Author (Chinese): 程芙茵
Author (English): Cherng, Fu-Yin
Title (Chinese): 通過基於腦電圖的方法和大規模在線研究理解提示音和圖形圖標的使用性
Title (English): Understanding the Usability of Audio Notifications and Graphic Icons by EEG-based Approach and Large-scale Online Studies
Advisor (Chinese): 林文杰
Advisor (English): Lin, Wen-Chieh
Committee (Chinese): 張永儒、歐陽明、金榮泰、陳一平、陳炳宇、曾元琦、胡敏君、林文杰
Committee (English): Chang, Yung-Ju; Ouhyoung, Ming; King, Jung-Tai; Chen, I-Ping; Chen, Bing-Yu; Tseng, Yuan-Chi; Hu, Min-Chun; Lin, Wen-Chieh
Oral Defense Date: 2019-11-05
Degree: Doctoral
Institution: National Chiao Tung University (國立交通大學)
Department: Institute of Computer Science and Engineering (資訊科學與工程研究所)
Discipline: Engineering (工程學門)
Academic Field: Electrical and Computer Engineering (電資工程學類)
Document Type: Academic thesis
Publication Year: 2019
Graduation Academic Year: 108 (ROC calendar; 2019-2020)
Language: English
Pages: 112
Keywords (Chinese): 人機互動、認知神經人體工程學、腦機介面、提示音、圖形圖標、群眾外包、資料驅動設計
Keywords (English): Human-computer Interaction; Neuroergonomics; Brain-computer Interface; Audio Notifications; Graphic Icons; Crowdsourcing; Data-driven Design
Usage statistics:
  • Cited: 0
  • Views: 662
  • Rating: (none)
  • Downloads: 0
  • Bookmarked: 0
Abstract (Chinese): 了解使用者的認知狀態可以提供人機互動的研究者與設計師額外的訊息,以便知道特定設計對於使用者內部認知層面上的影響。信息過載的問題也顯示了在評估資訊設計時,取得使用者細微的認知以及行為表現的重要性,因為由此可讓研究者與設計師得以更好地挑選要將什麼樣的訊息、哪個訊息納入互動中。在所有信息中,我們專注於評估提示音與圖形圖標的可用性,因為它們時常在我們生活中被使用。本論文的目標是藉由研究使用者的認知回饋以及大量行為表現,以增加對於提示音以及圖形圖標使用性的了解。首先,我們展示利用腦電波以及大型線上實驗來驗證具有不同設計操弄的提示音的好處。接下來,我們應用這些方法驗證不同圖形圖標的使用性。最後,透過圖形圖標的大型線上實驗,我們搜集到 2,698 位線上工作者對於兩千個圖形圖標的評分資料,並利用這份資料訓練深度學習網路。這些深度學習模型的結果幫助我們探索圖標之間的視覺關係,並提供使用者對於新圖標感知的預測。經由腦電波以及大型線上實驗的發現,本論文不只提供了關於不同提示音和圖形圖標如何影響使用者感知的全面觀察,也提供了如何進行認知神經科學以及人機互動跨領域研究的洞見。
Abstract (English): Understanding users' cognitive states provides human-computer interaction researchers and designers with additional channels for assessing how specific design interventions affect users' internal states. The problem of information overload likewise highlights the importance of obtaining nuanced measures of users' cognitive and behavioral performance during evaluation, which allows researchers and designers to select more carefully what information to introduce into an interaction. Among all types of information, we focus on evaluating the usability of audio notifications and graphic icons because of their frequent use. The goal of this thesis is to advance the understanding of the usability of audio notifications and graphic icons by investigating users' cognitive responses and behavioral performance at a massive scale. First, we demonstrated the benefits of using an electroencephalography (EEG)-based method and large-scale online studies to evaluate audio notifications with different combinations of design features. Second, we applied these two methods to evaluating the usability of different graphic icons. Finally, through the online studies of graphic icons, we collected a dataset in which 2,698 online workers rated 2,000 icons, and we used this dataset to train deep neural networks. The resulting deep-learning models help us explore the visual relations between icons and predict users' perceptions of new icons. By triangulating the findings of the EEG-based and large-scale online studies, this thesis provides not only comprehensive observations of how different audio notifications and graphic icons affect users' perceptions, but also insights into how to conduct interdisciplinary research between the fields of cognitive neuroscience and human-computer interaction.
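As a concrete illustration of the modeling pipeline the abstract summarizes, the sketch below shows one plausible way to set up a Siamese network over icon images in PyTorch: two weight-sharing CNN branches embed a pair of icons, and a contrastive loss pulls together icons associated with the same function while pushing apart icons with different functions. This is a minimal sketch under assumed settings; the grayscale 64x64 input size, layer sizes, and margin value are illustrative choices, not the architecture actually used in the thesis.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class IconEncoder(nn.Module):
        # Shared CNN branch: maps a 1x64x64 icon image to an embedding vector.
        def __init__(self, embed_dim=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.fc = nn.Linear(64 * 16 * 16, embed_dim)

        def forward(self, x):
            return self.fc(self.conv(x).flatten(1))

    def contrastive_loss(z1, z2, same_function, margin=1.0):
        # Contrastive loss in the style of Chopra/Hadsell: same-function pairs
        # (label 1) are pulled together; different-function pairs (label 0)
        # are pushed at least `margin` apart in the embedding space.
        d = F.pairwise_distance(z1, z2)
        return (same_function * d.pow(2) +
                (1.0 - same_function) * F.relu(margin - d).pow(2)).mean()

    # Toy usage: both icons of a pair go through the SAME encoder (weight sharing).
    encoder = IconEncoder()
    icon_a = torch.randn(8, 1, 64, 64)          # batch of 8 icon images
    icon_b = torch.randn(8, 1, 64, 64)
    label = torch.randint(0, 2, (8,)).float()   # 1 = same function, 0 = different
    loss = contrastive_loss(encoder(icon_a), encoder(icon_b), label)
    loss.backward()

Once trained on pair labels derived from the crowdsourced ratings, the encoder's embeddings could be visualized (e.g., with t-SNE) to explore visual relations between icons, and a similar CNN trunk with a classification or regression head could be fitted to predict per-icon measures such as semantic distance or familiarity, in the spirit of Chapter 5.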
摘要 (Chinese Abstract) ............................... i
Abstract.......................................... ii
Acknowledgment..................................... iii
Table of Contents ..................................... iv
List of Tables........................................ vii
List of Figures ....................................... viii
1 Introduction....................................... 1
2 Related Work ..................................... 4
2.1 Audio-notification Design ............................ 4
2.2 Graphic-icon Design ............................... 5
2.3 Brain-computer Interface on Human-computer Interaction . . . . . . . . . . . 6
2.3.1 EEG Measurements of Audio Notifications . . . . . . . . . . . 6
2.3.2 Cognitive Information Processing for Graphic Icons . . . . . . . . . . 8
2.4 Large-Scale Online Studies............................ 8
2.4.1 Large-Scale Online Studies on Psychology . . . . . . . . . . . . . . . 8
2.4.2 Large-Scale Online Studies on Human-computer Interaction . . . . . . 9
3 Audio Notifications................................... 11
3.1 EEG Study .................................... 13
3.1.1 Participants and Device.......................... 13
3.1.2 Material Preparation ........................... 14
3.1.3 Methodology ............................... 14
3.1.4 EEG-data Recording and Processing................... 15
3.1.5 Results .................................. 16
3.1.6 Discussion of EEG Study ........................ 18
3.2 Large-Scale Online Studies............................ 21
3.2.1 Task.................................... 21
3.2.2 Participants and Procedure........................ 21
3.2.3 Behavioral-data Recording and Processing . . . . . . . . . . . . . . . 22
3.2.4 Results .................................. 24
3.2.5 Discussion of Large-scale Online Studies . . . . . . . . . . . . . . . . 28
4 Graphic Icons...................................... 30
4.1 EEG Experiment Design ............................. 32
4.1.1 Material: Graphic Icons ......................... 33
4.1.2 Participants and Procedure........................ 35
4.1.3 EEG Recording and Processing ..................... 36
4.2 EEG Experiment 1: Function-icon Matching .................. 36
4.2.1 Task.................................... 37
4.2.2 Data Analysis............................... 38
4.2.3 Results .................................. 38
4.2.4 Experiment 1 Findings and Discussion.................. 40
4.3 EEG Experiment 2: Icon Selection Under Sliding . . . . . . . . . . . . . . . . 41
4.3.1 Task.................................... 42
4.3.2 Data Analysis............................... 43
4.3.3 Results .................................. 43
4.3.4 Experiment 2 Findings and Discussion.................. 45
4.4 Behavioral Experiment 3: Icon Selection From Within a Grid . . . . . . . . . . 47
4.4.1 Task and Procedure ........................... 48
4.4.2 Measurement and Data Analysis..................... 49
4.4.3 Results .................................. 49
4.4.4 Findings and Discussion ......................... 50
4.5 Discussion of EEG and Behavioral Experiments . . . . . . . . . . . . . . . . . 52
4.6 Large-Scale Online Studies............................ 54
4.6.1 Icon Dataset Collection.......................... 54
4.6.2 Crowdsourcing Data Collections..................... 55
4.6.3 Results of Exploratory Analysis ..................... 56
4.6.4 Effects of user-specific Factors...................... 60
5 Building Computational Models on Icon Dataset .................. 63
5.1 Siamese Network: Feature Space for Icons with Different Functions . . . . . . 66
5.1.1 Results and Findings of Siamese Network . . . . . . . . . . . . . . . . 67
5.2 CNN Classification and Regression: Predict Semantic Distance and Familiarity of Icons...................................... 72
5.2.1 CNN Classification............................ 74
5.2.2 CNN Regression ............................. 77
5.3 Potential Applications............................... 82
6 Discussion........................................ 86
6.1 Advantages and Limitations of EEG-based Method . . . . . . . . . . . . . . . 86
6.2 Advantages and Limitations of Large-scale Online Study . . . . . . . . . . . . 87
6.3 Complements between EEG and Online Studies. . . . . . . . . . . . . . . . . 87
7 Limitations and Future Works ......................... 89
7.1 Audio Notifications................................ 89
7.2 Graphic Icons................................... 90
7.3 Building Computational Models....................... 91
7.4 Overall Future Works............................... 91
8 Conclusion ....................................... 93
References ......................................... 95
Curriculum Vitae ..................................... 108
Electronic Full Text: publicly available online from 2024-11-10.