[1] Tobii Technology. "World leader in eye tracking and gaze interaction". http://www.tobii.com/.
[2] R. Bednarik, H. Vrzakova, and M. Hradis. What do you want to do next: a novel approach for intent prediction in gaze-based interaction. In Proceedings of the Symposium on Eye Tracking Research and Applications, pages 83–90. ACM, 2012.
[3] A. Bulling, D. Roggen, and G. Tröster. It's in your eyes: towards context-awareness and mobile HCI using wearable EOG goggles. In Proceedings of the 10th International Conference on Ubiquitous Computing, pages 84–93. ACM, 2008.
[4] C. Djeraba. State of the art of eye tracking.
[5] H. Istance, R. Bates, A. Hyrskykari, and S. Vickers. Snap clutch, a moded approach to solving the Midas touch problem. In Proceedings of the 2008 Symposium on Eye Tracking Research & Applications, pages 221–228. ACM, 2008.
[6] H. Istance, A. Hyrskykari, L. Immonen, S. Mansikkamaa, and S. Vickers. Designing gaze gestures for gaming: an investigation of performance. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications, pages 323–330. ACM, 2010.
[7] R. J. Jacob. The use of eye movements in human-computer interaction techniques: what you look at is what you get. ACM Transactions on Information Systems (TOIS), 9(2):152–169, 1991.
[8] D. E. Kieras and A. J. Hornof. Towards accurate and practical predictive models of active-vision-based visual search. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, pages 3875–3884. ACM, 2014.
[9] D. Mardanbegi, D. W. Hansen, and T. Pederson. Eye-based head gestures. In Proceedings of the Symposium on Eye Tracking Research and Applications, pages 139–146. ACM, 2012.
[10] S. M. Munn and J. B. Pelz. FixTag: An algorithm for identifying and tagging fixations to simplify the analysis of data collected by portable eye trackers. ACM Transactions on Applied Perception (TAP), 6(3):16, 2009.
[11] C. Rother, V. Kolmogorov, and A. Blake. GrabCut: Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics (TOG), 23(3):309–314, 2004.
[12] L. E. Sibert and R. J. Jacob. Evaluation of eye gaze interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 281–288. ACM, 2000.
[13] L. E. Sibert, J. N. Templeman, and R. J. Jacob. Evaluation and analysis of eye gaze interaction. Technical report, DTIC Document, 2001.
[14] O. Špakov, P. Isokoski, and P. Majaranta. Look and lean: accurate head-assisted eye pointing. In Proceedings of the Symposium on Eye Tracking Research and Applications, pages 35–42. ACM, 2014.
[15] S. Stellmach and R. Dachselt. Still looking: investigating seamless gaze-supported selection, positioning, and manipulation of distant targets. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 285–294. ACM, 2013.
[16] T. Toyama, T. Kieninger, F. Shafait, and A. Dengel. Gaze guided object recognition using a head-mounted eye tracker. In Proceedings of the Symposium on Eye Tracking Research and Applications, pages 91–98. ACM, 2012.
[17] Y.-C. Tseng and A. Howes. The adaptation of visual search strategy to expected information gain. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1075–1084. ACM, 2008.
[18] J. Turner, A. Bulling, J. Alexander, and H. Gellersen. Cross-device gaze-supported point-to-point content transfer. In Proceedings of the Symposium on Eye Tracking Research and Applications, pages 19–26. ACM, 2014.
[19] J. Turner, A. Bulling, and H. Gellersen. Extending the visual field of a head-mounted eye tracker for pervasive eye-based interaction. In Proceedings of the Symposium on Eye Tracking Research and Applications, pages 269–272. ACM, 2012.
[20] R. Vertegaal. A Fitts' law comparison of eye tracking and manual input in the selection of visual targets. In Proceedings of the 10th International Conference on Multimodal Interfaces, pages 241–248. ACM, 2008.
[21] C. Ware and H. H. Mikaelian. An evaluation of an eye tracker as a device for computer input. In ACM SIGCHI Bulletin, volume 17, pages 183–188. ACM, 1987.
[22] J. O. Wobbrock, H. H. Aung, B. Rothrock, and B. A. Myers. Maximizing the guessability of symbolic input. In CHI '05 Extended Abstracts on Human Factors in Computing Systems, pages 1869–1872. ACM, 2005.