Researcher: 張巧慧
Researcher (English): Chiao-Hui Chang
Thesis Title: 智慧眼鏡眼神目標選取技術
Thesis Title (English): EyeLasso: Real-World Object Selection using Gaze-based Gestures
Advisor: 陳彥仰
Oral Defense Committee: 余能豪, 王浩全, 汪曼穎
Oral Defense Date: 2015-06-02
Degree: Master's
Institution: National Taiwan University (國立臺灣大學)
Department: Graduate Institute of Computer Science and Information Engineering (資訊工程學研究所)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2015
Graduating Academic Year: 103 (2014-2015)
Language: English
Number of Pages: 18
Chinese Keywords: 眼神互動 (gaze interaction), 眼神追蹤 (eye tracking)
English Keywords: Gestural Interaction, Input and Interaction Technologies, Eye tracker, Smart glasses
Selecting objects in real-world settings is currently difficult to automate and requires significant manual effort. We propose a gaze-based gesture approach using wearable eye trackers. However, effective gaze-based selection of real-world objects faces several challenges, such as the double-role problem and the Midas touch problem. Prior studies required explicit manual activation and deactivation to confirm the user's intention, which impedes fast and continuous interaction. We present EyeLasso, a fast gaze-based selection technique that lets users select the target they are looking at with a single Lasso gaze gesture, without any additional manual input. EyeLasso uses a Random Forest classifier for gesture detection and OpenCV's GrabCut implementation to improve the accuracy of target selection. Results from our six-user experiments with ten object-selection tasks, covering both gesture detection and item selection, show that EyeLasso selected the target with 90% accuracy without requiring manual input (0.17 unintended selections per two minutes and a 10% false-negative rate).
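
The pipeline described above (Random Forest detection of a lasso-shaped gaze gesture, followed by GrabCut refinement of the enclosed region) can be sketched roughly as follows. This is a minimal illustration assuming Python with scikit-learn and OpenCV; the feature choices and every name here (extract_features, select_object, gaze_window, and so on) are hypothetical stand-ins, not the thesis's actual implementation.

# Minimal sketch of an EyeLasso-style pipeline; names and features are illustrative only.
import numpy as np
import cv2
from sklearn.ensemble import RandomForestClassifier

def extract_features(gaze_window):
    """Summarize a window of (x, y) gaze samples into a fixed-length vector
    (path length, bounding-box size, closure distance, mean speed) -- one
    plausible featurization for detecting a lasso-shaped gaze gesture."""
    pts = np.asarray(gaze_window, dtype=np.float32)
    diffs = np.diff(pts, axis=0)
    path_len = np.sum(np.linalg.norm(diffs, axis=1))
    bbox_w, bbox_h = pts.max(axis=0) - pts.min(axis=0)
    closure = np.linalg.norm(pts[-1] - pts[0])      # a lasso should roughly close on itself
    mean_speed = path_len / max(len(pts) - 1, 1)
    return np.array([path_len, bbox_w, bbox_h, closure, mean_speed])

# Train a Random Forest on labeled gaze windows (1 = lasso gesture, 0 = other gaze behavior).
clf = RandomForestClassifier(n_estimators=100, random_state=0)
# clf.fit(np.stack([extract_features(w) for w in train_windows]), train_labels)

def select_object(scene_bgr, gaze_window):
    """If the gaze window is classified as a lasso, seed GrabCut with the
    lasso's bounding box to refine the selection toward the object contour."""
    feats = extract_features(gaze_window).reshape(1, -1)
    if clf.predict(feats)[0] != 1:
        return None                                  # no intentional selection detected
    pts = np.asarray(gaze_window, dtype=np.int32)
    x, y, w, h = cv2.boundingRect(pts)               # rough region traced by the lasso
    mask = np.zeros(scene_bgr.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(scene_bgr, mask, (x, y, w, h), bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    # Pixels labeled definite or probable foreground form the selected object mask.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)

In a running system the gesture detector would be applied continuously over the incoming gaze stream, which is what allows selection without an explicit manual activation step.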

1 Introduction 1
2 Related Work 4
2.1 Eye-based Interaction 4
2.2 Gaze Gestures 4
2.3 Object Recognition 5
2.4 Image Editing 5
3 User Study 1 6
3.1 Implementation of Target Selection 7
3.2 Result 7
4 User Study 2 10
4.1 Lab Control 10
4.2 Implementation of Gaze Gesture Detection 11
4.3 Result 12
4.4 Detection System Field Trial 13
5 Limitation and Discussion 14
6 Conclusion and Future Work 15
Bibliography 16
