
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 黃大源 (Da-Yuan Huang)
Title: 探索指觸輸入介面以增強沉浸式多媒體互動
Title (English): Exploring Finger Touch Interface for Enhancing Immersive Multimedia Interactions
Advisor: 洪一平 (Yi-Ping Hung)
Committee: 傅楸善 (Chiou-Shann Fuh), 歐陽明 (Ming Ouhyoung), 陳炳宇 (Bing-Yu Chen), 曾元琦 (Yuan-Chi Tseng), 賴祐吉 (Yu-Chi Lai), 王聖銘 (Shen-Ming Wang)
Defense date: 2016-07-14
Degree: Doctoral
Institution: National Taiwan University (國立臺灣大學)
Department: Graduate Institute of Networking and Multimedia
Discipline: Computer Science
Field: Networking
Document type: Academic thesis
Year of publication: 2016
Graduation academic year: 104
Language: English
Pages: 93
Keywords (Chinese): 人體工學, 指觸介面, 輸入表達性, 輸入可得性, 沉浸式多媒體互動
Keywords (English): ergonomics, finger touch, expressivity, availability, immersive multimedia interaction
Usage statistics:
  • Cited by: 0
  • Views: 383
  • Downloads: 0
  • Bookmarked: 0
Immersive multimedia systems aim to provide users with interactive experiences that feel as if they were truly present. To offer intuitive, easy-to-learn operation, most multimedia interaction systems today adopt finger touch input interfaces. However, interface layouts typically follow the shape of the input device rather than the spatial relationship between the user's fingers and the device, leaving some widgets difficult to reach and thus limiting input availability. Moreover, common touch interfaces simply digitize a finger as a single two-dimensional point, offering limited input expressivity. These restricted inputs in turn reduce the sense of immersion during interaction. The goal of this dissertation is to explore how ergonomics can be incorporated into touch interface design to strengthen both input expressivity and availability. Studying the range of finger motion while users operate an interface helps us improve the spatial layout of widgets, enhancing input availability, while the precise, multi-dimensional nature of the human body can be exploited to increase the expressivity of finger input. This dissertation examines several finger touch interfaces, including mobile devices, wearable devices, and on-body input interfaces. For each of these touch interfaces, we discuss how to redesign it with ergonomics in mind, based on the characteristics of its use, and propose design guidelines and prototype systems. Our results can benefit existing finger touch interfaces by strengthening input expressivity and availability, and can further benefit immersive multimedia systems that rely on finger touch input: users can complete their input quickly and keep their attention on the multimedia content, maintaining their sense of immersion.

Immersive multimedia systems aim to provide users with immersive experiences, and usually adopt touch interfaces for intuitive and learnable control. However, the layout of touch widgets is usually arranged without considering the spatial relationship between the device and the human body, leaving some widgets hard to reach; the availability of finger input is therefore sometimes restricted. Furthermore, contemporary touch interfaces digitize fingers as simplistic two-dimensional points, offering limited expressivity of finger input. These insufficiencies can degrade immersion during multimedia interactions. In this dissertation, we aim to enhance finger touch by considering ergonomic factors. By understanding the anatomy and motor capability of the body, we can improve layout design for more reachable interfaces; in addition, the extra degrees of freedom of our sophisticated body can be captured and turned into additional input expressivity. We address different touch user interfaces, including mobile, wearable, and on-body interfaces, and propose design guidelines and prototype systems for each. Our research results not only benefit finger touch interfaces themselves, but also help future designers build better multimedia systems that use finger touch input.

Acknowledgements v
Abstract (in Chinese) vii
Abstract ix
1 Introduction 1
1.1 Background 1
1.2 Our Proposed Solution: Considering Ergonomics 3
1.3 Dissertation Organization 4

2 Related Work 9
2.1 Enhancing the Availability of Finger Touch Input 9
2.1.1 Enhancing Input Availability with Beyond-Surface Interactions for Large-scale Touch Interfaces 9
2.1.2 Enhancing Reachability of One-Handed Thumb-based Interactions for Large-screen Mobile Devices 10
2.2 Enhancing the Expressivity of Finger Touch Input 11
2.2.1 Enhancing Input Expressivity with Tangible Objects 11
2.2.2 Single-Tap Mode Switching by Fingers 11
2.2.3 Multi-Step Mode Switching by Fingers 12
2.3 Using On-Body Interface to Enhance Touch Interactions 13
2.3.1 On-Body Interface on Different Body Locations 13
2.3.2 Understanding On-Body Touch Interfaces 14

3 i-m-Cave: An Immersive Multimedia Platform for Virtual Touring 15
3.1 Introduction 15
3.2 Design Considerations and Multimedia Contents 16
3.2.1 Design Considerations Based on a Field Study 16
3.2.2 Multimedia Contents 16
3.3 Immersive Multimedia Systems 17
3.3.1 Overview 17
3.3.2 Tabletop System 18
3.3.3 Tablet Version 23
3.3.4 Head-Mounted Display Version 25
3.4 Exploring Finger Touch Interface Owing to the Inspiration of i-m-Cave 27
3.5 Summary 28

4 Enhancing Input Availability by Considering Thumb Anatomy 29
4.1 Introduction 29
4.1.1 Pilot Study 30
4.2 Design 31
4.2.1 Adaptive comfort zone and moding technique 31
4.2.2 CornerSpace 32
4.2.3 BezelSpace 32
4.3 Evaluation 34
4.3.1 Task 34
4.3.2 Apparatus and Participants 35
4.3.3 Experimental Design 35
4.4 Results and discussion 35
4.4.1 Selection Time 36
4.4.2 Error Rate 37
4.4.3 Subjective Preference 37
4.4.4 Discussion 37
4.5 Summary 38

5 Enhancing Input Expressivity by Considering Touch Precision 39
5.1 Introduction 39
5.2 User Study A: Targeting on a Finger Pad 41
5.2.1 Rationale behind the Use of Contact Point Model 41
5.2.2 Interface and Apparatus 42
5.2.3 Task and Procedure 43
5.2.4 Participants 44
5.2.5 Results and Discussion 44
5.3 User Study B: Working with Touch Interaction 46
5.3.1 Interface and Apparatus 46
5.3.2 Task and Procedure 46
5.3.3 Participants 46
5.3.4 Results and Discussion 47
5.4 Prototype 48
5.4.1 Design Consideration 48
5.4.2 Implementation 49
5.4.3 Example Applications 49
5.5 Understanding TouchSense on Real Applications 51
5.5.1 Calculator: Discrete Interaction 51
5.5.2 Map Viewer: Continuous Interaction 53
5.5.3 Discussion 56
5.6 Summary 57

6 Enhancing Input Expressivity and Availability by Considering Thumb Anatomy and Touch Precision 59
6.1 Introduction 59
6.2 Study 1: Comfort Regions 61
6.2.1 Task and Procedure 62
6.2.2 Participants 62
6.2.3 Results and Discussions 62
6.3 Study 2: Button Widgets on Fingers 64
6.3.1 Apparatus 65
6.3.2 Task and Procedure 67
6.3.3 Participants 67
6.3.4 Hypothesis 68
6.3.5 Results and Discussions 69
6.4 Study 3: Stroke Gesture on Fingers 70
6.4.1 Apparatus 72
6.4.2 Task and Procedure 72
6.4.3 Participants 74
6.4.4 Results and Discussions 74
6.5 Interface Design Consideration 78
6.6 Prototype Implementation 79
6.6.1 Hardware Design Consideration 79
6.6.2 Hardware 80
6.6.3 Tracking Algorithm 81
6.6.4 Applications 81
6.6.5 Limitation 82
6.7 Summary 82

7 Conclusion and Future Work 83
7.1 Summary of the Thesis 83
7.2 Future Direction 84

Bibliography 87

