Author: Chun-I Wang (王鈞奕)
Title (Chinese): 運用群眾力量以建構基於線稿圖式檢索的手機設計範例資料庫
Title (English): Leveraging Crowd for Creating a Wireframe-based Mobile Design Pattern Gallery
Advisor: Jane Yung-jen Hsu (許永真)
Committee members: Hao-Hua Chu (朱浩華), Robby Findler, Liwei Chan (詹力韋), Wan-Rong Jih (紀婉容)
Oral defense date: 2014-07-30
Degree: Master
Institution: National Taiwan University
Department: Graduate Institute of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Document type: Academic thesis
Year of publication: 2014
Academic year of graduation: 102 (2013–2014)
Language: English
Number of pages: 43
Keywords (Chinese): 人力運算, 標注工具
Keywords (English): Human Computation, Annotation Tool
Abstract (Chinese):
With the proliferation of smartphones and mobile networks, people can conveniently access the information they need through mobile apps wherever they are. Driven by this huge demand, more and more developers are moving into mobile app development and design. However, because of constraints specific to mobile interfaces, such as varied contexts of use and limited screen size, mobile UI design involves very different considerations, and producing a good mobile interface is relatively difficult for novice designers. In this thesis we implement Dmatch, a system that retrieves design examples through wireframe sketches. We investigate in depth how to leverage the abundant workforce of the Amazon Mechanical Turk crowdsourcing platform to accurately and efficiently recover the UI element composition of each screenshot, so as to support wireframe-based exploration. The task of annotating UI elements not only requires expertise in recognizing the elements; the annotation itself is also time-consuming and laborious. To enable untrained workers to complete the task accurately, we use learning from examples to improve workers' recognition ability, combined with an intelligent annotation tool that reduces their annotation workload. Using a two-stage "draw, then verify" mechanism, our experimental results show that the UI annotations produced by Dmatch achieve good precision and recall.

Abstract (English):
The ubiquitous availability of smartphones has made it easier than ever for people to acquire information through apps. Driven by the huge demand for mobile apps, more and more designers and developers are entering the area of mobile app design. However, due to varied contexts of use and limited screen size, it is difficult for novice designers and developers to design a good mobile app. In this work, we introduce Dmatch, a wireframe-based design exploration tool that helps designers visually query mobile designs for inspiration. Dmatch crowdsources UI element annotations of mobile design images through Amazon Mechanical Turk. The core challenge is to enable a crowd of non-experts to generate UI element annotations precisely. Our key observation is that drawing UI element annotations both requires expertise to recognize the elements and is time-consuming. We therefore provide an in-HIT guiding-example aid to enhance workers' knowledge and a semi-automatic intelligent annotation tool to reduce their workload. Using the "Draw-Verify" workflow, our experiments demonstrate that Dmatch interprets UI elements with high precision and recall.
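To make the precision/recall evaluation mentioned above concrete, here is a minimal illustrative sketch, not code from the thesis: it scores crowd-drawn UI element boxes against expert annotations by greedy IoU matching. The box format, the greedy matching strategy, and the 0.5 IoU threshold are all assumptions made for illustration.

```python
# Illustrative sketch (assumed, not the thesis's actual evaluation code):
# score crowd-drawn UI element boxes against expert boxes via IoU matching.

from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (left, top, right, bottom) in pixels


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def precision_recall(crowd: List[Box], expert: List[Box], thresh: float = 0.5):
    """Greedily match each crowd box to an unused expert box with IoU >= thresh,
    then report precision (matched / drawn) and recall (matched / ground truth)."""
    matched = set()
    true_positives = 0
    for c in crowd:
        best, best_iou = None, 0.0
        for i, e in enumerate(expert):
            if i in matched:
                continue
            score = iou(c, e)
            if score >= thresh and score > best_iou:
                best, best_iou = i, score
        if best is not None:
            matched.add(best)
            true_positives += 1
    precision = true_positives / len(crowd) if crowd else 0.0
    recall = true_positives / len(expert) if expert else 0.0
    return precision, recall


if __name__ == "__main__":
    crowd_boxes = [(10, 10, 100, 40), (15, 60, 110, 95)]
    expert_boxes = [(12, 12, 98, 42), (200, 200, 260, 240)]
    print(precision_recall(crowd_boxes, expert_boxes))  # (0.5, 0.5)
```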

1 Introduction 1
1.1 Motivation 1
1.2 Challenges 2
1.3 Proposed Method 3
2 Related Work 5
2.1 Computer Vision 5
2.1.1 Edge-based Approach 5
2.1.2 Template Matching-based Approach 6
2.2 Crowdsourcing 6
2.2.1 Pros and Cons of the Mechanical Turk Platform 6
2.2.2 Crowdsourcing Workflows 7
2.2.3 Quality Control for Image Annotation 7
3 Methodology 9
3.1 Pilot Test 9
3.2 Design Decisions 10
3.3 Workflow Overview 11
3.4 Drawing Task 12
3.4.1 Instruction 12
3.4.2 In-Hit Annotation Aid 12
3.5 Verification Task Design 14
3.6 Precision Verification Task 15
3.6.1 Instructions 15
3.6.2 Quality Control 16
3.7 Recall Verification Task 16
3.7.1 Quality Control 17
4 UI Candidate Detection 21
4.1 UI Element Candidates Extraction 22
4.1.1 Connected Component Analysis 22
4.1.2 X-Y Cut for Grids 22
4.2 Textline Extraction 23
4.3 UI Element Candidates Elimination 23
4.4 Evaluation 24
4.4.1 Dataset 24
4.4.2 Evaluation Metrics 24
4.4.3 UI Candidate Detection Result 25
5 Wireframe-based Design Example Retrieval 27
5.1 Ranking Function 27
6 Experiment 31
6.1 Dataset 31
6.2 Experiment I: Comparison between an Expert and Non-expert Crowds 31
6.2.1 Annotation Quality Comparison 32
6.2.2 Cost Comparison 34
6.3 Experiment II: Effect of In-hit Annotation Aid on Draw Task 34
6.3.1 Annotation Quality Comparison 34
6.3.2 Workers' Behavior 35
6.3.3 Efficiency 36
7 Conclusion and Future Work 39
Bibliography 41

