National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: HUNG, CHUN-MING (洪俊銘)
Title: Motion Capture Virtual YouTuber Live Streaming System (動作捕捉虛擬網紅直播系統)
Advisor: HSIEH, TUNG-JU (謝東儒)
Committee: HSIEH, TUNG-JU (謝東儒); CHANG, YANG-LANG (張陽郎); YEH, SHIH-CHING (葉士青)
Oral defense date: 2020-07-15
Degree: Master's
Institution: National Taipei University of Technology (國立臺北科技大學)
Department: Department of Computer Science and Information Engineering (資訊工程系)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2020
Academic year of graduation: 108 (2019–2020)
Language: Chinese
Pages: 60
Keywords (Chinese): 虛擬網紅; 動作捕捉; 人臉混合變形; 虛擬攝影機; iOS Face Cap; OSC
Keywords (English): Virtual YouTuber; Motion Capture; iOS Face Cap; OSC; BlendShape; Virtual Camera
Statistics:
  • Cited by: 4
  • Views: 1123
  • Downloads: 241
  • Bookmarked: 1
Chinese Abstract (translated):
This thesis develops an integrated virtual YouTuber live streaming system that combines motion capture devices commonly available on the market. By wearing motion capture equipment, users drive a virtual character on the system so that the character fully synchronizes with their movements. According to senior animators, the traditional animation pipeline requires animators to adjust a character's motion frame by frame, so the labor hours and cost are extremely high. Using this system to improve that pipeline, our measurements show it saves more than ten times the time of hand-keyed animation, a considerable gain.
Besides integrating motion capture devices so that users can easily control the virtual character's body movements, the system offers two ways to control the character's facial expressions: (1) iOS Face Cap: the app provides 52 channels linked to the virtual character's facial rig; the iOS depth camera captures the user's facial information, which is sent to the character in the system via the OSC (Open Sound Control) protocol. (2) Keyboard hotkeys: the character's facial blend shapes (BlendShapes) are built in advance, and hotkeys trigger them to produce mouth-shape changes and expressions of joy, anger, sadness, and happiness.
Beyond raising animation-production efficiency, the system can be combined with virtual camera moves and scene changes, so creators aspiring to become virtual YouTuber streamers can use it to showcase the virtual characters they design and perform preset scripts or varied themed content.

English Abstract:
This thesis presents an integrated virtual YouTuber live streaming system that incorporates motion capture devices commonly available on the market. Users control the system's virtual characters by wearing motion capture equipment, and the characters synchronize closely with the users' movements. According to senior animators, traditional animation production requires adjusting a character's movements frame by frame, so the time and cost are extremely high. Using this system to improve the traditional pipeline, we measured production to be more than ten times faster than manually adjusting the animation, a considerable improvement.
The system integrates motion capture devices that let users easily control the virtual character's body movements, and it provides two ways to control the character's facial expressions: (1) iOS Face Cap: the application provides 52 channels connected to the virtual character's facial rig; the iOS depth camera captures the user's facial information and sends it to the virtual character in the system over the OSC protocol. (2) Keyboard hotkeys: pre-built BlendShapes for the virtual character are triggered by keyboard hotkeys to produce basic mouth-shape changes and facial expressions.
In addition to improving the efficiency of animation production, the proposed technique can be combined with virtual camera operation and scene changes. Creators who aspire to become virtual YouTubers can use this system to showcase their virtual characters and perform preset scripts or a variety of themed content.
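Both facial-control paths described above ultimately reduce to writing per-channel BlendShape weights onto the character's face each frame. The sketch below (in Python rather than the Unity/C# the system itself would use) shows one way the two paths might funnel into a single weight table: clamping an incoming 52-channel Face Cap-style OSC frame, and expanding a hotkey preset into a full frame. The channel indices, preset values, and function names are illustrative assumptions, not the thesis's actual implementation.

```python
# Illustrative sketch: both facial-control paths write into one 52-channel
# BlendShape weight table. Channel meanings and presets are hypothetical.

NUM_CHANNELS = 52  # Face Cap streams 52 blendshape channels per frame


def apply_osc_frame(weights):
    """Validate a 52-channel OSC frame and clamp each weight to [0, 1]."""
    if len(weights) != NUM_CHANNELS:
        raise ValueError(f"expected {NUM_CHANNELS} channels, got {len(weights)}")
    return [min(1.0, max(0.0, float(w))) for w in weights]


# Hypothetical hotkey presets: sparse {channel index: weight} overrides.
HOTKEY_PRESETS = {
    "1": {0: 1.0},             # e.g. jaw-open channel -> open-mouth shape
    "2": {10: 0.8, 11: 0.8},   # e.g. both mouth corners -> happy expression
}


def apply_hotkey(key):
    """Expand a hotkey preset into a full 52-channel weight frame."""
    frame = [0.0] * NUM_CHANNELS
    for channel, weight in HOTKEY_PRESETS.get(key, {}).items():
        frame[channel] = weight
    return frame
```

In the real system these weights would be pushed to the character's skinned-mesh blend shapes every frame, and the OSC frames would arrive over the network via an OSC receiver such as the Unity module mentioned in Chapter 2.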

Abstract (Chinese) i
Abstract (English) ii
Acknowledgments iii
Table of Contents iv
List of Tables vi
List of Figures vii
Chapter 1 Introduction 1
1.1 Preface 1
1.2 Motivation 2
1.3 Research Background 4
1.4 Contributions 6
1.5 Thesis Organization 6
Chapter 2 Related Work and Techniques 7
2.1 Character Animation Production 7
2.2 Character Skeleton Tracking 9
2.3 Categories of Motion Capture Devices 14
2.4 Unity Skeleton-Related Modules 15
2.4.1 Final IK Asset 15
2.4.2 OSC Receiver 16
Chapter 3 Design Concepts 17
3.1 Virtual YouTuber Performance Scenarios 17
3.2 System Architecture 18
3.3 Motion Capture Device Design 19
3.3.1 HTC VIVE Motion Capture 21
3.3.2 Inertial Motion Capture Devices 24
3.3.3 Optical Motion Capture 28
3.3.4 No Device 30
3.4 Connected Device Design 30
3.4.1 Facial Expression Capture 30
3.4.2 VIVE Tracker Dynamic Virtual Camera 33
Chapter 4 User Operation Flow Design 34
4.1 User Operation Flow 34
4.1.1 Character Selection Area 35
4.1.2 Motion Capture Device Selection Area 36
4.1.3 Scene Selection Area 46
Chapter 5 Experimental Results and Discussion 48
5.1 System Development Environment 48
5.2 System Interface 49
5.3 System Implementation 51
5.4 Experimental Results and Questionnaire 53
Chapter 6 Conclusion and Future Work 55
6.1 Conclusion 55
6.2 System Limitations 56
6.3 Future Work 56
References 57
Appendix A FBX-to-VRM Format Conversion 59
Appendix B User Experience Feedback Questionnaire 60

1. The first virtual YouTuber, Kizuna AI's YouTube channel "A.I.Channel", https://www.youtube.com/channel/UC4YaOt1yT-ZeyB0OmxHgolA
2. Thomas B. Moeslund and Erik Granum (2001). A Survey of Computer Vision-Based Human Motion Capture. Computer Vision and Image Understanding, Volume 81, Issue 3, March 2001, Pages 231-268.
3. Hiromichi Masuda (2012). The Business Model of the Japanese Anime Industry.
4. Virtual KFC Colonel Sanders Mother's Day advertisement, https://www.youtube.com/watch?v=1OF8MRT3aQk
5. "One PIECE Day" VTuber performance using motion capture devices, https://www.youtube.com/watch?v=xds00gPAOk8&t=5s
6. HTC VIVE, https://www.vive.com/tw/
7. Xsens MVN, https://www.xsens.com/
8. FOHEART, http://www.foheart.com/en/
9. Noitom Neuron, https://neuronmocap.com/
10. CHINGMU (青瞳視覺), http://www.chingmu.com/
11. Katherine Pullen and Christoph Bregler (2002). Motion Capture Assisted Animation: Texturing and Synthesis. ACM Transactions on Graphics, Volume 21, Issue 3.
12. Katsu Yamane, James J. Kuffner, and Jessica K. Hodgins (2004). Synthesizing Animations of Human Manipulation Tasks. SIGGRAPH '04, Pages 532-539.
13. Peng Huang, Margara Tejera, John P. Collomosse, and Adrian Hilton (2015). Hybrid Skeletal-Surface Motion Graphs for Character Animation from 4D Performance Capture. ACM Transactions on Graphics, Volume 34, Issue 2.
14. Paul S. A. Reitsma and Nancy S. Pollard (2003). Perceptual Metrics for Character Animation: Sensitivity to Errors in Ballistic Motion. ACM Transactions on Graphics, Volume 22, Issue 3.
15. R. Rosales, M. Siddiqui, J. Alon, and S. Sclaroff (2001). Estimating 3D Body Pose Using Uncalibrated Cameras. Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001).
16. C. Theobalt, M. Magnor, P. Schuler, and H.-P. Seidel (2002). Combining 2D Feature Tracking and Volume Reconstruction for Online Video-Based Human Motion Capture. Proceedings of the 10th Pacific Conference on Computer Graphics and Applications, 2002.
17. Xiaolin K. Wei and Jinxiang Chai (2009). Modeling 3D Human Poses from Uncalibrated Monocular Images. 2009 IEEE 12th International Conference on Computer Vision.
18. Xiaolin Wei, Peizhao Zhang, and Jinxiang Chai (2012). Accurate Realtime Full-Body Motion Capture Using a Single Depth Camera. ACM Transactions on Graphics, Volume 31, Issue 6.
19. Takaaki Shiratori, Hyunsoo Park, Leonid Sigal, Yaser Ajmal Sheikh, and Jessica Kate Hodgins (2011). Motion Capture from Body-Mounted Cameras. ACM Transactions on Graphics, Volume 30, Issue 4.
20. Sangil Park and Jessica Kate Hodgins (2006). Capturing and Animating Skin Deformation in Human Motion. ACM Transactions on Graphics, Volume 25, Issue 3.
21. Mengüç Y., Park Y.-L., Martinez-Villalpando E., Aubin P., Zizook M., Stirling L., Wood R. J., and Walsh C. (2013). Soft Wearable Motion Sensing Suit for Lower Limb Biomechanics Measurements.
22. Peizhao Zhang, Kristin Siu, Jianjie Zhang, C. Karen Liu, and Jinxiang Chai (2014). Leveraging Depth Cameras and Wearable Pressure Sensors for Full-Body Kinematics and Dynamics Capture.
23. Vicon Motion Capture, https://www.vicon.com/
24. Final IK module, https://assetstore.unity.com/packages/tools/animation/final-ik-14290
25. Ilias Bergström, Anthony Steed, and Beau Lotto (2009). Mutable Mapping: Gradual Re-routing of OSC Control Data as a Form of Artistic Performance.
26. Bannaflak, https://www.bannaflak.com/
27. Mixamo character library, https://www.mixamo.com/#/
28. Vroid character library, https://vroid.com/
29. Bagozzi, R. P. and Yi, Y. (1988). On the Evaluation of Structural Equation Models. Journal of the Academy of Marketing Science, 16, 76-94.
