National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Graduate student: 吳俞震
Graduate student (English): Yu-Jhen Wu
Thesis title: 漫畫風格分鏡腳本產生系統
Thesis title (English): Automatic Storyboard Generated System With Comic Style
Advisor: 江佩穎
Oral defense committee: 謝東儒, 朱宏國, 姚智原, 江佩穎
Oral defense date: 2016-07-21
Degree: Master's
Institution: National Taipei University of Technology (國立臺北科技大學)
Department: Graduate Institute of Computer Science and Information Engineering (資訊工程系研究所)
Discipline: Engineering
Academic field: Electrical and Computer Engineering
Document type: Academic thesis
Graduation academic year: 104 (2015–2016)
Language: Chinese
Keywords (Chinese): 關鍵姿勢擷取, 非真實渲染
Keywords (English): keypose extraction; non-photorealistic rendering
Statistics:
  • Cited by: 1
  • Views: 373
  • Rating: (none)
  • Downloads: 17
  • Bookmarked: 1
This thesis presents an automated comic-style storyboard generation system. By recording a user's performance as they control virtual characters and virtual objects, the system automatically analyzes the trajectories of the objects in the scene, identifies the key frames the user intends to convey, and renders them as comic images. Users who are unfamiliar with computer software or unskilled in comic drawing can thus use the system to present content such as story plots, tool-operation procedures, or product instructions clearly in comic form. The system uses depth-sensing technology to capture the human skeleton and, after mapping the skeleton data, applies it to a 3D virtual figure, allowing the user to freely control figures in a virtual 3D scene. For controlling multiple virtual characters, it provides a multi-pass recording function, so a single user can play several roles and realize the multi-character interactions found in comics. Finally, the system records the user's performance, infers the story the user wants to present from the generated skeleton trajectories, automatically produces a storyboard, and renders the result as non-photorealistic comic images. With this system, users need not understand complex drawing tools and techniques or master comic-drawing skills; simple body language combined with manipulation of virtual objects is enough to express the intended content, easily create a comic storyboard, and share the result.
This thesis presents an automated storyboard generation system that allows users to play a role in a 3D scene and give a short performance. The system records the scene, automatically extracts keyposes, and depicts the selected frames in comic style. Users can produce any kind of comic, such as a story, a facility-operation guide, or a product manual, without computer expertise or comic-drawing skills. At the initialization step, the user only needs to choose characters and practice controlling them. The user then performs a short story in the scene while the system records skeleton trajectories and object positions. After the performance, the system produces keypose candidates with our keypose-extraction algorithm; the user selects keyframes from the candidates and adjusts the composition. Finally, the system depicts the scene in comic style and produces a comic storyboard. With our system, people can easily make comic storyboards without any drawing ability.
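The keypose-extraction idea described in the abstract can be illustrated with a small sketch. This is an assumed simplification, not the thesis's actual algorithm: a performer tends to hold a pose at the moments that matter, so frames where the total joint motion of the recorded skeleton drops near zero are treated as keypose candidates, and candidates too close in time are filtered out.

```python
# Illustrative sketch only -- an assumed simplification, not the algorithm
# from the thesis. Frames with near-zero total joint motion (the performer
# holding a pose) become keypose candidates.
import numpy as np

def keypose_candidates(frames, rel_thresh=0.2, min_gap=10):
    """frames: array of shape (T, J, D) -- J joint positions over T frames.
    Returns frame indices proposed as keypose candidates."""
    frames = np.asarray(frames, dtype=float)
    # Per-frame motion: summed joint displacement from the previous frame.
    motion = np.linalg.norm(np.diff(frames, axis=0), axis=2).sum(axis=1)
    motion = np.concatenate([[motion[0]], motion])  # pad back to length T
    # Low-motion frames: the performer is holding a pose.
    low = np.where(motion <= rel_thresh * motion.max())[0]
    # Drop candidates too close in time (near-duplicate poses).
    selected = []
    for t in low:
        if not selected or t - selected[-1] >= min_gap:
            selected.append(int(t))
    return selected

# Toy trajectory: one joint moves, pauses, moves again, pauses.
xs = list(range(10)) + [9] * 5 + list(range(10, 20)) + [19] * 5
frames = np.array(xs, dtype=float).reshape(-1, 1, 1)
print(keypose_candidates(frames))  # the two pauses begin at frames 10 and 25
```

In the actual system a motion-similarity filter (Section 3.3.2 of the thesis) compares the candidate poses themselves; the simple time-gap filter here merely stands in for that step.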
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
Table of Contents iv
List of Figures vi
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation and Objectives 1
1.3 Thesis Organization 2
1.4 Contributions 2
Chapter 2 Related Work 3
2.1 Non-Photorealistic Rendering 3
2.2 Motion Capture Methods 6
2.3 Keyframe Extraction 10
2.4 Keypose Extraction 12
2.5 Edge Detection Techniques 15
2.5.1 Sobel Operator 15
2.5.2 Roberts Cross Operator 16
Chapter 3 System Overview 17
3.1 System Workflow 17
3.2 Recording Function 17
3.3 Keypose Recommendation 18
3.3.1 Keypose Analysis 18
3.3.2 Motion-Similarity Filtering 20
3.4 Comic-Style Depiction 21
3.4.1 Character Depiction 22
3.4.2 Scene Depiction 22
3.5 Composition Adjustment 24
Chapter 4 Experimental Results 26
4.1 Threshold Setting 26
4.1.1 Range Filtering 26
4.1.2 Experimental Procedure 27
4.1.3 Experimental Results 28
Chapter 5 Comic Storyboard Examples 29
5.1 Evacuation Guide 29
5.2 Assembly Instructions 30
5.3 Crime Scene 31
Chapter 6 Conclusion and Future Work 33
6.1 Computer Specifications 33
6.2 Setup Environment 33
6.3 System Limitations 33
6.3.1 Contour Noise 34
6.3.2 Character-Scene Intersection 34
6.4 Conclusion 35
6.5 Future Work 36
6.5.1 Character Depiction 36
6.5.2 Scene Depiction 36
6.5.3 Comic Effects 37
6.5.4 Storyboard Layout 37
Chapter 7 References 38
1. Clip Studio. http://www.clipstudio.net/
2. Inkscape. https://inkscape.org/zh-tw/
3. Stéphane Grabli, Emmanuel Turquin, Frédo Durand, and François X. Sillion. “Programmable rendering of line drawing from 3D scenes,” ACM Trans. Graph., vol. 29, no. 2, Article 18, April 2010.
4. Milán Magdics, Catherine Sauvaget, Rubén J. García, and Mateu Sbert. “Post-processing NPR effects for video games,” in Proceedings of the 12th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry (VRCAI '13), ACM, New York, NY, USA, 2013, pp. 147-156.
5. Adrien Bousseau, James P. O'Shea, Frédo Durand, Ravi Ramamoorthi, and Maneesh Agrawala. “Gloss perception in painterly and cartoon rendering,” ACM Trans. Graph., vol. 32, no. 2, Article 18, April 2013.
6. Paul L. Rosin and Yu-Kun Lai. “Non-photorealistic rendering of portraits,” in Proceedings of the Workshop on Computational Aesthetics (CAe '15), Eurographics Association, Aire-la-Ville, Switzerland, 2015, pp. 159-170.
7. Daiki Umeda, Tomoaki Moriya, and Tokiichiro Takahashi. “Real-time Manga-like depiction based on interpretation of bodily movements by using Kinect,” in SIGGRAPH Asia 2012 Technical Briefs (SA '12), ACM, New York, NY, USA, 2012, Article 28.
8. Yuto Nara, Genki Kunitomi, Yukua Koide, Wataru Fujimura, and Akihiko Shirai. “Manga generator: immersive posing role playing game in manga world,” in Proceedings of the Virtual Reality International Conference: Laval Virtual (VRIC '13), ACM, New York, NY, USA, 2013, Article 27.
9. Sho Sakurai, Takuji Narumi, Tomohiro Tanikawa, and Michitaka Hirose. “Augmented emotion by superimposing depiction in comics,” in Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology (ACE '11), ACM, New York, NY, USA, 2011, Article 66.
10. Richang Hong, Meng Wang, Guangda Li, Xiao-Tong Yuan, Shuicheng Yan, and Tat-Seng Chua. “iComics: automatic conversion of movie into comics,” in Proceedings of the 18th ACM International Conference on Multimedia (MM '10), ACM, New York, NY, USA, 2010, pp. 1599-1602.
11. Wii. http://www.nintendo.com/
12. Kinect. http://www.xbox.com/zh-TW/Kinect
13. PS Move. https://asia.playstation.com/tw/cht/ps3/psmove
14. Özge Samanci, Yanfeng Chen, and Ali Mazalek. “Tangible comics: a performance space with full-body interaction,” in Proceedings of the International Conference on Advances in Computer Entertainment Technology (ACE '07), ACM, New York, NY, USA, 2007, pp. 171-178.
15. Daehwan Kim and Daijin Kim. “A novel fitting algorithm using the ICP and the particle filters for robust 3D human body motion tracking,” in Proceedings of the 1st ACM Workshop on Vision Networks for Behavior Analysis (VNBA '08), ACM, New York, NY, USA, 2008, pp. 69-76.
16. C. Bregler and J. Malik. “Tracking people with twists and exponential maps,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '98), IEEE Computer Society, Washington, DC, USA, 1998, p. 8.
17. Sy Bor Wang and David Demirdjian. “Inferring body pose using speech content,” in Proceedings of the 7th International Conference on Multimodal Interfaces (ICMI '05), ACM, New York, NY, USA, 2005, pp. 53-60.
18. C. Barrón and I. A. Kakadiaris. “A convex penalty method for optical human motion tracking,” in First ACM SIGMM International Workshop on Video Surveillance (IWVS '03), ACM, New York, NY, USA, 2003, pp. 1-10.
19. Ankur Agarwal and Bill Triggs. “Monocular human motion capture with a mixture of regressors,” in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05) Workshops, vol. 3, IEEE Computer Society, Washington, DC, USA, 2005, p. 72.
20. Christian Theobalt, Marcus Magnor, Pascal Schüler, and Hans-Peter Seidel. “Combining 2D feature tracking and volume reconstruction for online video-based human motion capture,” in Proceedings of the 10th Pacific Conference on Computer Graphics and Applications (PG '02), IEEE Computer Society, Washington, DC, USA, 2002, p. 96.
21. Genliang Guan, Zhiyong Wang, Shiyang Lu, Jeremiah Da Deng, and David Dagan Feng. “Keypoint-based keyframe selection,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 4, April 2013, pp. 729-734.
22. Photchara Ratsamee, Yasushi Mae, Amornched Jinda-apiraksa, Mitsuhiro Horade, Kazuto Kamiyama, Masaru Kojima, and Tatsuo Arai. “Keyframe selection framework based on visual and excitement features for lifelog image sequences,” International Journal of Social Robotics, vol. 7, no. 5, 2015, pp. 859-874.
23. Andreas Girgensohn, Frank Shipman, and Lynn Wilcox. “Adaptive clustering and interactive visualizations to support the selection of video clips,” in Proceedings of the 1st ACM International Conference on Multimedia Retrieval (ICMR '11), ACM, New York, NY, USA, 2011, Article 34.
24. W. Wolf. “Key frame selection by motion analysis,” in Proceedings of the 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-96), vol. 2, 1996, pp. 1228-1231.
25. Jun-Wei Hsieh, Yung-Tai Hsu, Hong-Yuan Mark Liao, and Chih-Chiang Chen. “Video-based human movement analysis and its application to surveillance systems,” IEEE Transactions on Multimedia, vol. 10, no. 3, April 2008, pp. 372-384.
26. Tong-Yee Lee, Chao-Hung Lin, Yu-Shuen Wang, and Tai-Guang Chen. “Animation key-frame extraction and simplification using deformation analysis,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 4, April 2008, pp. 478-486.
27. Wenjuan Gong, Andrew D. Bagdanov, F. Xavier Roca, and Jordi Gonzàlez. “Automatic key pose selection for 3D human action recognition,” in Proceedings of the 6th International Conference on Articulated Motion and Deformable Objects (AMDO '10), Springer-Verlag, Berlin, Heidelberg, 2010, pp. 290-299.
28. Takeshi Miura, Takaaki Kaiga, Hiroaki Katsura, Katsubumi Tajima, Takeshi Shibata, and Hideo Tamamoto. “Adaptive keypose extraction from motion capture data,” Journal of Information Processing, vol. 22, no. 1, January 2014, pp. 67-75.