National Digital Library of Theses and Dissertations in Taiwan

Author: 曾義鈞 (Tseng, Yi-Chun)
Title (Chinese): 結合動態浮雕結構與流體互動的即時影片轉換
Title (English): Real-time Transferring Video to Dynamic Relief Structures with Fluid Interaction
Advisor: 黃世強 (Wong, Sai-Keung)
Committee members: 蔡侑庭 (Tsai, Yu-Ting), 林文杰 (Lin, Wen-Chieh), 王昱舜 (Wang, Yu-Shuen), 黃世強 (Wong, Sai-Keung)
Oral defense date: 2018-09-18
Degree: Master's
University: National Chiao Tung University (國立交通大學)
Department: Institute of Multimedia Engineering (多媒體工程研究所)
Discipline: Computer Science
Subfield: Software Development
Thesis type: Academic thesis
Publication year: 2018
Graduation academic year: 107
Language: English
Number of pages: 38
Keywords (Chinese): 互動模擬 (interactive simulation), 影像處理 (image processing)
Keywords (English): Interactive simulation, Image processing
Usage statistics:
  • Cited: 0
  • Views: 150
  • Rating: none
  • Downloads: 0
  • Bookmarked: 0
Abstract (Chinese, translated):
In this thesis, we propose a method that transfers video into dynamic relief structures combined with fluid interaction. The key idea is to convert the objects in a video into 2.5D relief structures and let them interact with fluid through fluid simulation. In addition, we use optical flow to track object motion and compute the movement direction and speed of the object contours. To reduce flickering of the dynamic relief structures caused by noise and image processing, the generated relief structures are smoothed in both the spatial and temporal domains. The pipeline consists of three stages: 1) relief generation (2.5D structures), 2) temporal processing, and 3) particle-based fluid simulation. We produce results for still images and for videos containing different motion patterns. We also conduct a user study to evaluate and validate our results in terms of perceptual feedback.
Abstract (English):
In this paper, we present a real-time framework that transfers video to dynamic relief structures with fluid interaction. Our method enables fluid interaction with video content. The key idea is that the dynamic 2.5D structures of the objects in the video are constructed, and fluid flows are then simulated to interact with these structures. There are three major stages: relief generation (2.5D structures), temporal processing, and waterflow simulation. We keep track of the objects using optical flow and then compute the movement directions of their contours. To reduce temporal flickering, the 2.5D structures are smoothed in both the spatial and temporal domains. We performed experiments on still images and on videos in which the objects exhibited various motion styles, and real-time user interaction was also demonstrated. We conducted a user study to evaluate our results based on the perceptual feedback of the participants; it indicated that our method can produce visually pleasing results.
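The abstracts describe a three-stage pipeline: relief generation (2.5D structures), temporal processing, and particle-based waterflow simulation. As a rough illustration only, not the thesis's implementation, the sketch below shows how the first two stages could be prototyped with generic tools: a per-frame 2.5D height map derived from luminance, dense Farnebäck optical flow as a stand-in for the contour tracking mentioned in the abstract, and an exponential moving average over time to suppress relief flicker. The input path, the luminance-to-height mapping, and the smoothing weight alpha are illustrative assumptions.

```python
# Minimal sketch (assumptions: OpenCV + NumPy, a hypothetical "input.mp4",
# and illustrative parameters) of the general ideas behind stages 1-2:
# build a 2.5D height map per frame, estimate object motion with dense
# Farneback optical flow, and smooth the relief over time to reduce flicker.
import cv2
import numpy as np

def relief_from_frame(frame_bgr, blur_ksize=5):
    """Map a frame to a 2.5D height field: brighter pixels become higher relief."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)   # spatial smoothing
    return gray.astype(np.float32) / 255.0                       # heights in [0, 1]

def contour_motion(prev_gray, curr_gray):
    """Dense Farneback optical flow; returns per-pixel (dx, dy) in pixels/frame."""
    return cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def smooth_temporally(prev_height, curr_height, alpha=0.3):
    """Exponential moving average over time to suppress relief flickering."""
    return (1.0 - alpha) * prev_height + alpha * curr_height

cap = cv2.VideoCapture("input.mp4")            # hypothetical input video
ok, frame = cap.read()
if not ok:
    raise SystemExit("could not read input.mp4")
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
height = relief_from_frame(frame)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    curr_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = contour_motion(prev_gray, curr_gray)        # motion of object contours
    height = smooth_temporally(height, relief_from_frame(frame))
    prev_gray = curr_gray                              # flow would drive the fluid stage
cap.release()
```

For the third stage, the abstracts mention a particle-based (waterflow) fluid simulation. The toy 2D SPH-style loop below illustrates the kind of density and pressure update such simulations perform; it is a generic sketch, not the thesis's solver, and every constant in it is an illustrative assumption.

```python
# Toy 2D SPH-style particle loop; constants are illustrative, not from the thesis.
import numpy as np

N, h, dt = 200, 0.06, 0.004                     # particle count, kernel radius, time step
mass, rest_density, stiffness = 0.02, 1000.0, 200.0
gravity = np.array([0.0, -9.81])

rng = np.random.default_rng(0)
pos = rng.random((N, 2)) * 0.5                  # particles seeded in a 0.5 x 0.5 box
vel = np.zeros((N, 2))

def poly6(r2):
    """Poly6 smoothing kernel for density estimation."""
    return 315.0 / (64.0 * np.pi * h**9) * np.maximum(h * h - r2, 0.0) ** 3

for _ in range(100):
    diff = pos[:, None, :] - pos[None, :, :]    # pairwise offsets, shape (N, N, 2)
    r2 = np.sum(diff ** 2, axis=-1)
    r = np.sqrt(r2) + 1e-9

    density = mass * poly6(r2).sum(axis=1)      # per-particle density
    pressure = np.maximum(stiffness * (density - rest_density), 0.0)

    # Spiky-kernel gradient and symmetric pressure force (simplified).
    grad = (-45.0 / (np.pi * h**6)
            * np.maximum(h - r, 0.0)[..., None] ** 2 * diff / r[..., None])
    pair = mass * (pressure[:, None] + pressure[None, :]) / (2.0 * density[None, :])
    force = -(pair[..., None] * grad).sum(axis=1)

    vel += dt * (force / density[:, None] + gravity)
    pos = np.clip(pos + dt * vel, 0.0, 0.5)     # crude box boundary
```

The two sketches are independent toys: in the thesis's pipeline, the relief and the contour motion act as the moving geometry that the fluid interacts with, and that coupling is not implemented here.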
Table of Contents:
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
Table of Contents iv
List of Figures vi
List of Tables viii
1 Introduction 1
2 Related Work 3
3 Our Method 5
3.1 Relief Generating 5
3.2 Temporal Processing 8
3.3 Waterflow Simulation 10
4 Rendering 20
5 Results 23
5.1 Performance analysis 23
5.2 User Study 28
5.3 Discussions and Limitations 30
6 Conclusion 33
Bibliography 34