National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: Jhih-Ming Lin (林志明)
Title: Inpainting-based Multi-view Synthesis Algorithms and Its GPU Accelerated Implementation (基於影像貼補之多視角影像合成演算法及GPU加速實現)
Advisors: Bin-Da Liu (劉濱達), Jar-Ferr Yang (楊家輝)
Degree: Master's
Institution: National Cheng Kung University
Department: Department of Electrical Engineering (Master's and Doctoral Program)
Discipline: Engineering
Field: Electrical, Electronic, and Computer Engineering
Document type: Academic thesis
Year of publication: 2012
Graduating academic year: 100 (ROC era, 2011-2012)
Language: English
Number of pages: 73
Keywords (Chinese): view synthesis; image inpainting; GPU parallel computing; depth-image-based view synthesis
Keywords (English): View Synthesis; Inpainting; GPU; Depth-image-based rendering (DIBR)
Statistics:
  • Cited by: 0
  • Views: 215
  • Rating: (none)
  • Downloads: 22
  • Saved to reading lists: 0
Abstract (translated from Chinese):
This thesis proposes real-time inpainting-based multi-view synthesis algorithms: given a single view and its corresponding depth map, they output either two views for shutter-glasses displays or nine views for autostereoscopic (naked-eye) 3D TVs. In multi-view synthesis, repairing the cracks and holes in the warped images is the primary problem. This thesis first proposes an interpolation-based repair algorithm that uses texture information to decide how to fill small cracks; the remaining holes are then patched by exploiting the projection properties of multi-view warping, which yields better repair results. A new priority-determination method is also proposed to lower the computational complexity and reach real-time operation. To further raise performance, GPU parallel computing is adopted to reduce the data dependencies in the algorithm and increase its degree of parallelism. Simulation results show that the proposed algorithms synthesize high-quality images and can generate the two-view or nine-view output in real time.
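The depth-based warping step the abstract refers to can be sketched in a few lines. The snippet below is a deliberately simplified toy of my own, not the thesis's implementation or camera model: each pixel shifts horizontally by a disparity proportional to its depth value, and destination pixels that receive no source pixel become the holes the later stages must fill.

```python
import numpy as np

def dibr_warp_1d(color, depth, baseline=2.0):
    """Toy 1-D DIBR warp: shift each pixel left by a disparity proportional
    to its depth value (nearer = larger value = bigger shift). Destination
    pixels that no source pixel maps to are the holes/disocclusions."""
    h, w = depth.shape
    out = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)
    disp = np.rint(baseline * depth / 255.0).astype(int)
    for y in range(h):
        for x in range(w):            # scan order lets nearer pixels win ties
            nx = x - disp[y, x]
            if 0 <= nx < w:
                out[y, nx] = color[y, x]
                filled[y, nx] = True
    return out, ~filled

# A flat background (depth 0) with a "near" object in columns 5-7.
color = np.arange(1, 9, dtype=np.uint8).reshape(1, 8)
depth = np.zeros((1, 8), dtype=np.uint8)
depth[0, 5:] = 255                    # foreground shifts left by 2 pixels
warped, holes = dibr_warp_1d(color, depth)
print(holes[0])                       # a disocclusion opens where the object was
```

In the real system the per-pixel loop bodies are independent enough to map naturally onto GPU threads, which is what motivates the CUDA realization described in the thesis.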
Abstract (English):
In this thesis, inpainting-based multi-view synthesis algorithms are proposed for two-view shutter-glasses stereo and nine-view naked-eye (autostereoscopic) display systems; the inputs are a single color image and its depth map. Depth-image-based rendering (DIBR) algorithms usually produce holes and cracks in the synthesized views, so filling them is an important issue for providing high-quality 3D views. This work proposes a texture-based interpolation method that fixes cracks in the image using texture information. Afterwards, the holes are filled by inpainting based on the warping geometry to obtain better results. Besides, a priority method is proposed to reduce the computational complexity. Finally, the proposed algorithms are realized with the compute unified device architecture (CUDA), which also reduces the data dependence. Simulation results reveal that the proposed algorithms achieve good synthesis quality for two or nine views in real time.
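As a hedged illustration of the crack-filling idea mentioned above (the thesis decides the fill from texture information; this toy of mine simply averages the two valid horizontal neighbours of runs at most one pixel wide, leaving wider gaps for the inpainting stage):

```python
import numpy as np

def fill_cracks(img, hole_mask, max_width=1):
    """Toy crack filling: a horizontal run of missing pixels no wider than
    `max_width` counts as a crack and is filled with the average of its two
    valid neighbours; wider runs are left as holes for inpainting."""
    out = img.astype(float).copy()
    h, w = hole_mask.shape
    for y in range(h):
        x = 0
        while x < w:
            if not hole_mask[y, x]:
                x += 1
                continue
            end = x
            while end < w and hole_mask[y, end]:
                end += 1              # [x, end) is one run of missing pixels
            if x > 0 and end < w and end - x <= max_width:
                out[y, x:end] = (out[y, x - 1] + out[y, end]) / 2.0
            x = end
    return out

img = np.array([[10, 0, 20, 0, 0, 0, 30]], dtype=float)
mask = np.array([[False, True, False, True, True, True, False]])
print(fill_cracks(img, mask)[0])      # the 1-px crack becomes 15.0; the wide hole stays 0
```

Distinguishing narrow cracks (cheap local interpolation) from wide disocclusions (expensive inpainting) is what keeps the per-frame cost low enough for real-time operation.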
Abstract (Chinese)...i
Abstract (English)...iii
Table of Contents...vii
List of Figures...ix
List of Tables...xi
Chapter 1 Introduction...1
1.1 Three dimensional (3D) films...1
1.2 Motivation...4
1.3 Thesis Organization...5
Chapter 2 Related Works...7
2.1 Depth-Image-Based Rendering (DIBR)...7
2.1.1 3D warping...8
2.1.2 Boundary artifacts...11
2.1.3 Hole Filling...14
2.2 Inpainting...16
Chapter 3 Compute Unified Device Architecture (CUDA)...19
3.1 CUDA Programming Model...20
3.2 CUDA Hardware Model...23
Chapter 4 The Proposed Method...27
4.1 Algorithm Overview...28
4.2 Texture-based Interpolation Crack Filling...31
4.2.1 Finding small crack...32
4.2.2 Filtering small crack...33
4.3 Fast Disocclusion Inpainting...35
4.3.1 Analysis vector...36
4.3.2 Arrange priority...39
4.3.3 Search match block...43
4.4 Fast Hole Filling by Reference View...47
4.4.1 Hole Filling by One Reference View...48
4.4.2 Hole Filling by Two Reference Views...48
4.5 CUDA Acceleration...49
4.5.1 Depth map pre-processing...50
4.5.2 3D warping...50
4.5.3 Fast 3D warping...51
4.5.4 Texture-based interpolation...53
4.5.5 Fast disocclusion inpainting...53
4.5.6 Hole filling by reference view...55
4.6 Simulation Result...55
4.6.1 Coefficient selection and comparison...55
4.6.2 Synthesis result...59
Chapter 5 Conclusions and Future Work...65
5.1 Conclusions...65
5.2 Future Work...66
References...69
[1] L. Meesters, W. A. Ijsselsteijn, and P. J. H. Seuntiens, “A survey of perceptual evaluations and requirements of 3-D TV,” IEEE Trans. Circuits Syst. Video Technol., vol. 14, pp. 381-391, Mar. 2004.
[2] 3D film [Online]. Available: http://en.wikipedia.org/
[3] K. Muller, P. Merkle, and T. Wiegand, “3-D video representation using depth maps,” Proc. IEEE, vol. 99, pp. 643-656, Apr. 2011.
[4] P. Benzie, J. Watson, P. Surman, I. Rakkolainen, K. Hopf, H. Urey, V. Sainov, and C. von Kopylow, “A survey of 3DTV displays: techniques and technologies,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, pp. 1647-1658, Nov. 2007.
[5] M. Tanimoto, “Overview of free viewpoint television,” in Proc. IEEE ICME, July 2009, pp. 1552-1553.
[6] “Introduction to 3D Video,” ISO/IEC JTC1/SC29/WG11 N9784, May 2008.
[7] “Vision on 3D Video,” ISO/IEC JTC1/SC29/WG11 N10357, Feb. 2009.
[8] 3D video formats and coding standards [Online]. Available: http://3d-video.start.bg/link.php?id=620756
[9] “Auxiliary Video Data Representations,” ISO/IEC JTC1/SC29/WG11 N8039, Apr. 2006.
[10] D. Tian, P. Lai, P. Lopez, and C. Gomila, “View synthesis techniques for 3-D video,” in Proc. SPIE, Aug. 2009, pp. 74430T-74430T-11.
[11] Y. Mori, N. Fukushima, T. Yendo, T. Fujii, and M. Tanimoto, “View generation with 3D warping using depth information for FTV,” in Proc. IEEE 3DTV, Jan. 2008, pp. 65-72.
[12] Y. Zhao, C. Zhu, Z. Chen, D. Tian, and L. Yu, “Boundary artifact reduction in view synthesis of 3D video: from perspective of texture-depth alignment,” IEEE Trans. Broadcast., vol. 54, pp. 510-522, June 2011.
[13] P. Merkle, Y. Morvan, A. Smolic, D. Farin, K. Müller, P. H. N. de With, and T. Wiegand, “The effect of depth compression on multiview rendering quality,” in Proc. IEEE 3DTV, May 2008, pp. 245-248.
[14] L. Yu, S. Xiang, H. Deng, and P. Zhou, “Depth based view synthesis with artifacts removal for FTV,” in Proc. IEEE ICIG, Aug. 2011, pp. 506-510.
[15] C. Lee and Y. S. Ho, “Boundary filtering on synthesized views of 3D video,” in Proc. IEEE FGCNS, Dec. 2008, pp. 15-18.
[16] J. Lu, Q. Yang, and G. Lafruit, “Interpolation error as a quality metric for stereo: robust, or not?,” in Proc. IEEE ICASSP, Apr. 2009, pp. 977-980.
[17] L. Yang, T. Yendo, M. P. Tehrani, T. Fujii, and M. Tanimoto, “Artifact reduction using reliability reasoning for image generation of FTV,” J. Vis. Commun. Image Represent., vol. 21, pp. 542-560, July 2010.
[18] I. Y. Shin and Y. S. Ho, “GPU parallel programming for real-time stereoscopic video generation,” in Proc. IEEE ICEIC, July 2010, pp. 315-318.
[19] Y. K. Park, K. Jung, Y. Oh, S. Lee, J. K. Kim, G. Lee, H. Lee, K. Yun, N. Hur, and J. Kim, “Depth-image-based rendering for 3DTV service over T-DMB,” Signal Process.-Image Commun., vol. 24, pp. 122-136, Jan. 2009.
[20] Y. R. Horng, Y. C. Tseng, and T. S. Chang, “Stereoscopic images generation with directional Gaussian filter,” in Proc. IEEE ISCAS, June 2010, pp. 2650-2653.
[21] Z. W. Liu, P. An, S. X. Liu, and Z. Y. Zhang, “Arbitrary view generation based on DIBR,” in Proc. IEEE ISPACS, Nov. 2007, pp. 168-171.
[22] L. Zhang and W. J. Tam, “Stereoscopic image generation based on depth images for 3D TV,” IEEE Trans. Broadcast., vol. 51, pp. 191-199, June 2005.
[23] Y. M. Feng, D. X. Li, K. Luo, and M. Zhang, “Asymmetric bidirectional view synthesis for free viewpoint and three-dimensional video,” IEEE Trans. Consum. Electron., vol. 55, pp. 2349-2355, Nov. 2009.
[24] A. Criminisi, P. Pérez, and K. Toyama, “Region filling and object removal by exemplar-based image inpainting,” IEEE Trans. Image Process., vol. 13, pp. 1200-1212, Sep. 2004.
[25] F. H. Cheng, Y. W. Chang, and Y. S. Huang, “A hardware architecture for real-time stereoscopic image generation from depth map,” in Proc. IEEE ICMLC, July 2011, pp. 1622-1627.
[26] G. L. Wu, C. Y. Chen, and S. Y. Chien, “Algorithm and architecture design of image inpainting engine for video error concealment applications,” IEEE Trans. Circuits Syst. Video Technol., vol. 21, pp. 792-803, June 2011.
[27] K. Y. Chen, P. K. Tsung, P. C. Lin, H. J. Yang, and L. G. Chen, “Hybrid motion/depth-oriented inpainting for virtual view synthesis in multiview applications,” in Proc. IEEE 3DTV, June 2010, pp. 1-4.
[28] S. Choi, B. Ham, and K. Sohn, “Hole filling with random walks using occlusion constraints in view synthesis,” in Proc. IEEE ICIP, Sept. 2011, pp. 1965-1968.
[29] K. M. Chang, T. C. Lin, and Y. M. Huang, “Parallax-guided disocclusion inpainting for 3D view synthesis,” in Proc. IEEE ICCE, Jan. 2012, pp. 398-399.
[30] GPGPU [Online]. Available: http://en.wikipedia.org/
[31] NVIDIA GPU Computing Documentation [Online]. Available: http://developer.nvidia.com/nvidia-gpu-computing-documentation
[32] SIMD [Online]. Available: http://en.wikipedia.org/
[33] J. Sanders and E. Kandrot, CUDA by Example: An Introduction to General-Purpose GPU Programming. Boston, MA: Addison-Wesley, July 2010.
[34] C. L. Zitnick, S. B. Kang, M. Uyttendaele, S. Winder, and R. Szeliski, “High-quality video view interpolation using a layered representation,” ACM Trans. Graph., vol. 23, pp. 600-608, Aug. 2004.
[35] C. Vazquez, W. J. Tam, and F. Speranza, “Stereoscopic imaging: filling disoccluded areas in depth image-based rendering,” in Proc. SPIE, Oct. 2006, pp. 123-134.