
National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 楊敦价
Author (English): Tun-Chieh Yang
Thesis Title: 應用邊緣導向補洞之多維視訊深度圖像生成演算法及其VLSI實現
Thesis Title (English): Depth Image-based Rendering with Edge-oriented Hole Filling for Multiview Synthesis and Its VLSI Implementation
Advisors: 劉濱達、楊家輝
Advisors (English): Bin-da Liu
Degree: Master's
Institution: National Cheng Kung University
Department: Department of Electrical Engineering (MS/PhD Program)
Discipline: Engineering
Academic Field: Electrical Engineering and Computer Science
Document Type: Academic thesis
Year of Publication: 2012
Graduation Academic Year: 100 (ROC calendar)
Language: English
Number of Pages: 75
Keywords (Chinese): 深度圖像生成法、三維立體電視、補洞法、深度圖無平滑化
Keywords (English): DIBR, 3D TV, hole filling, non-smoothing depth
Usage statistics:
  • Times cited: 0
  • Views: 204
  • Downloads: 7
Abstract (Chinese):
This thesis proposes an edge-oriented hole filling algorithm for synthesizing virtual views by depth image-based rendering. The algorithm abandons the traditional approach of smoothing the depth map and instead emphasizes the hole filling step. The proposed object boundary detection method detects and removes blurred regions along object boundaries, improving the correctness of the information available for hole filling; as this information becomes more accurate, the quality of the synthesized image improves as well. The filling method and direction are chosen by classifying each hole and examining the edge information around it. Because the depth map is not smoothed, the synthesized virtual view exhibits no geometric distortion, and the algorithm increases hole filling accuracy.
For the hardware architecture, the proposed method requires 7,567 logic gates in total, and the maximum operating frequency of the system is 100 MHz. The design can synthesize 720p HD (1280 × 720) virtual views in real time and also supports side-by-side 1080p (1920 × 1080) frames. Simulation results show that the method improves the PSNR by 1 dB to 4 dB, with SSIM values closest to 1.
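
As a rough illustration of the pipeline described in the abstract (horizontal 3D warping from a non-smoothed depth map, followed by hole filling), the following Python/NumPy sketch forward-warps a color image and fills the resulting disocclusion holes row by row from their neighboring pixels. It is only a simplified sketch: the neighbor-based fill stands in for the thesis's edge-oriented method, whose hole classification and direction selection are not reproduced here, and all function names and parameters (e.g. max_disparity) are illustrative assumptions rather than the actual design.

import numpy as np

def warp_view(color, depth, max_disparity=16):
    """Forward-warp a color image to a virtual view using an 8-bit depth map.

    Purely horizontal shift; nearer pixels (larger depth value) win via z-buffering.
    """
    h, w, _ = color.shape
    virtual = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)
    z_buffer = np.full((h, w), -1, dtype=np.int32)
    disparity = (depth.astype(np.int32) * max_disparity) // 255
    for y in range(h):
        for x in range(w):
            xv = x + disparity[y, x]
            if 0 <= xv < w and depth[y, x] > z_buffer[y, xv]:
                virtual[y, xv] = color[y, x]
                z_buffer[y, xv] = depth[y, x]
                filled[y, xv] = True
    return virtual, filled

def fill_holes(virtual, filled):
    """Fill disocclusion holes row by row from the pixels bordering each hole."""
    h, w, _ = virtual.shape
    out = virtual.copy()
    for y in range(h):
        x = 0
        while x < w:
            if filled[y, x]:
                x += 1
                continue
            start = x
            while x < w and not filled[y, x]:
                x += 1
            left = out[y, start - 1] if start > 0 else None
            right = out[y, x] if x < w else None
            if left is not None and right is not None:
                fill = ((left.astype(np.int32) + right.astype(np.int32)) // 2).astype(out.dtype)
            elif left is not None:
                fill = left
            elif right is not None:
                fill = right
            else:
                fill = 0  # an entirely empty row; nothing to copy from
            out[y, start:x] = fill
    return out

# Usage (hypothetical inputs): a reference color image and its depth map
# virtual, filled = warp_view(color_img, depth_map)
# synthesized = fill_holes(virtual, filled)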

Abstract (English):
In this thesis, an edge-oriented hole filling algorithm for depth image-based rendering (DIBR) is proposed, discarding the traditional approach of smoothing the depth map. An object boundary detection method is used to detect and remove blurred pixels near object boundaries in the color image. With the blurred pixels removed, hole filling becomes more reliable, so the quality of the synthesized virtual view increases. The holes left after 3D warping are filled by the proposed edge-oriented hole filling algorithm to produce natural and smooth view syntheses. The non-smoothed depth map avoids geometric distortion, while the proposed hole filling method increases hole filling accuracy.
Synthesis results show that the proposed DIBR system requires 7.56k gates, and the proposed architecture reaches an operating frequency of 100 MHz. The maximum frame size is 720p HD (1280 × 720), or side-by-side FHD (1920 × 1080), at 30 frames/s. Simulation results show that the proposed methods improve the quality of the synthesized virtual view: the PSNR increases by 1 dB to 4 dB, while the SSIM remains close to 1.
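
As a small companion to the reported results, the sketch below computes the two quality metrics cited above, PSNR and SSIM, for a synthesized view against a reference view. It is an illustrative evaluation helper, not the thesis's own code; SSIM is taken from scikit-image's structural_similarity, an assumed third-party choice, and the input images are placeholders.

import numpy as np
from skimage.metrics import structural_similarity  # assumes scikit-image >= 0.19

def psnr(reference, synthesized, peak=255.0):
    """Peak signal-to-noise ratio in dB between two uint8 images of equal size."""
    mse = np.mean((reference.astype(np.float64) - synthesized.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak * peak / mse)

def evaluate(reference, synthesized):
    """Return (PSNR, SSIM) for a pair of color images."""
    score_psnr = psnr(reference, synthesized)
    score_ssim = structural_similarity(reference, synthesized,
                                       channel_axis=-1, data_range=255)
    return score_psnr, score_ssim

# Usage (hypothetical): compare the synthesized virtual view to the ground truth
# p, s = evaluate(ground_truth_view, synthesized_view)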

Abstract (Chinese) i
Abstract (English) iii
Acknowledgement v
Table of Contents vii
List of Figures ix
List of Tables xiii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Organization of the Thesis 2
Chapter 2 Overview of Depth Image-based Rendering System 3
2.1 Basic Concepts of Depth Image-based Rendering System 3
2.1.1 3D warping 5
2.1.2 Hole filling 10
2.1.3 Depth map preprocessing 11
2.2 Smooth-depth-based Depth Image-based Rendering 12
2.2.1 Symmetric and asymmetric Gaussian smoothing filter 13
2.2.2 Parallax-map-based DIBR 15
2.2.3 Adaptive edge-oriented smoothing filter 15
Chapter 3 The Proposed Non-smoothing DIBR System 19
3.1 Overview of Proposed DIBR System 19
3.2 Proposed VS_1 22
3.2.1 Proposed object boundary detection 22
3.2.2 3D warping 25
3.2.3 The proposed edge-oriented hole filling method 27
3.3 Proposed VS_2 37
3.4 Hardware Design of Proposed DIBR System 40
3.4.1 Architecture design of VS_1 40
3.4.2 Architecture design of VS_2 44
Chapter 4 Simulation Results and Comparison 47
4.1 Simulation Results 47
4.2 Verification 64
Chapter 5 Conclusions and Future Work 67
5.1 Conclusions 67
5.2 Future Work 68
References 71
Biography 75
