National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 王韋翔
Author (English): Wei-Shiang Wang
Title: 應用於改良DIBR系統的時空一致性補洞演算法
Title (English): A Consistent Spatio-Temporal Hole Filling Algorithm for the Improvement of DIBR Systems
Advisor: 楊家輝
Advisor (English): Jar-Ferr Yang
Degree: Master's
Institution: National Cheng Kung University
Department: Institute of Computer and Communication Engineering
Discipline: Engineering
Field: Electrical and Information Engineering
Document type: Academic thesis
Year of publication: 2013
Graduation academic year: 101 (2012–2013)
Language: English
Pages: 82
Keywords (Chinese): 二維至三維影像轉換; 基於深度影像繪圖法; 時域空洞填補; 運動向量; 虛擬影像
Keywords (English): 2D-to-3D conversion; depth-image-based rendering (DIBR); spatio-temporal hole-filling; motion vector; virtual image
Abstract (translated from Chinese):
With the wave of 3D movies in recent years and the growing availability of 3D stereoscopic displays, the demand for stereoscopic video keeps increasing. Because 3D content remains scarce, how to generate 3D video from the vast body of existing 2D video has become an important problem. With the development of autostereoscopic displays, the number of virtual views to be synthesized also grows, making stereoscopic view generation even more difficult.
To improve virtual-image generation by depth-image-based rendering (DIBR), this thesis proposes a consistent spatio-temporal hole filling algorithm based on motion vectors. The algorithm abandons depth-map smoothing in order to reduce distortion and computation time; instead, it uses motion vectors to locate the same object in neighboring frames as a reference and retrieves the image information occluded by the foreground to fill the holes produced by DIBR. The algorithm handles hole filling over complex backgrounds effectively, avoids rubber-sheet artifacts and geometric distortion, and effectively improves the quality of the virtual images.
Abstract (English):
In recent years, with the wave of 3D movies and the growing popularity of 3D stereoscopic displays, the demand for stereoscopic video has increased rapidly. To overcome the shortage of 3D content, a very important issue is how to convert the huge body of existing 2D video into 3D. For multiview autostereoscopic displays, the number of virtual views to be generated also increases, which makes 3D view generation even harder.
In this thesis, a consistent spatio-temporal hole filling algorithm based on motion information is proposed to improve virtual images generated by the depth-image-based rendering (DIBR) algorithm. We abandon depth-map smoothing in order to reduce distortion and computation time; instead, we use motion vectors to find the same object in neighboring frames as a reference and retrieve the image information occluded by the foreground to fill the disocclusions produced by DIBR. The proposed algorithm handles hole filling over complex backgrounds effectively. Simulations show that it avoids most rubber-sheet artifacts and geometric distortions and effectively improves the quality of the generated virtual images.
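The pipeline the abstract describes can be sketched in three steps: warp the color image with its depth map, detect the disocclusion holes, then fill them from a motion-compensated neighboring frame. The following is a minimal illustration under stated assumptions, not the thesis's implementation: it uses a grayscale frame, a purely horizontal disparity proportional to a normalized depth map, and a single known motion vector standing in for per-block motion estimation; `dibr_warp` and `temporal_fill` are hypothetical names.

```python
import numpy as np

def dibr_warp(color, depth, max_disp=2):
    """Shift each pixel horizontally in proportion to its normalized
    depth (1.0 = nearest), a simplified stand-in for 3D image warping.
    Returns the virtual view and a boolean mask of disocclusion holes."""
    h, w = color.shape
    virtual = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)
    for d in np.unique(depth):              # ascending: far layers first,
        ys, xs = np.nonzero(depth == d)     # so near layers win visibility
        tx = xs + int(round(float(d) * max_disp))
        ok = tx < w
        virtual[ys[ok], tx[ok]] = color[ys[ok], xs[ok]]
        filled[ys[ok], tx[ok]] = True
    return virtual, ~filled

def temporal_fill(virtual, holes, ref_frame, motion):
    """Fill holes with pixels fetched from a neighboring frame, displaced
    by a single (dy, dx) motion vector; the thesis estimates motion per
    block, but one known vector keeps the sketch short."""
    out, remaining = virtual.copy(), holes.copy()
    h, w = holes.shape
    ys, xs = np.nonzero(holes)
    ry, rx = ys + motion[0], xs + motion[1]
    ok = (ry >= 0) & (ry < h) & (rx >= 0) & (rx < w)
    out[ys[ok], xs[ok]] = ref_frame[ry[ok], rx[ok]]
    remaining[ys[ok], xs[ok]] = False
    return out, remaining

# A 4x8 grayscale frame: background value 10, a near object (depth 1.0)
# occupying columns 2-3.
color = np.full((4, 8), 10, dtype=np.uint8)
color[:, 2:4] = 200
depth = np.zeros((4, 8))
depth[:, 2:4] = 1.0

virtual, holes = dibr_warp(color, depth)   # object shifts right by 2 pixels
# In a neighboring frame the object has moved away, exposing the background
# at the hole locations.
ref = np.full((4, 8), 10, dtype=np.uint8)
out, remaining = temporal_fill(virtual, holes, ref, (0, 0))
```

Processing depth layers far-to-near resolves visibility competition in the warp, and the target pixels that no source pixel reaches form exactly the disocclusions that the temporal step then fills from the reference frame.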
ABSTRACT (CHINESE)
ABSTRACT
ACKNOWLEDGEMENTS
CONTENTS
LIST OF TABLES
LIST OF FIGURES
CHAPTER 1 INTRODUCTION
1.1 RESEARCH BACKGROUND
1.1.1 Principle of Depth Perception
1.1.2 Stereo and 3D Display Technologies
1.1.3 Method of Generating Stereoscopic Contents
1.2 MOTIVATION AND PURPOSE OF THE RESEARCH
CHAPTER 2 FUNDAMENTALS
2.1 DEPTH IMAGE-BASED RENDERING (DIBR) ALGORITHM
2.1.1 Depth Map
2.1.2 Preprocessing
2.1.3 3D Image Warping
2.1.4 Hole-Filling
2.2 CHALLENGES OF THE DIBR ALGORITHM
2.2.1 Visibility Competition
2.2.2 Resampling
2.2.3 Ghost Contour
2.2.4 Geometric Distortion
2.2.5 Artifacts
2.2.6 Temporal Consistency
CHAPTER 3 CONSISTENT SPATIO-TEMPORAL HOLE-FILLING ALGORITHMS
3.1 OVERVIEW
3.2 MODIFIED WARPING ALGORITHM
3.2.1 3D Image Warping
3.2.2 Color Image Preprocessing
3.2.3 Depth Warping and Filling
3.3 WEIGHT MAP ESTIMATION
3.4 REFERENCE FRAME SELECTION
3.5 BLOCK DIVISION
3.6 CORRESPONDING BLOCK SEARCH
3.7 THE PROPOSED HOLE FILLING
3.7.1 Color and Reference Point Matching
3.7.2 Hole-Filling by Reference Frame
3.7.3 Remaining Hole-Filling
3.8 TEMPORAL CONSISTENCY IN THE PROPOSED SYSTEM
CHAPTER 4 EXPERIMENTAL RESULTS
4.1 SUBJECTIVE COMPARISON
4.2 OBJECTIVE COMPARISON
CHAPTER 5 CONCLUSION
REFERENCES