臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record
Author: 施光祖
Author (English): Kuang-Tsu Shih
Title: 運用相機陣列之高解析度攝影與場景深度估測
Title (English): High-Resolution Imaging and Depth Acquisition Using a Camera Array
Advisor: 陳宏銘
Advisor (English): Homer H. Chen
Oral Defense Date: 2017-07-17
Degree: Ph.D.
Institution: 國立臺灣大學 (National Taiwan University)
Department: 電信工程學研究所 (Graduate Institute of Communication Engineering)
Discipline: Engineering
Field: Electrical and Information Engineering
Document Type: Academic thesis
Publication Year: 2017
Graduation Academic Year: 105
Language: English
Pages: 129
Keywords (Chinese): 計算攝影學、相機陣列、高解析度、深度估測
Keywords (English): computational photography, camera array, high resolution, depth estimation
Statistics:
  • Cited: 0
  • Views: 624
  • Rating:
  • Downloads: 0
  • Bookmarked: 0
Abstract:
With the cost of photography falling dramatically, we live in an age where anyone with a smartphone can be a photographer. For this very reason, researchers devote great effort and expense to pursuing higher image quality. Among the various metrics of image quality, resolution is perhaps the most representative and the most important. It is generally believed that pursuing higher resolution by optimizing optical design and hardware construction has reached a bottleneck, so attention has turned to computational methods for increasing image resolution. In this dissertation, we study computational high-resolution imaging techniques designed for multi-camera systems.
This dissertation consists of two parts. The first part analyzes existing high-resolution imaging methods, examining two classes in depth: subpixel refocusing and reconstruction-based light field super-resolution. For subpixel refocusing, we show by mathematical derivation that existing methods generally lack a deconvolution step, and that adding a deconvolution algorithm to the subpixel refocusing process effectively improves the sharpness of the output image. We also design experiments to study the effect of calibration error on subpixel refocusing and quantitatively analyze the maximum tolerable calibration error for a given image quality. For reconstruction-based light field super-resolution, we show experimentally that the resolution gain such methods can deliver does not grow without bound as the number of cameras increases; the upper bound on the gain is determined by the point spread function of the system. Our experiments further show that output resolution and registration accuracy cannot both be maximized, a fundamental limitation of this class of methods.
In contrast to the analysis in the first part, the second part proposes an original high-resolution computational photography system: a camera array composed of cameras with mixed focal lengths. Compared with conventional multi-camera systems, our camera array achieves very high pixel utilization and can also produce a scene depth map of the same high resolution. The system has two components: an optimized camera layout and an original image fusion algorithm. On the hardware side, we propose a method for optimizing the camera layout and, based on it, a design with non-parallel optical axes and non-uniform focal lengths. On the software side, our image fusion method integrates the low-resolution images captured by the cameras into a high-resolution image while avoiding the blur introduced by previous fusion methods.
Abstract (English):
In this age where everyone with a smartphone can be a photographer, the pursuit of higher imaging quality has become more important and profitable than ever before. Among image quality metrics, resolution is often the one people care about most. Optics optimization, the conventional approach to increasing image resolution, is believed to have reached a bottleneck. As a consequence, researchers are turning to computational photography to seek a breakthrough. In this dissertation, we study the computational approach to high-resolution imaging based on multi-aperture systems such as a camera array or a lenslet array.
This dissertation can be divided into two parts. The first part is dedicated to the analysis of existing approaches. In particular, two approaches are inspected in depth: subpixel refocusing and reconstruction-based light field super-resolution. For subpixel refocusing, we show that a deconvolution step is missing in previous work and that incorporating deconvolution in the loop significantly enhances the sharpness of the results. We also conduct experiments to quantitatively analyze the effect of calibration error on subpixel refocusing and to bound the error tolerable for a target image quality. For reconstruction-based light field super-resolution, we show through experiments that the resolution gain obtainable by super-resolution does not increase boundlessly with the number of cameras and is ultimately limited by the size of the point spread function. In addition, we show experimentally that there is a tradeoff between the obtainable resolution and the registration accuracy. This tradeoff is a fundamental limit of reconstruction-based approaches.
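To make the missing-deconvolution point concrete, here is a minimal sketch of the pipeline the abstract describes: shift-and-add refocusing followed by Wiener deconvolution with the system PSF. The function names, the integer-shift simplification, and the scalar SNR prior are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def shift_and_add_refocus(images, offsets, alpha):
    """Refocus by shifting each view by a disparity proportional to its
    baseline offset (du, dv), then averaging. Integer shifts are used for
    brevity; true subpixel refocusing would interpolate between pixels."""
    acc = np.zeros(images[0].shape, dtype=np.float64)
    for img, (du, dv) in zip(images, offsets):
        acc += np.roll(img, (round(alpha * dv), round(alpha * du)), axis=(0, 1))
    return acc / len(images)

def wiener_deconvolve(blurred, psf, snr=100.0):
    """The step argued to be missing: frequency-domain Wiener deconvolution,
    W = conj(H) / (|H|^2 + 1/SNR), using the system point spread function."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * G))
```

Without the second step, the averaged image retains the blur of the aggregate PSF; deconvolving with an estimate of that PSF restores sharpness, which is also why the PSF size and shape errors analyzed in Chapter 4 matter.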
In contrast to the analysis work in the first part, the second part of the dissertation describes our original solution: a computational photography system based on a camera array with mixed focal lengths. Our solution has two distinguishing features: it generates an output image whose resolution exceeds 80% of the total number of captured pixels, along with a disparity map of the same resolution that encodes the depth of the scene. Our solution consists of optimized hardware and an image fusion algorithm. On the hardware side, we propose an approach to optimizing the configuration of a camera array for high-resolution imaging using cameras with mixed focal lengths and non-parallel optical axes. On the software side, an algorithm is developed to integrate the low-resolution images captured by the proposed camera array into a high-resolution image without the blurry appearance produced by previous methods.
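As a back-of-the-envelope check on the pixel-utilization figure, consider a hypothetical mixed-focal-length layout (for intuition only; the dissertation's optimized configuration differs): one wide camera covering the full field of view plus four telephoto cameras of twice the focal length, each covering one quadrant with no overlap.

```python
# Hypothetical layout: one wide S x S sensor sees the whole field of view;
# four telephoto S x S sensors (2x focal length) each see one quadrant.
S = 1000
total_captured = 5 * S * S            # pixels captured by all five sensors
output_pixels = (2 * S) ** 2          # full FOV rendered at telephoto detail
utilization = output_pixels / total_captured
print(f"pixel utilization: {utilization:.0%}")  # -> 80%
```

Overlap between views, calibration margins, and the choice of reference view all reduce this number, which is why the camera configuration is treated as an optimization problem in Chapter 6.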
Contents
1 Introduction 1
1.1 Overview 3
1.2 Contributions 5
2 Light Field Basics 7
2.1 Light Field Representation 7
2.2 Refocusing 11
3 Literature Review 15
3.1 Light Field Imaging Systems 15
3.1.1 Lenslet-Based Camera 16
3.1.2 Camera Array 22
3.1.3 Coded Imaging 25
3.2 High-Resolution Rendering 28
3.2.1 Subpixel Refocusing 28
3.2.2 Light Field Super-Resolution 30
3.2.3 Super-Resolution Analysis 32
3.3 High-Resolution Imaging Systems 33
3.3.1 Giga Pixel Imaging System 33
3.3.2 Hybrid Imaging System 36
4 Subpixel Refocusing Analysis 37
4.1 Subpixel Refocusing 38
4.2 The Deconvolution Step 42
4.3 The Effect of Calibration Error 46
4.3.1 Camera Position Error 49
4.3.2 Camera Orientation Error 50
4.3.3 PSF Size Error 51
4.3.4 PSF Shape Error 52
4.4 Summary 53
5 Reconstruction-Based Light Field Super-Resolution 55
5.1 One-Dimensional Camera Array 56
5.2 Two-Dimensional Camera Array 66
5.2.1 Assumptions 67
5.2.2 Input Images 68
5.2.3 Super-Resolution 68
5.2.4 Experimental Results 70
5.2.5 Prefiltering Kernel and Image Registration 81
5.3 Summary 82
6 Camera Array with Mixed Focal Lengths 83
6.1 Camera Array Configuration Analysis 86
6.1.1 Completely Overlapping FOVs 87
6.1.2 Partial Overlapping FOV 87
6.1.3 Proposed Configuration 88
6.2 Technical Details 94
6.2.1 Hardware 94
6.2.2 Software 99
6.3 Experimental Results 101
6.3.1 Synthetic Scene 105
6.3.2 Real Scene 106
6.3.3 Comparison with Existing Method 110
6.4 Limitations 111
6.5 Summary 112
7 Conclusion 113
Bibliography 117
Publication and Honors 127