Bibliography
[1] A. Brückner, "Micro optical multi-aperture imaging systems," Ph.D. dissertation, Faculty of Physics and Astronomy, Friedrich Schiller Univ., Jena, Germany, 2012.
[2] A. Buades, Y. Lou, J.-M. Morel, and Z. Tang, "Multi image noise estimation and denoising," Tech. Rep. MAP5 2010–19, 2010.
[3] Apple Inc. (2017). Apple iPhone 7 [Online]. Available: http://www.apple.com/iphone-7
[4] A. Levin, R. Fergus, F. Durand, and W. T. Freeman, "Image and depth from a conventional camera with a coded aperture," ACM Trans. Graph., vol. 26, no. 3, pp. 70:1–70:9, 2007.
[5] A. Lumsdaine, L. Lin, J. Willcock, and Y. Zhou, "Fourier analysis of the focused plenoptic camera," SPIE Multimedia Content and Mobile Devices, pp. 1–14, 2013.
[6] A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, "Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing," ACM Trans. Graph., vol. 26, no. 3, pp. 69:1–69:12, 2007.
[7] A. Troccoli, S. B. Kang, and S. Seitz, "Multi-view multi-exposure stereo," 3rd Int. Symp. 3D Data Process. Visual. Transmission, pp. 861–868, Jun. 2006.
[8] B. Wilburn, "High performance imaging using arrays of inexpensive cameras," Ph.D. dissertation, Dept. Elect. Eng., Stanford Univ., 2004.
[9] B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, "High performance imaging using large camera arrays," ACM Trans. Graph., vol. 24, no. 3, pp. 765–776, 2005.
[10] B. Wilburn, N. Joshi, V. Vaish, M. Levoy, and M. Horowitz, "High-speed videography using a dense camera array," IEEE Conf. Computer Vision Pattern Recognition (CVPR), pp. 294–301, 2004.
[11] C. Buehler, M. Bosse, L. McMillan, S. Gortler, and M. Cohen, "Unstructured lumigraph rendering," ACM Annual Conf. Computer Graphics Interactive Techniques, pp. 425–432, 2001.
[12] C.-C. Chen, S.-C. Fan Chiang, X.-X. Huang, M.-S. Su, and Y.-C. Lu, "Depth estimation of light field data from pinhole-masked DSLR cameras," IEEE Int. Conf. Image Processing (ICIP), pp. 1769–1772, Sept. 2010.
[13] C. Liu, "Beyond pixels: Exploring new representations and applications for motion analysis," Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA, 2009.
[14] C. Kim, H. Zimmer, Y. Pritch, A. Sorkine-Hornung, and M. Gross, "Scene reconstruction from high spatio-angular resolution light fields," ACM Trans. Graph., vol. 32, no. 4, pp. 73:1–73:12, 2013.
[15] C.-K. Liang and R. Ramamoorthi, "A light transport framework for lenslet light field cameras," ACM Trans. Graph., vol. 34, no. 2, pp. 16:1–16:19, 2015.
[16] C.-K. Liang, T.-H. Lin, B.-Y. Wong, C. Liu, and H. H. Chen, "Programmable aperture photography: Multiplexed light field acquisition," ACM Trans. Graph., vol. 27, no. 3, pp. 55:1–55:10, 2008.
[17] C.-K. Liang, Y.-C. Shih, and H. H. Chen, "Light field analysis for modeling image formation," IEEE Trans. Image Process. (TIP), vol. 20, no. 2, pp. 446–460, 2011.
[18] C. Perwaß and L. Wietzke, "Single lens 3D-camera with extended depth-of-field," Proc. SPIE Human Vision and Electronic Imaging, vol. 17, pp. 1–15, 2012.
[19] C.-T. Huang, Y.-W. Wang, L.-R. Huang, J. Chin, and L.-G. Chen, "Fast physically correct refocusing for sparse light fields using block-based multi-rate view interpolation," IEEE Trans. Image Process. (TIP), vol. 26, no. 2, pp. 603–618, 2017.
[20] C. Zhang, L. Wang, and R. Yang, "Semantic segmentation of urban scenes using dense depth maps," European Conf. Computer Vision (ECCV), pp. 708–721, 2010.
[21] D. Capel and A. Zisserman, "Computer vision applied to super resolution," IEEE Signal Process. Mag., vol. 20, no. 3, pp. 75–86, 2003.
[22] D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, "Multiscale gigapixel photography," Nature, vol. 486, pp. 386–389, 2012.
[23] D. J. Brady and N. Hagen, "Multiscale lens design," Optics Express, vol. 17, no. 13, pp. 10659–10674, 2009.
[24] D. L. Donoho, "Compressed sensing," IEEE Trans. Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[25] D. Robinson and P. Milanfar, "Statistical performance analysis of super-resolution," IEEE Trans. Image Process. (TIP), vol. 15, no. 6, pp. 1413–1428, 2006.
[26] D. Scharstein, H. Hirschmüller, Y. Kitajima, G. Krathwohl, N. Nesic, X. Wang, and P. Westling, "High-resolution stereo datasets with subpixel-accurate ground truth," Proc. German Conf. Pattern Recognition (GCPR), pp. 1–12, 2014.
[27] E. J. Candès, J. K. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Commun. Pure Applied Mathematics, vol. 59, no. 8, pp. 1207–1223, 2006.
[28] E. J. Candès and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Process. Mag., vol. 25, no. 2, pp. 21–30, 2008.
[29] E. Luo, S. H. Chan, S. Pan, and T. Q. Nguyen, "Adaptive non-local means for multi-view image denoising: Searching for the right patches via a statistical approach," IEEE Int. Conf. Image Process. (ICIP), pp. 543–547, 2013.
[30] F. Bouzaraa, O. Urfalioglu, and G. Cordara, "Dual-exposure image registration for HDR processing," IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), pp. 1553–1557, 2015.
[31] F. Pérez, "Super-resolution in plenoptic cameras by the integration of depth from focus and stereo," Int. Conf. Computer Commun. Networks, pp. 1–6, 2010.
[32] F. Pérez and J. P. Lüke, "Simultaneous estimation of super-resolved depth and all-in-focus images from a plenoptic camera," Conf. 3DTV: The True Vision-Capture Transmission and Display of 3D Video, pp. 1–4, 2009.
[33] F. Pérez, A. Pérez, M. Rodríguez, and E. Magdaleno, "Fourier slice super-resolution in plenoptic cameras," IEEE Int. Conf. Computational Photography (ICCP), pp. 1–11, 2012.
[34] G. Wetzstein, I. Ihrke, D. Lanman, and W. Heidrich, "State of the art in computational plenoptic imaging," ACM EUROGRAPHICS, pp. 1–24, 2010.
[35] G. Wetzstein, I. Ihrke, D. Lanman, and W. Heidrich, "Computational plenoptic imaging," Computer Graph. Forum, vol. 30, no. 8, pp. 2397–2426, 2011.
[36] G. Zhang, J. Jia, and H. Bao, "Simultaneous multi-body stereo and segmentation," IEEE Int. Conf. Computer Vision (ICCV), pp. 826–833, 2011.
[37] H. Lin, C. Chen, S. B. Kang, and J. Yu, "Depth recovery from light field using focal stack symmetry," IEEE Int. Conf. Computer Vision (ICCV), pp. 3451–3459, 2015.
[38] J. Fiss, B. Curless, and R. Szeliski, "Refocusing plenoptic images using depth-adaptive splatting," IEEE Int. Conf. Computational Photography (ICCP), pp. 1–9, 2014.
[39] J. Fiss, B. Curless, and R. Szeliski, "Light field layer matting," IEEE Conf. Computer Vision Pattern Recognition (CVPR), pp. 623–631, 2015.
[40] J. Hu, O. Gallo, K. Pulli, and X. Sun, "HDR deghosting: How to deal with saturation?" IEEE Conf. Computer Vision Pattern Recognition (CVPR), pp. 1163–1170, 2013.
[41] J. Kopf, M. Uyttendaele, O. Deussen, and M. F. Cohen, "Capturing and viewing gigapixel images," ACM Trans. Graph., vol. 26, no. 3, pp. 93:1–93:10, 2007.
[42] J. Li and Z. N. Li, "Continuous depth map reconstruction from light fields," IEEE Int. Conf. Multimedia Expo (ICME), pp. 1–6, 2013.
[43] J. T. Barron, A. Adams, Y.-C. Shih, and C. Hernández, "Fast bilateral-space stereo for synthetic defocus," IEEE Conf. Computer Vision Pattern Recognition (CVPR), pp. 4466–4474, 2015.
[44] J. Tian and K.-K. Ma, "A survey on super-resolution imaging," Signal, Image and Video Process., vol. 5, no. 3, pp. 329–342, 2011.
[45] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), vol. 35, no. 6, pp. 1397–1409, 2013.
[46] K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, "Compressive light field photography using overcomplete dictionaries and optimized projections," ACM Trans. Graph., vol. 32, no. 4, pp. 46:1–46:12, 2013.
[47] K. Mitra and A. Veeraraghavan, "Light field denoising, light field superresolution and stereo camera based refocusing using a GMM light field patch prior," IEEE Conf. Computer Vision Pattern Recognition Workshops (CVPRW), pp. 1–7, 2012.
[48] K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, "PiCam: An ultra-thin high performance monolithic camera array," ACM Trans. Graph., vol. 32, no. 6, pp. 166:1–166:13, 2013.
[49] K. Vaidyanathan, J. Munkberg, P. Clarberg, and M. Salvi, "Layered light field reconstruction for defocus blur," ACM Trans. Graph., vol. 34, no. 2, pp. 23:1–23:12, 2015.
[50] L. C. Pickup, D. P. Capel, S. J. Roberts, and A. Zisserman, "Overcoming registration uncertainty in image super-resolution: Maximize or marginalize?" EURASIP J. Advances Signal Process., vol. 2007, no. 1, pp. 1–14, 2007.
[51] L. C. Pickup, D. P. Capel, S. J. Roberts, and A. Zisserman, "Bayesian image super-resolution, continued," Advances Neural Inform. Process. Systems (NIPS), pp. 1089–1096, MIT Press, Cambridge, 2006.
[52] L. C. Pickup, D. P. Capel, S. J. Roberts, and A. Zisserman, "Bayesian methods for image super-resolution," The Computer Journal, vol. 52, no. 1, pp. 101–113, 2007.
[53] Light Inc. (2017). Light L16 Camera [Online]. Available: https://light.co/camera
[54] L. Peng and L. Dijun, "All-in-focus image reconstruction based on plenoptic cameras," IEEE Int. Conf. Image and Graphics (ICIG), pp. 612–617, 2013.
[55] Lytro Inc. (2017). Lytro Imaging [Online]. Available: https://www.lytro.com/imaging
[56] L. Zhang, S. Vaddadi, H. Jin, and S. K. Nayar, "Multiple view image denoising," IEEE Conf. Computer Vision Pattern Recognition (CVPR), pp. 1542–1549, 2009.
[57] M. D. Grossberg and S. K. Nayar, "Determining the camera response from images: What is knowable?" IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), vol. 25, no. 11, pp. 1455–1467, 2003.
[58] M. Hog, N. Sabater, B. Vandame, and V. Drazic, "An image rendering pipeline for focused plenoptic cameras," IEEE Trans. Computational Imaging, vol. 3, pp. 1–11, 2017.
[59] M. Levoy and P. Hanrahan, "Light field rendering," ACM Annual Conf. Computer Graph. Interactive Techniques, pp. 31–42, 1996.
[60] M. N. Do, D. Marchand-Maillet, and M. Vetterli, "On the bandwidth of the plenoptic function," IEEE Trans. Image Process., vol. 21, no. 2, pp. 708–717, 2012.
[61] M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, "Depth from combining defocus and correspondence using light-field cameras," IEEE Int. Conf. Computer Vision (ICCV), pp. 673–680, 2013.
[62] M. W. Tao, T.-C. Wang, J. Malik, and R. Ramamoorthi, "Depth estimation for glossy surfaces with light-field cameras," European Conf. Computer Vision Workshop (ECCVW), pp. 1–14, 2014.
[63] N. Sabater, M. Seifi, V. Drazic, G. Sandri, and P. Pérez, "Accurate disparity estimation for plenoptic images," European Conf. Computer Vision Workshop (ECCVW), pp. 548–560, 2014.
[64] N. Sun, H. Mansour, and R. Ward, "HDR image construction from multi-exposed stereo LDR images," IEEE Int. Conf. Image Process. (ICIP), pp. 2973–2976, 2010.
[65] O. S. Cossairt, D. Miau, and S. K. Nayar, "Gigapixel computational imaging," IEEE Int. Conf. Computational Photography (ICCP), pp. 1–8, 2011.
[66] P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," ACM SIGGRAPH, pp. 369–378, 1997.
[67] POV-Ray. (2017). POV-Ray [Online]. Available: http://www.povray.org/
[68] Q. Shan, J. Jia, and A. Agarwala, "High-quality motion deblurring from a single image," ACM Trans. Graph., vol. 27, no. 3, pp. 73:1–73:10, 2008.
[69] Raytrix. (2017). Raytrix [Online]. Available: https://www.raytrix.de/produkte/#r29series
[70] R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, "Three-dimensional information acquisition using a compound imaging system," Optical Review, vol. 14, no. 5, pp. 347–350, 2007.
[71] R. Horisaki, K. Choi, J. Hahn, J. Tanida, and D. J. Brady, "Generalized sampling using a compound-eye imaging system for multi-dimensional object acquisition," Optics Express, vol. 18, no. 18, pp. 19367–19378, 2010.
[72] R. Horisaki, Y. Nakao, T. Toyoda, K. Kagawa, Y. Masaki, and J. Tanida, "A compound-eye imaging system with irregular lens-array arrangement," Proc. SPIE 7072 Optics Photonics Inform. Process., vol. 2, pp. 1–9, 2008.
[73] R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, "Light field photography with a hand-held plenoptic camera," Computer Science Technical Report (CSTR), 2005.
[74] R. Ng, "Fourier slice photography," ACM Trans. Graph., vol. 24, no. 3, pp. 735–744, 2005.
[75] R. Ng, "Digital light field photography," Ph.D. dissertation, Stanford University, 2006.
[76] S. A. Shroff and K. Berkner, "Image formation analysis and high resolution image reconstruction for plenoptic imaging systems," Applied Optics, vol. 52, no. 10, pp. D22–D31, 2013.
[77] S. Baker and T. Kanade, "Limits on super-resolution and how to break them," IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), vol. 24, no. 9, pp. 1167–1183, 2002.
[78] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[79] S. C. Park, M. K. Park, and M. G. Kang, "Super-resolution image reconstruction: A technical overview," IEEE Signal Process. Mag., vol. 20, no. 3, pp. 21–36, 2003.
[80] S. Farsiu, D. Robinson, M. Elad, and P. Milanfar, "Advances and challenges in super-resolution," Int. J. Imaging Systems and Technology, vol. 14, no. 2, pp. 47–57, 2004.
[81] S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, "Fast and robust multi-frame super resolution," IEEE Trans. Image Process. (TIP), vol. 13, no. 10, pp. 1327–1344, 2004.
[82] S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, "The lumigraph," ACM Annual Conf. Computer Graph. Interactive Techniques, pp. 43–54, 1996.
[83] S. Kim, B. Ham, B. Kim, and K. Sohn, "Mahalanobis distance cross-correlation for illumination-invariant stereo matching," IEEE Trans. Circuits Syst. Video Technol. (TCSVT), vol. 24, no. 11, pp. 1844–1859, 2014.
[84] Stanford Graphics Laboratory. (2008). The Stanford Light Field Archive [Online]. Available: http://lightfield.stanford.edu/lfs.html
[85] S. Wanner and B. Goldluecke, "Variational light field analysis for disparity estimation and super-resolution," IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), vol. 36, no. 3, pp. 606–619, 2014.
[86] S. Wanner, S. Meister, and B. Goldluecke, "Datasets and benchmarks for densely sampled 4D light fields," Int. Symp. Vision Modeling Visualization, pp. 1–8, 2013.
[87] T. Buades, Y. Lou, J.-M. Morel, and Z. Tang, "A note on multi-image denoising," IEEE Int. Workshop Local and Non-Local Approximation Image Process. (LNLA), pp. 1–15, 2009.
[88] T.-C. Wang, A. A. Efros, and R. Ramamoorthi, "Occlusion-aware depth estimation using light-field cameras," IEEE Int. Conf. Computer Vision (ICCV), pp. 3487–3495, 2015.
[89] T. E. Bishop, S. Zanetti, and P. Favaro, "Light field superresolution," IEEE Int. Conf. Computational Photography (ICCP), pp. 1–9, 2009.
[90] T. E. Bishop and P. Favaro, "The light field camera: Extended depth of field, aliasing, and superresolution," IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), vol. 34, no. 5, pp. 972–986, 2012.
[91] T. E. Bishop and P. Favaro, "Plenoptic depth estimation from multiple aliased views," IEEE Int. Conf. Computer Vision Workshops (ICCVW), pp. 1622–1629, 2009.
[92] T. E. Bishop and P. Favaro, "Full-resolution depth map estimation from an aliased plenoptic light field," Asian Conf. Computer Vision (ACCV), pp. 186–200, 2010.
[93] T. Georgiev. (2017). Todor Georgiev [Online]. Available: http://www.tgeorgiev.net/
[94] T. Georgiev, G. Chunev, and A. Lumsdaine, "Super-resolution with the focused plenoptic camera," SPIE Computational Imaging, vol. 7873, pp. 1–13, 2011.
[95] T. Georgiev, A. Lumsdaine, and G. Chunev, "Using focused plenoptic cameras for rich image capture," Computer Graph. Applications, pp. 62–73, 2011.
[96] T. Richter and A. Kaup, "Multiview super-resolution using high-frequency synthesis in case of low-framerate depth information," Visual Commun. Image Process., pp. 1–6, 2012.
[97] T. Richter, J. Seiler, W. Schnurrer, and A. Kaup, "Robust super-resolution in a multi-view setup based on refined high-frequency synthesis," IEEE Int. Workshop Multimedia Signal Process. (MMSP), pp. 7–12, 2012.
[98] T. Richter, J. Seiler, W. Schnurrer, and A. Kaup, "Robust super-resolution for mixed-resolution multi-view image plus depth data," IEEE Trans. Circuits Systems Video Technology (TCSVT), vol. 26, no. 5, pp. 814–828, 2016.
[99] V. Boominathan, K. Mitra, and A. Veeraraghavan, "Improving resolution and depth-of-field of light field cameras using a hybrid imaging system," IEEE Int. Conf. Computational Photography (ICCP), pp. 1–10, 2014.
[100] W.-S. Chan, E. Y. Lam, M. K. Ng, and G. Y. Mak, "Super-resolution reconstruction in a computational compound-eye imaging system," Multidimensional Systems Signal Process., vol. 18, no. 2, pp. 83–101, 2007.
[101] Y. Kitamura, R. Shogenji, K. Yamada, S. Miyatake, M. Miyamoto, T. Morimoto, Y. Masaki, N. Kondou, D. Miyazaki, J. Tanida, and Y. Ichioka, "Reconstruction of a high-resolution image on a compound-eye image-capturing system," Applied Optics, vol. 43, no. 8, pp. 1719–1727, 2004.
[102] Y. S. Heo, K. M. Lee, and S. U. Lee, "Robust stereo matching using adaptive normalized cross-correlation," IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), vol. 33, no. 4, pp. 807–822, Apr. 2011.
[103] Y. Yoon, H.-G. Jeon, D. Yoo, J.-Y. Lee, and I. S. Kweon, "Learning a deep convolutional network for light-field image super-resolution," IEEE Int. Conf. Computer Vision Workshop (ICCVW), pp. 57–65, 2015.
[104] Z. Li, H. Baker, and R. Bajcsy, "Joint image denoising using light-field data," IEEE Int. Conf. Multimedia Expo Workshops (ICMEW), pp. 1–6, 2013.
[105] Z. Lin and H.-Y. Shum, "On the number of samples needed in light field rendering with constant-depth assumption," IEEE Conf. Computer Vision Pattern Recognition (CVPR), pp. 588–595, 2000.
[106] Z. Lin and H.-Y. Shum, "Fundamental limits of reconstruction-based super-resolution algorithms under local translation," IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), vol. 26, no. 1, pp. 83–97, 2004.