Local Approach for Disparity Estimation

[1] S. Birchfield and C. Tomasi, “A pixel dissimilarity measure that is insensitive to image sampling,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 20, no. 4, pp. 401-406, Apr. 1998.
[2] H. Hirschmuller and D. Scharstein, “Evaluation of cost functions for stereo matching,” in Proc. IEEE Conf. on Comput. Vision and Pattern Recognition (CVPR’07), Jun. 2007.
[3] N. Y.-C. Chang, Y.-C. Tseng, and T.-S. Chang, “Analysis of color space and similarity measure impact on stereo block matching,” in Proc. IEEE Asia Pacific Conf. on Circuits and Syst. (APCCAS’08), Dec. 2008, pp. 926-929.
[4] J. Lu, G. Lafruit, and F. Catthoor, “Anisotropic local high-confidence voting for accurate stereo correspondence,” in Proc. SPIE Image Process.: Algorithms and Syst. VI, vol. 6812, Jan. 2008.
[5] K. Zhang, J. Lu, and G. Lafruit, “Scalable stereo matching with locally adaptive polygon approximation,” in Proc. IEEE Int. Conf. on Image Process. (ICIP’08), Oct. 2008, pp. 313-316.
[6] K. Zhang, J. Lu, and G. Lafruit, “Cross-based local stereo matching using orthogonal integral images,” IEEE Trans. Circuits Syst. Video Technol. (TCSVT), vol. 19, no. 7, pp. 1073-1079, Jul. 2009.
[7] K.-J. Yoon and I.-S. Kweon, “Adaptive support-weight approach for correspondence search,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 28, no. 4, pp. 650-656, Apr. 2006.
[8] M.-H. Ju and H.-B. Kang, “Constant time stereo matching,” in Proc. Int. Conf. on Machine Vision and Image Process. (IMVIP’09), Sep. 2009, pp. 13-17.
[9] W. Yu, T. Chen, F. Franchetti, and J. C. Hoe, “High performance stereo vision designed for massively data parallel platforms,” IEEE Trans. Circuits Syst. Video Technol. (TCSVT), vol. 20, no. 11, pp. 1509-1519, Nov. 2010.
[10] N. Y.-C. Chang, T.-H. Tsai, B.-H. Hsu, Y.-C. Chen, and T.-S. Chang, “Algorithm and architecture of disparity estimation with mini-census adaptive support weight,” IEEE Trans. Circuits Syst. Video Technol. (TCSVT), vol. 20, no. 6, pp. 792-805, Jun. 2010.
Dynamic Programming

[11] Y. Ohta and T. Kanade, “Stereo by intra- and inter-scanline search using dynamic programming,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 7, no. 2, pp. 139-154, Mar. 1985.
[12] O. Veksler, “Stereo correspondence by dynamic programming on a tree,” in Proc. IEEE Conf. on Comput. Vision and Pattern Recognition (CVPR’05), 2005, pp. 384-390.
[13] Y. Deng and X. Lin, “A fast line segment based dense stereo algorithm using tree dynamic programming,” in Proc. European Conf. on Comput. Vision (ECCV’06), 2006, pp. 201-210.
[14] C. Lei, J. Selzer, and Y.-H. Yang, “Region-tree based stereo using dynamic programming optimization,” in Proc. IEEE Conf. on Comput. Vision and Pattern Recognition (CVPR’06), vol. 2, 2006, pp. 2378-2385.

Graph-cut

[15] V. Kolmogorov and R. Zabih, “Computing visual correspondence with occlusions using graph cuts,” in Proc. IEEE Int. Conf. on Comput. Vision (ICCV’01), vol. 2, Jul. 2001, pp. 508-515.
[16] L. Ford and D. Fulkerson, Flows in Networks, Princeton, NJ: Princeton Univ. Press, 1962.
[17] A. V. Goldberg and R. E. Tarjan, “A new approach to the maximum flow problem,” J. of the ACM, vol. 35, pp. 921-940, 1988.
[18] Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 23, no. 11, pp. 1222-1239, Nov. 2001.
[19] Y. Boykov and V. Kolmogorov, “An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 26, no. 9, pp. 1124-1137, Sep. 2004.
[20] C.-W. Chou, J.-J. Tsai, H.-M. Hang, and H.-C. Lin, “A fast graph cut algorithm for disparity estimation,” in Proc. Picture Coding Symp. (PCS’10), Nagoya, Japan, Dec. 2010, pp. 326-329.
[21] B. V. Cherkassky and A. V. Goldberg, “On implementing the push-relabel method for the maximum flow problem,” Algorithmica, vol. 19, pp. 390-410, 1997.
[22] A. Delong and Y. Boykov, “A scalable graph-cut algorithm for N-D grids,” in Proc. IEEE Conf. on Comput. Vision and Pattern Recognition (CVPR’08), Jun. 2008.
[23] N. Y.-C. Chang and T.-S. Chang, “A scalable graph-cut engine architecture for real-time vision,” in Proc. VLSI Design/CAD Symp., Hualien, Taiwan, 2007.

Belief Propagation

[24] J. Sun, N.-N. Zheng, and H.-Y. Shum, “Stereo matching using belief propagation,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 25, no. 7, pp. 787-800, Jul. 2003.
[25] P. F. Felzenszwalb and D. P. Huttenlocher, “Efficient belief propagation for early vision,” Int. J. Comput. Vision (IJCV), vol. 70, no. 1, pp. 41-54, May 2006.
[26] R. Szeliski, R. Zabih, D. Scharstein, O. Veksler, V. Kolmogorov, A. Agarwala, M. Tappen, and C. Rother, “A comparative study of energy minimization methods for Markov random fields with smoothness-based priors,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 30, no. 6, pp. 1068-1080, Jun. 2008.
[27] Q. Yang, L. Wang, R. Yang, S. Wang, M. Liao, and D. Nister, “Real-time global stereo matching using hierarchical belief propagation,” in Proc. British Mach. Vision Conf. (BMVC’06), 2006.
[28] S. Park, C. Chen, and H. Jeong, “VLSI architecture for MRF based stereo matching,” in Proc. Int. Symp. on Syst., Architectures, Modeling and Simulation (SAMOS’07), Greece, Jul. 2007.
[29] C.-C. Cheng, C.-K. Liang, Y.-C. Lai, H. H. Chen, and L.-G. Chen, “Analysis of belief propagation for hardware realization,” in Proc. IEEE Workshop on Signal Process. Syst. (SiPS’08), Washington DC, USA, Oct. 2008, pp. 152-157.
[30] C.-C. Cheng, C.-K. Liang, Y.-C. Lai, H. H. Chen, and L.-G. Chen, “Fast belief propagation process element for high-quality stereo estimation,” in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Process. (ICASSP’09), Taipei, Taiwan, Apr. 2009, pp. 745-748.
[31] C.-K. Liang, C.-C. Cheng, Y.-C. Lai, L.-G. Chen, and H. H. Chen, “Hardware-efficient belief propagation,” in Proc. IEEE Conf. on Comput. Vision and Pattern Recognition (CVPR’09), Florida, USA, Jun. 2009, pp. 80-87.
[32] C.-C. Cheng, C.-T. Li, C.-K. Liang, Y.-C. Lai, and L.-G. Chen, “Architecture design of stereo matching using belief propagation,” in Proc. IEEE Int. Symp. Circuits and Syst. (ISCAS’10), Jun. 2010, pp. 4109-4112.
[33] C.-K. Liang, C.-C. Cheng, Y.-C. Lai, L.-G. Chen, and H. H. Chen, “Hardware-efficient belief propagation,” IEEE Trans. Circuits Syst. Video Technol. (TCSVT), vol. 21, no. 5, pp. 525-537, May 2011.
[34] S. C. Park and H. Jeong, “Memory-efficient iterative process for two-dimensional first-order regular graph,” Optics Letters, vol. 33, no. 1, pp. 74-76, Jan. 2008.
[35] T. Yu, R.-S. Lin, B. Super, and B. Tang, “Efficient message representation for belief propagation,” in Proc. IEEE Int. Conf. on Comput. Vision (ICCV’07), Oct. 2007.
[36] Y.-C. Tseng, N. Y.-C. Chang, and T.-S. Chang, “Low memory cost block-based belief propagation for stereo correspondence,” in Proc. IEEE Int. Conf. on Multimedia and Expo (ICME’07), Beijing, China, Jul. 2007, pp. 1415-1418.
[37] M. P. Kumar and P. H. S. Torr, “Fast memory-efficient generalized belief propagation,” in Proc. European Conf. on Comput. Vision (ECCV’06), vol. 3954, Austria, May 2006, pp. 451-463.
[38] Y.-C. Tseng, N. Y.-C. Chang, and T.-S. Chang, “Block-based belief propagation with in-place message updating for stereo vision,” in Proc. IEEE Asia Pacific Conf. on Circuits and Syst. (APCCAS’08), Macau, China, Dec. 2008, pp. 918-921.
[39] A. Klaus, M. Sormann, and K. Karner, “Segment-based stereo matching using belief propagation and a self-adapting dissimilarity measure,” in Proc. IEEE Int. Conf. on Pattern Recognition (ICPR’06), Sep. 2006, pp. 15-18.
[40] Q. Yang, L. Wang, R. Yang, H. Stewenius, and D. Nister, “Stereo matching with color-weighted correlation, hierarchical belief propagation and occlusion handling,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 31, no. 3, pp. 1-13, Mar. 2009.
[41] E. S. Larsen, P. Mordohai, M. Pollefeys, and H. Fuchs, “Temporally consistent reconstruction from multiple video streams using enhanced belief propagation,” in Proc. IEEE Int. Conf. on Comput. Vision (ICCV’07), Rio de Janeiro, Brazil, Oct. 2007.
[42] K. Ogawara, “Approximate belief propagation by hierarchical averaging of outgoing messages,” in Proc. IEEE Int. Conf. on Pattern Recognition (ICPR’10), Istanbul, Aug. 2010, pp. 1368-1372.
[43] Q. Yang, L. Wang, and N. Ahuja, “A constant-space belief propagation algorithm for stereo matching,” in Proc. IEEE Conf. on Comput. Vision and Pattern Recognition (CVPR’10), Jun. 2010, pp. 1458-1465.
[44] M. Sarkis and K. Diepold, “Sparse stereo matching using belief propagation,” in Proc. IEEE Int. Conf. on Image Process. (ICIP’08), San Diego, CA, Oct. 2008, pp. 1780-1783.

Disparity Refinement Algorithms

[45] G. Egnal and R. P. Wildes, “Detecting binocular half-occlusions: empirical comparisons of five approaches,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 24, no. 8, pp. 1127-1133, Aug. 2002.
[46] M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Commun. of the ACM, vol. 24, no. 6, pp. 381-395, 1981.
[47] M. Gong, “Enforcing temporal consistency in real-time stereo estimation,” in Proc. European Conf. on Comput. Vision (ECCV’06), vol. 3953, 2006, pp. 564-577.
[48] D. Min, S. Yea, Z. Arican, and A. Vetro, “Disparity search range estimation: forcing temporal consistency,” in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Process. (ICASSP’10), Dallas, Texas, May 2010, pp. 2366-2369.
[49] R. Khoshabeh, S. H. Chan, and T. Q. Nguyen, “Spatio-temporal consistency in video disparity estimation,” in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Process. (ICASSP’11), Prague, Czech Republic, May 2011.
[50] T.-W. Chen and S.-Y. Chien, “Bandwidth adaptive hardware architecture of K-means clustering for video analysis,” IEEE Trans. Very Large Scale Integr. Syst. (TVLSI), vol. 18, no. 6, pp. 957-966, Jun. 2010.

View Synthesis Algorithms and Implementation

[51] C. Fehn, “Depth-image-based rendering (DIBR), compression and transmission for a new approach on 3D-TV,” in Proc. SPIE Conf. on Stereoscopic Displays and Virtual Reality Systems, vol. 5291, May 2004, pp. 93-104.
[52] C. Vázquez, W. J. Tam, and F. Speranza, “Stereoscopic imaging: filling disoccluded areas in image-based rendering,” in Proc. SPIE Three-Dimensional TV, Video, and Display V, vol. 6392, Oct. 2006, pp. 123-134.
[53] C.-M. Cheng, S.-J. Lin, S.-H. Lai, and J.-C. Yang, “Improved novel view synthesis from depth image with large baseline,” in Proc. IEEE Int. Conf. on Pattern Recognition (ICPR’08), Dec. 2008, pp. 1-4.
[54] L. Zhang and W. J. Tam, “Stereoscopic image generation based on depth images for 3D TV,” IEEE Trans. Broadcast., vol. 51, no. 2, pp. 191-199, Jun. 2005.
[55] Y.-R. Horng, Y.-C. Tseng, and T.-S. Chang, “Stereoscopic image generation with directional Gaussian filter,” in Proc. IEEE Int. Symp. Circuits and Syst. (ISCAS’10), May-Jun. 2010, pp. 2650-2653.
[56] W.-Y. Chen, Y.-L. Chang, S.-F. Lin, L.-F. Ding, and L.-G. Chen, “Efficient depth image based rendering with edge dependent depth filter and interpolation,” in Proc. IEEE Int. Conf. on Multimedia and Expo (ICME’07), Jul. 2007, pp. 1314-1317.
[57] Y. K. Park, K. Jung, Y. Oh, S. Lee, J. K. Kim, G. Lee, H. Lee, K. Yun, N. Hur, and J. Kim, “Depth-image-based rendering for 3DTV service over T-DMB,” Signal Processing: Image Communication, vol. 24, no. 1-2, pp. 122-136, Jan. 2009.
[58] S. Rogmans, J.-B. Lu, P. Bekaert, and G. Lafruit, “Real-time stereo-based view synthesis algorithms: a unified framework and evaluation on commodity GPUs,” Signal Processing: Image Communication, vol. 24, no. 1-2, pp. 49-64, Jan. 2009.
[59] Y. Morvan, “Acquisition, compression and rendering of depth and texture for multi-view video,” Ph.D. thesis, Eindhoven University of Technology, Netherlands, Apr. 2009.
[60] A. Telea, “An image inpainting technique based on the fast marching method,” J. Graphics, GPU, & Game Tools, vol. 9, no. 1, pp. 25-36, 2004.
[61] P.-K. Tsung, P.-C. Lin, K.-Y. Chen, T.-D. Chuang, H.-J. Yang, S.-Y. Chien, L.-F. Ding, W.-Y. Chen, C.-C. Cheng, T.-C. Chen, and L.-G. Chen, “A 216fps 4096x2160 3DTV set-top box SoC for free-viewpoint 3DTV applications,” in Proc. IEEE Int. Solid-State Circuits Conf. (ISSCC’11), San Francisco, CA, Feb. 2011, pp. 124-126.
[62] Y.-R. Horng, Y.-C. Tseng, and T.-S. Chang, “VLSI architecture for real time HD1080p view synthesis engine,” to appear in IEEE Trans. Circuits Syst. Video Technol. (TCSVT), vol. 21, no. 9, Sep. 2011.

Associated Algorithms to 3DVC

[63] Depth Estimation Reference Software (DERS), version 4.0 [Online]. Available: http://wg11.sc29.org/svn/repos/MPEG-4/test/tags/3D/depth_estimation/DERS_4
[64] View Synthesis Reference Software (VSRS), version 3.5 [Online]. Available: http://wg11.sc29.org/svn/repos/MPEG-4/test/tags/3D/view_synthesis/VSRS_3_5
[65] Enhancement of temporal consistency for multi-view depth map estimation, ISO/IEC JTC1/SC29/WG11, M15594, Jul. 2008.
[66] Depth estimation improvement for depth discontinuity areas and temporal consistency preserving, ISO/IEC JTC1/SC29/WG11, M16048, Feb. 2009.
[67] The consideration of the improved depth estimation algorithm: the depth estimation algorithm for temporal consistency enhancement in non-moving background, ISO/IEC JTC1/SC29/WG11, M16070, Jan. 2009.
[68] A soft-segmentation matching in Depth Estimation Reference Software (DERS) 5.0, ISO/IEC JTC1/SC29/WG11, M17049, Xian, China, Oct. 2009.
[69] D. Comaniciu and P. Meer, “Mean shift: a robust approach toward feature space analysis,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 24, no. 5, pp. 603-619, May 2002.
[70] Open Source Computer Vision (OpenCV) [Online]. Available: http://opencv.willowgarage.com/wiki/

Test Sequences and Evaluation Methods

[71] Description of exploration experiments in 3D video coding, ISO/IEC JTC1/SC29/WG11, W11095, Kyoto, Japan, Jan. 2010.
[72] D. Scharstein and R. Szeliski, Middlebury Stereo Evaluation – Version 2 [Online]. Available: http://vision.middlebury.edu/stereo/eval/
[73] D. Scharstein and R. Szeliski, “High-accuracy stereo depth maps using structured light,” in Proc. IEEE Conf. on Comput. Vision and Pattern Recognition (CVPR’03), vol. 1, Jun. 2003, pp. 195-202.
[74] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. (TIP), vol. 13, no. 4, pp. 600-612, Apr. 2004.
[75] Peak signal-to-perceptible-noise ratio tool: PSPNR 1.0, ISO/IEC JTC1/SC29/WG11, M16584, London, UK, Jul. 2009.
[76] PSPNR Tool 2.0, ISO/IEC JTC1/SC29/WG11, M16890, Xian, China, Oct. 2009.
[77] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, The SSIM Index for Image Quality Assessment [Online]. Available: http://www.cns.nyu.edu/~lcv/ssim/
[78] HHI test materials for 3D video, ISO/IEC JTC1/SC29/WG11, M15413, Archamps, France, Apr. 2008.

Joint Bilateral Filter and Disparity Upsampling

[79] J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graphics (TOG), vol. 26, no. 3, article 96, Jul. 2007.
[80] D. Chan, H. Buisman, C. Theobalt, and S. Thrun, “A noise-aware filter for real-time depth upsampling,” in Proc. European Conf. on Comput. Vision Workshop on Multicamera and Multimodal Sensor Fusion Algorithms and Applications, Oct. 2008, pp. 1-12.
[81] A. K. Riemens, O. P. Gangwal, B. Barenbrug, and R.-P. M. Berretty, “Multi-step joint bilateral depth upsampling,” in Proc. SPIE Visual Commun. and Image Process., vol. 7257, Jan. 2009.
[82] O. P. Gangwal, E. Coezijn, and R.-P. Berretty, “Real-time implementation of depth map post-processing for 3D-TV on a programmable DSP (TriMedia),” in Proc. IEEE Int. Conf. on Consumer Electronics (ICCE’09), Jan. 2009.
[83] Q. Yang, K.-H. Tan, and N. Ahuja, “Real-time O(1) bilateral filtering,” in Proc. IEEE Conf. on Comput. Vision and Pattern Recognition (CVPR’09), Jun. 2009, pp. 557-564.
[84] F. Durand and J. Dorsey, “Fast bilateral filtering for the display of high-dynamic-range images,” ACM Trans. Graphics (TOG), vol. 21, no. 3, pp. 257-266, Jul. 2002.
[85] S. Paris and F. Durand, “A fast approximation of the bilateral filter using a signal processing approach,” in Proc. European Conf. on Comput. Vision (ECCV’06), May 2006, pp. 568-580.
[86] S. Paris and F. Durand, “A fast approximation of the bilateral filter using a signal processing approach,” Int. J. Comput. Vision (IJCV), vol. 81, no. 1, pp. 24-52, Jan. 2009.
[87] J. Chen, S. Paris, and F. Durand, “Real-time edge-aware image processing with the bilateral grid,” ACM Trans. Graphics (TOG), vol. 26, no. 3, article 103, pp. 1-9, Jul. 2007.
[88] A. Adams, N. Gelfand, J. Dolson, and M. Levoy, “Gaussian KD-trees for fast high dimensional filtering,” ACM Trans. Graphics (TOG), vol. 28, no. 3, article 21, Aug. 2009.
[89] T. Q. Pham and L. J. van Vliet, “Separable bilateral filtering for fast video processing,” in Proc. IEEE Int. Conf. on Multimedia and Expo (ICME’05), Jul. 2005.
[90] T. S. Huang, Ed., Two-Dimensional Digital Signal Processing II: Transforms and Median Filters, New York: Springer-Verlag, 1981, pp. 209-211.
[91] F. Porikli, “Constant time O(1) bilateral filtering,” in Proc. IEEE Conf. on Comput. Vision and Pattern Recognition (CVPR’09), Jun. 2009, pp. 1-8.
[92] B. Weiss, “Fast median and bilateral filtering,” ACM Trans. Graphics (TOG), vol. 25, no. 3, pp. 519-526, Jul. 2006.
[93] M.-H. Ju and H.-B. Kang, “Constant time stereo matching,” in Proc. Int. Machine Vision and Image Process. Conf. (IMVIP’09), Sep. 2009, pp. 13-17.
[94] C. Charoensak and F. Sattar, “FPGA design of a real-time implementation of dynamic range compression for improving television picture,” in Proc. IEEE Int. Conf. on Information Commun. and Signal Process. (ICICS’07), Dec. 2007.
[95] T. Q. Vinh, J. H. Park, Y.-C. Kim, and S. H. Hong, “FPGA implementation of real-time edge-preserving filter for video noise reduction,” in Proc. IEEE Int. Conf. on Comput. and Elect. Eng. (ICCEE’08), Dec. 2008, pp. 611-614.
[96] A. Gabiger, M. Kube, and R. Weigel, “A synchronous FPGA design of a bilateral filter for image processing,” in Proc. IEEE Ind. Electron. Conf. (IECON’09), Nov. 2009, pp. 1990-1995.
[97] S.-K. Han, “An architecture for high-throughput and improved-quality stereo vision processor,” M.S. thesis, Dept. of Electrical and Computer Engineering, Univ. of Maryland, 2010.
[98] A. Wong, NVIDIA GeForce 8800 GTX/GTS Tech Report [Online]. Available: http://www.techarp.com/showarticle.aspx?artno=358&pgno=0
[99] A. L. Shimpi and D. Wilson, NVIDIA’s 1.4 billion transistor GPU: GT200 arrives as the GeForce GTX 280 & 260 [Online]. Available: http://www.anandtech.com/show/2549

Others

[100] M. Tanimoto, “Free-viewpoint television,” in Image and Geometry Processing for 3-D Cinematography, Springer-Verlag, vol. 5, part 1, 2010, pp. 52-76.
[101] M. Tanimoto, M. P. Tehrani, T. Fujii, and T. Yendo, “Free-viewpoint TV,” IEEE Signal Processing Mag., vol. 28, no. 1, pp. 67-76, Jan. 2011.
[102] Q. Wei, “Converting 2D to 3D: a survey,” Inform. and Commun. Theory Group, Faculty Elect. Eng., Math. and Comput. Sci., Delft Univ. of Technol., Netherlands, Research Assignment, Dec. 2005.
[103] D. Hoiem, A. Stein, A. A. Efros, and M. Hebert, “Recovering occlusion boundaries from a single image,” in Proc. IEEE Int. Conf. on Comput. Vision (ICCV’07), Oct. 2007.
[104] D. Hoiem, A. Efros, and M. Hebert, “Recovering surface layout from an image,” Int. J. Comput. Vision (IJCV), vol. 75, no. 1, pp. 151-172, Oct. 2007.
[105] D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vision (IJCV), vol. 47, no. 1-3, pp. 7-42, May 2002.
[106] M. Z. Brown, D. Burschka, and G. D. Hager, “Advances in computational stereo,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 25, no. 8, pp. 993-1008, Aug. 2003.
[107] Joint draft 6.0 on multiview video coding, ISO/IEC JTC1/SC29 and ITU-T SG16 Q.6, JVT-Z209, Antalya, Turkey, Jan. 2008.
[108] N. Matthews, X. Meng, P. Xu, and N. Qian, “A physiological theory of depth perception from vertical disparity,” Vision Research, vol. 43, no. 1, pp. 85-99, Jan. 2003.
[109] J. C. A. Read and B. G. Cumming, “Does depth perception require vertical-disparity detectors?” J. of Vision, vol. 6, no. 12, pp. 1323-1355, Nov. 2006.
[110] Micron Inc., 1Gb DDR3 SDRAM: MT41J128M8JP-125 [Online]. Available: http://www.micron.com/get-document/?documentId=425
[111] J. Diaz, E. Ros, R. Carrillo, and A. Prieto, “Real-time system for high-image resolution disparity estimation,” IEEE Trans. Image Process. (TIP), vol. 16, no. 1, pp. 280-285, Jan. 2007.