Author: 詹霖 (Hani Ousamah Morad Jamleh)
Title (Chinese): 使用影像散焦點展延函數信息方法來估測淺景深度圖
Title (English): Shallow Depth Map Estimation from Image Defocus Blur Point Spread Function Information
Advisor: 陳中平 (Chung-Ping Chen)
Oral Defense Committee: 傅楸善 (Chiou-Shann Fuh), 盧奕璋 (Yi-Chang Lu), 吳家麟 (Ja-Ling Wu), 賴飛羆 (Feipei Lai)
Oral Defense Date: 2014-05-20
Degree: Ph.D.
Institution: National Taiwan University
Department: Graduate Institute of Electronics Engineering
Field: Engineering
Discipline: Electrical and Computer Engineering
Document Type: Academic thesis
Publication Year: 2014
Graduation Academic Year: 102 (2013-2014)
Language: English
Pages: 154
Keywords: Defocus Map, Depth of Field, Mura Defect, Computational Photography, Digital Image Processing, Defocus Amplification, Depth from Defocus, Shape from Focus, TFT-LCD, Camera, Computer Vision, Shallow Focus, Depth Map
Cited by: 0
Views: 143
Downloads: 0
Bookmarked: 2
This research addresses the influence of the camera's limited aperture size, together with the influence of defocus aberration, on output images, in order to measure useful information such as defocus and depth through the MTF (Modulation Transfer Function); we further quantify the existing defocus levels by measuring the size of the blur kernels.
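One common way to measure the blur-kernel size at an edge, used in several depth-from-defocus works, is the reblur gradient-ratio: reblur the image with a known Gaussian and compare gradient magnitudes before and after. The sketch below assumes a Gaussian PSF model; the function name and the choice of `sigma0` are illustrative, not the thesis's exact procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_blur_sigma(image, sigma0=1.0, eps=1e-8):
    """Estimate the per-pixel defocus blur (Gaussian sigma) by reblurring.
    At an edge blurred with sigma, the ratio |grad I| / |grad I_reblur|
    equals sqrt((sigma^2 + sigma0^2) / sigma^2), which we invert for sigma."""
    img = image.astype(float)
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)                       # gradient of the input
    gy2, gx2 = np.gradient(gaussian_filter(img, sigma0))
    grad2 = np.hypot(gx2, gy2)                    # gradient after reblurring
    ratio = grad / (grad2 + eps)
    ratio = np.clip(ratio, 1.0 + eps, None)       # the model needs ratio > 1
    return sigma0 / np.sqrt(ratio ** 2 - 1.0)
```

The estimate is only meaningful at edge pixels, where the gradients are significant; smooth regions require the propagation step discussed later in the abstract.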
One goal of our study is to produce shallow-depth-of-field photos with a blurry background. To obtain this artistic effect, which is desired in many types of photographs (e.g. portraits), photographers need a camera such as an SLR (single-lens reflex): they must not only choose the best position with respect to the subject, but also adjust the lens's effective focal length or aperture size. This option is unavailable to ordinary users who, for ease of use and convenience, prefer low-cost compact point-and-shoot cameras.
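Why aperture and focal length control background blur follows from the thin-lens model (the circle of confusion of Section 2.2.2). Under the standard thin-lens approximation, a lens of focal length f at f-number N focused at distance S renders a point at distance D as a blur circle of diameter c = f^2 |D - S| / (N D (S - f)):

```python
def blur_circle_diameter(f, N, focus_dist, obj_dist):
    """Thin-lens circle-of-confusion diameter (same units as f) for an
    object at obj_dist, with focal length f and f-number N, when the
    lens is focused at focus_dist. All distances are from the lens."""
    return (f * f / N) * abs(obj_dist - focus_dist) / (obj_dist * (focus_dist - f))

# A 50 mm f/2 lens focused at 2 m renders a point 4 m away as a
# blur circle of roughly 0.32 mm on the sensor.
c = blur_circle_diameter(f=0.050, N=2.0, focus_dist=2.0, obj_dist=4.0)
```

The f^2/N factor makes the effect explicit: the long focal lengths and wide apertures of SLR lenses enlarge the blur circle, while the short focal lengths of compact cameras keep nearly everything in focus.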
Meanwhile, TFT-LCDs (thin-film-transistor liquid-crystal displays) keep growing in size, which makes it harder to inspect the defects they may contain; a human examiner is usually required to judge the severity of the defects on the final product. These defects, called mura (from a Japanese word for unevenness), are visible blemishes with non-uniform shapes and boundaries. Mura is a serious visual artifact that must be detected and inspected in order to characterize an LCD's quality.
This research makes two main contributions. First, given only two images taken under different camera parameters, we measure a reliable defocus map based on scale-space analysis; we then propagate the defocus measures at edges to the entire image through a matting process, yielding a refined dense defocus map. This map enables applications such as amplifying the existing blurriness to produce shallow-depth-of-field photos from all-in-focus images, and it also helps extract the foreground object's shape and isolate it from the background. Second, we experimentally detect many types of mura defects on LCD panels with low-complexity yet effective post-processing imaging techniques.
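The detection pipeline of the second contribution (background removal, fusion of normalized gradient and second-derivative responses, thresholding, morphological cleanup; Sections 4.4.1-4.4.7) can be sketched as follows. The uniform background filter, the equal fusion weights, and the mean + k*std threshold here are illustrative simplifications, not the exact parameters of the method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, uniform_filter

def detect_mura_candidates(panel, k=3.0):
    """Flag low-contrast mura candidates on a flat test pattern by fusing
    the normalized gradient-magnitude and Laplacian responses of a
    background-subtracted residual image, then thresholding."""
    img = panel.astype(float)
    # residual image: remove the slowly varying backlight/background
    residual = gaussian_filter(img - uniform_filter(img, size=31), 1.0)
    gy, gx = np.gradient(residual)
    responses = [np.hypot(gx, gy),        # first-derivative (gradient) response
                 np.abs(laplace(residual))]  # second-derivative response
    norm = [(r - r.min()) / (r.max() - r.min() + 1e-8) for r in responses]
    fused = 0.5 * norm[0] + 0.5 * norm[1]          # fuse both responses
    mask = fused > fused.mean() + k * fused.std()  # global threshold
    # a morphological opening would typically follow to remove speckle
    return mask
```

Because both responses are normalized before fusion, a faint blob and a sharp scratch can be caught by the same threshold, which is what makes the scheme effective on low-contrast mura.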
In practice, we apply computational-photography techniques both to amplify defocus levels and to detect low-contrast defects such as mura.
These techniques allow average photographers to capture more appealing photos, and LCD manufacturers to improve their engineers' efficiency and performance. We show that this study will enable cameras and automated vision systems to embed useful computation with little user intervention.
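Defocus amplification, as described in the abstract, re-blurs each pixel in proportion to its estimated blur so that an all-in-focus photo acquires a shallow depth of field. A simplified layered renderer is sketched below; the quantization into discrete blur levels and the linear gain are illustrative choices, not the thesis's exact procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def amplify_defocus(image, defocus_map, gain=2.0, levels=8):
    """Synthesize a shallower depth of field: re-blur each pixel with an
    amplified version of its estimated blur sigma. The defocus map is
    quantized into a few levels, each level is rendered with one Gaussian
    blur, and the layers are composited back together."""
    img = image.astype(float)
    sigmas = np.linspace(defocus_map.min(), defocus_map.max(), levels)
    # index of the nearest quantization level for every pixel
    idx = np.abs(defocus_map[..., None] - sigmas).argmin(axis=-1)
    out = np.zeros_like(img)
    for i, s in enumerate(sigmas):
        amplified = gaussian_filter(img, max(gain * s, 1e-3))
        out[idx == i] = amplified[idx == i]   # composite this blur layer
    return out
```

Pixels with near-zero estimated defocus (the focused subject) are left essentially untouched, while already-blurry background pixels are blurred further, which is the behavior the abstract calls defocus amplification.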




Oral Defense Committee Certification (Chinese) ii
Oral Defense Committee Certification (English) iii
Abstract iv
Acknowledgements vi
Dedication vii
Table of Contents viii
List of Figures xii
List of Tables xvii
Chapter 1 Introduction 18
Chapter 2 Background and Preliminaries 22
2.1 Geometrical Imaging and Camera Model 22
2.1.1 f-number: N 25
2.2 Point Spread Function (PSF) 26
2.2.1 Wave Optics: Airy Disk PSF 26
2.2.2 Circle of Confusion: coc 29
2.2.3 Focal Gradient 31
2.2.4 Sensor Size Effect 33
2.2.5 Defocus Aberration Model 34
2.2.6 Depth of Field: The Circle of Confusion is Fixed 37
2.3 Optical Transfer Function (OTF) 40
Chapter 3 Depth Map Estimation from Defocus Blur PSF Information 43
3.1 Introduction 43
3.2 Previous and Related Work 44
3.3 Depth from Focus Process: DFF 49
3.4 Blur Estimation from Defocus Information 50
3.4.1 Sparse Defocus Estimation from One Image 54
3.4.2 Defocus Estimation from Two Images 63
3.5 Defocus Estimation Enhancement 69
3.5.1 Sparse Blur Map Post-Processing 70
3.5.2 Image Block Neighborhood Effect 72
3.5.3 Image Zoom Calibration 73
3.5.4 Scale-Space Image Processing 75
3.6 Defocus Propagation and Interpolation 81
3.6.1 Defocus Propagation by Alpha-Matting 81
3.7 Depth from Defocus Process Implementation Using Two Images 84
3.7.1 Input Image Preparation and Smoothing 85
3.7.2 Parseval's Theorem (Energy Theorem) and Laplacian Filter 85
3.7.3 Calibration Process 87
3.7.4 Depth Map Measurement 88
3.7.5 Implementation Flow-chart 90
3.8 Experimental Results 94
3.8.1 Computer Environment 94
3.8.2 Cameras and Settings 94
3.8.3 Defocus Map Generation from a Single Image 95
3.8.4 Estimated Defocus Map by Two Images 97
3.9 Summary 112
Chapter 4 Automatic MURA Defect Detection and Inspection in LCD Panels 114
4.1 Introduction 114
4.2 Previous and Related Work 116
4.3 System Architecture and Approach 117
4.3.1 Pseudo-Mura Patterns 119
4.4 Mura Detection Algorithm by Segmentation 121
4.4.1 Preprocessing and Residual Image Extraction 123
4.4.2 Averaging Filter 124
4.4.3 Gradient Operation and Derivatives 125
4.4.4 The Second Derivative (Laplacian) of the Sample Image 126
4.4.5 The Fusion Operation of Two Responses 127
4.4.6 Thresholding 130
4.4.7 Morphological Post-Processing Operation 131
4.5 Experimental Results 131
4.6 Discussions 132
4.7 Summary 132
Chapter 5 Defocus Amplification and Focused Object Extraction 134
5.1 Introduction 134
5.2 Related Works 135
5.3 Image Defocus Amplification 136
5.3.1 Defocus Map Amplification Experimental Results 137
5.4 Focused Object Extraction 142
5.5 Summary 143
Chapter 6 Conclusion and Future Work 144
6.1 Summary 144
6.2 Future Work 145
Bibliography 147


