
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 蔡宇捷
Author (English): Yu-Chieh Tsai
Title: 基於特徵法則之樣板比對精度量測
Title (English): Measure Accuracy of Template Matching Based on Feature-Based Approach
Advisor: 陳進興
Advisor (English): Chin-Hsing Chen
Degree: Master's
Institution: National Cheng Kung University
Department: Institute of Computer and Communication Engineering
Discipline: Engineering
Field: Electrical and Information Engineering
Document type: Academic thesis
Year of publication: 2009
Graduation academic year: 97
Language: English
Number of pages: 80
Chinese keywords: 樣板比對 (template matching), 環狀投影 (ring projection), 特徵 (feature)
English keywords: feature, template matching, ring projection
Cited by: 2
Views: 308
Downloads: 63
Template matching is one of the most common techniques used in signal and image processing, with applications in image retrieval, image registration, object detection, image recognition, and so on. The task is as follows: given a reference image of an object, decide whether the object exists in a scene image under analysis and, if so, find its location. In industrial applications, accurate and efficient template matching not only classifies products quickly but also increases throughput. In this thesis, a precise and robust method is proposed to measure the positions of IC packages after displacements. Three kinds of IC package images, with and without noise interference, are studied.
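The basic task described above can be sketched with an exhaustive search under the normalized correlation coefficient, the similarity measure used in this thesis. This is a minimal illustration of plain template matching, not the thesis's accelerated coarse-to-fine method:

```python
import numpy as np

def match_template_ncc(scene, template):
    """Slide `template` over `scene` and return the (row, col) position
    with the highest normalized correlation coefficient, plus the score."""
    th, tw = template.shape
    sh, sw = scene.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            window = scene[r:r + th, c:c + tw]
            w = window - window.mean()
            denom = np.sqrt((w * w).sum()) * t_norm
            if denom == 0:          # flat window: correlation undefined, skip
                continue
            score = (w * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

Because every window position is scored, this brute-force version is far too slow for production use; the pyramid and feature-based steps described below exist precisely to prune this search.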

The proposed template matching algorithm is described as follows. The features of the template are extracted and saved in the teaching stage to avoid repeated computation. The Gaussian pyramid method is then used to reduce the resolution and size of the scene and template images. The template is moved over the scene image, and the features of the subimage covered by the template are extracted by the Ring Projection Transformation (RPT). The normalized correlation coefficient is used to select the matching candidates in the coarse search. These candidates determine the search range for the fine search, in which the RPT is combined with the proposed five-dimensional cubature formula. Finally, a second-degree polynomial fitting formula is used to register the matching positions with subpixel accuracy.
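The RPT step above reduces a 2-D window to a 1-D signature by averaging gray levels over concentric rings around the window center; since each ring averages over all angles, the signature is insensitive to rotation. A minimal sketch of the idea (the thesis's exact ring widths and normalization are assumptions here):

```python
import numpy as np

def ring_projection(window):
    """Average the gray levels of a square `window` over concentric
    rings about its center, yielding a 1-D rotation-invariant feature."""
    h, w = window.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    radii = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2).astype(int)
    max_r = min(h, w) // 2      # stay within the inscribed circle
    feature = np.zeros(max_r + 1)
    for r in range(max_r + 1):
        mask = radii == r
        if mask.any():
            feature[r] = window[mask].mean()
    return feature
```

Matching 1-D signatures instead of full 2-D windows is what makes the coarse search cheap: the per-window comparison drops from O(N²) pixels to O(N) ring values.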

Our proposed method is evaluated in terms of stability and matching accuracy. Three kinds of IC package images, with and without noise, are used. In our experiments, the matching error is under 0.05 mm in both the horizontal and vertical directions. The proposed method takes approximately 0.735 seconds to complete the entire operation on a 2.6 GHz Pentium 4 processor for a 512*512 scene image and a 128*128 template.
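The subpixel registration mentioned above fits a second-degree polynomial through the correlation scores around the integer-pixel peak; the vertex of the fitted parabola gives the fractional offset. A 1-D sketch using the standard three-point closed form (the thesis applies such a fit in both directions; this is an illustration, not necessarily its exact formula):

```python
def subpixel_peak(s_left, s_center, s_right):
    """Fit a parabola through three neighboring correlation scores and
    return the fractional offset of its vertex from the center sample,
    in the range roughly (-0.5, 0.5) when s_center is the discrete peak."""
    denom = s_left - 2.0 * s_center + s_right
    if denom == 0:              # degenerate (collinear) samples: no refinement
        return 0.0
    return 0.5 * (s_left - s_right) / denom
```

Applied once along each axis at the best integer position, this refines the match location to a fraction of a pixel, which is what allows positioning errors well below the pixel pitch (here, under 0.05 mm).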
Abstract I
Contents III
Figure Captions V
Table Captions IX
Chapter 1 Introduction 1
1.1 Background 1
1.2 Related Work 3
1.3 Research Scope and Methods 6
1.4 Organization of The Thesis 7
Chapter 2 Techniques of Image Preprocessing 8
2.1 Introduction 8
2.2 Ring Projection Transformation 8
2.3 The Measurement of Similarity 12
2.4 Image Moments 12
2.4.1 Hu Moments 13
2.4.2 Zernike Moments 16
2.4.3 Radial Chebyshev Moments 21
2.5 Pyramid Methods in Image Processing 26
2.5.1 Gaussian Window Function 26
2.5.2 Gaussian Pyramid 27
Chapter 3 Procedure of Template Matching and The Proposed Method 29
3.1 Introduction 29
3.2 The Procedure of Template Matching 30
3.2.1 Acquiring Images 31
3.2.2 Teaching 32
3.2.3 Coarse Search 33
3.2.4 Fine Search 37
3.2.5 Registering in Subpixel Accuracy 38
3.3 The Proposed Method 39
Chapter 4 Experimental Results and Discussions 47
4.1 Introduction 47
4.2 Evaluation of Accuracy Using Noise-Free Images 47
4.3 Evaluation of Accuracy Using Images with Gaussian Noises 57
4.4 Discussions 71
Chapter 5 Conclusions and Future Work 72
Appendix A 74
The One-Dimension Taylor Expansion 74
The Two-Dimension Taylor Expansion 75
Reference 77