臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Researcher: 陳信嘉
Researcher (English): Hsin-Chia Chen
Thesis Title: 以亮度/色彩對比為基礎的影像分析技術之研究
Thesis Title (English): A Study of Image Analysis Techniques Based on Luminance/Color Contrast
Advisor: 王聖智
Advisor (English): Sheng-Jyh Wang
Degree: Doctoral
Institution: 國立交通大學 (National Chiao Tung University)
Department: 電子工程系所 (Department of Electronics Engineering)
Discipline: Engineering
Academic Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2006
Graduation Academic Year: 95 (2006–2007)
Language: English
Number of Pages: 119
Chinese Keywords: 雲彩 (mura), 彩色切割 (color segmentation), 色彩對比 (color contrast), 可見色差 (visible color difference)
Foreign Keywords: Mura, Color Segmentation, Color Contrast, Visible Color Difference
Statistics:
  • Cited by: 7
  • Views: 482
  • Rating:
  • Downloads: 107
  • Bookmarked: 1
In this dissertation, we propose objective visual measures based on luminance/color contrast to estimate subjective human visual assessments. For different image analysis applications, such as automatic panel defect inspection and color segmentation, we first design visual experiments to obtain the subjective criteria humans use when analyzing images, then design objective measures that estimate these subjective assessments, and finally apply the objective measures to automatic panel defect inspection, to the evaluation of color segmentation results, and to color segmentation itself.
A conventional image analysis pipeline consists of four basic steps: 1) image acquisition, 2) image analysis, 3) output of the analysis results, and 4) evaluation of the results. Specifically, one or more input images are fed into the system, which applies different analysis techniques for different applications and outputs the analyzed results; the results are then evaluated against some measures of visual perception. In this dissertation, we add two important processes to this conventional pipeline: visual experiments and the measurement of luminance/color contrast. To obtain results consistent with subjective human visual analysis, we examine, for each application, the role that luminance/color contrast plays in human visual perception. Through visual experiments we define a suitable notion of luminance/color contrast for each application and extract subjective visual quantities that match human perception. To measure these subjective quantities, we then develop objective estimation methods based on luminance/color contrast and use them to build image analysis techniques whose methods and results approximate those of human visual analysis.
The subjective visual quantities that matter to the human eye may differ from one image analysis application to another. In this dissertation we discuss two applications: 1) automatic panel defect inspection and 2) color segmentation. For automatic panel defect inspection, we discuss the subjective visual quantities for panel defect images with low luminance contrast and the problem of measuring them. We first introduce the luminance-contrast-based subjective visual quantity and its measurement formula, the SEMU formula, proposed by Mori et al., together with the visual experiments from which this quantity was derived. Combining the SEMU formula with suitable detection operators such as the LOG operator, we propose several image analysis techniques for detecting different types of panel defects and discuss how to set the optimal automatic threshold.
For color segmentation, we consider human perception of color contrast in images that contain little texture. In such an image, adjacent pixels with low color contrast are usually regarded as belonging to the same region, while locations where adjacent pixels show high color contrast are regarded as region boundaries. We therefore discuss how the human eye perceives color contrast and color difference. For the color segmentation application we also consider subjective visual quantities such as the perceived degrees of over-segmentation and under-segmentation, and we verify the relation between these subjective quantities and the quality of segmentation results through visual experiments. We then design several objective measures based on color contrast to estimate these subjective quantities, and apply the resulting quantitative measures both to the objective evaluation of color segmentation results and to the development of color segmentation algorithms.
Finally, we validate the proposed luminance/color-contrast-based image analysis techniques by simulations on the different applications. The results show that the objective measures designed on the basis of luminance/color contrast correlate strongly with subjective human visual assessments, and that luminance/color contrast indeed plays an indispensable role in the image analysis techniques designed for these applications. Therefore, if luminance/color contrast can be estimated effectively and efficiently, and objective visual measures based on it can be built to estimate the subjective visual quantities the human eye relies on in different applications, we can design image analysis techniques whose methods and results approximate those of human image analysis.
This dissertation presents a study of image analysis techniques that correlate subjective visual qualities with objective visual quantities based on luminance/color contrast. To mimic the way humans perform image analysis, a set of subjective visual quantities is considered. Subjective experiments are performed first to extract these visual quantities and verify their applicability. Objective quantitative measures based on luminance/color contrast are then proposed to measure them. With these objective measures, contrast-based image analysis techniques can be developed for various image analysis applications.
The flow of a conventional image analysis system includes four basic parts: 1) input of the images to be analyzed, 2) image analysis with one or more techniques, 3) output of the analyzed results, and 4) evaluation of the analyzed results. Specifically, given one or more images to be analyzed, different image analysis techniques are adopted for different applications, and the analyzed results are then evaluated according to predefined visual perception requirements. In this dissertation, two more processes are added to the image analysis system: 1) subjective experiments and 2) measurement of luminance/color contrast and/or of visual perception quantities. To mimic the way humans perform image analysis, suitable subjective visual quantities are needed; subjective experiments are required to extract visual quantities that correspond well to human perception; and effective, efficient objective quantitative measures are needed to estimate these subjective visual quantities for each application.
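The extended flow can be summarized as a small processing loop. The sketch below is an assumed skeleton, not code from the dissertation: measure_contrast, analyze, and evaluate are placeholder callables standing for the application-specific stages, and the subjective experiments are assumed to have been run offline to fix the visual quantities that measure_contrast and evaluate estimate.

# Minimal sketch (assumed interfaces, not the dissertation's code) of the
# extended image analysis flow: the added contrast-measurement stage feeds
# both the analysis stage and the contrast-based evaluation stage.
def run_contrast_based_analysis(images, measure_contrast, analyze, evaluate):
    """Yield (analyzed result, evaluation score) for each input image."""
    for img in images:                       # 1) input the images to be analyzed
        contrast = measure_contrast(img)     # added: luminance/color contrast map
        result = analyze(img, contrast)      # 2) application-specific analysis
        score = evaluate(result, contrast)   # 4) evaluation against the subjective
        yield result, score                  #    quantities fixed by the experiments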
In this dissertation, we consider two different image analysis applications: 1) automatic inspection of visual defects (mura) on LCD panels, and 2) color segmentation. The applicable visual quantities differ between applications. For automatic defect inspection, we discuss the visual quantities suitable for extracting visual defects with low luminance contrast. Here we follow the proposal of Mori et al., who performed subjective experiments relating human visual perception to the luminance contrast and area size of visual defects and proposed the SEMU formula as a quantitative measure of that perception. Based on their experiments and the SEMU formula, the degree of an image defect can be quantified effectively from its luminance contrast and area. The LOG operator is then used to detect several types of visual defects, and an optimal thresholding mechanism is also discussed.
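To make the inspection step concrete, the following is a minimal sketch rather than the dissertation's implementation: a Laplacian-of-Gaussian (LOG) filter flags low-contrast blobs in a flat panel image, and each blob is then scored with a SEMU-style visibility index. The constants in semu() follow the commonly cited form of the SEMI draft formula [2] and should be checked against that standard; sigma, the threshold factor k, and mm2_per_pixel are assumed, illustrative parameters.

# Minimal sketch of LOG-based mura detection with SEMU-style scoring.
# Assumptions: the panel image 'fos' is a 2-D luminance array, and the
# SEMU constants below are the commonly cited draft values (check [2]).
import numpy as np
from scipy import ndimage

def semu(contrast_percent, area_mm2):
    """SEMU-style index: defect contrast relative to the JND for its area."""
    return abs(contrast_percent) / (1.97 / (area_mm2 ** 0.33) + 0.72)

def inspect_mura(fos, mm2_per_pixel, sigma=8.0, k=3.0):
    """Return (blob labels, indices of blobs whose SEMU index exceeds 1)."""
    response = ndimage.gaussian_laplace(fos.astype(float), sigma=sigma)
    mask = np.abs(response) > k * response.std()   # simple automatic threshold
    labels, n = ndimage.label(mask)
    background = float(np.median(fos))             # nominal panel luminance
    defects = []
    for i in range(1, n + 1):
        blob = labels == i
        contrast = 100.0 * (fos[blob].mean() - background) / background
        if semu(contrast, blob.sum() * mm2_per_pixel) > 1.0:
            defects.append(i)
    return labels, defects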
For color segmentation applications with little texture, we consider segmentation quality, degree of over-segmentation, and degree of under-segmentation as the visual quantities, and perform a few subjective experiments to verify how these quantities are correlated. Color contrast is used to quantify them: in a color image, adjacent pixels with low color contrast are usually grouped into regions, while adjacent pixels with high color contrast are regarded as edges. For color segmentation, we define color contrast in terms of visible and invisible color difference, and propose objective quantitative measures based on visible/invisible color difference to measure the aforementioned subjective visual quantities. In this dissertation, the "intra-region visual error" is proposed to measure the degree of under-segmentation, while the "inter-region visual error" is proposed to measure the degree of over-segmentation. With these visual measures, image analysis techniques are proposed both to perform color segmentation and to evaluate segmentation results.
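As a concrete illustration (an assumed reading of the Chapter 4 definitions, not the dissertation's exact formulas): the color difference between two pixels in CIE L*a*b* space is Delta E*ab = ((Delta L*)^2 + (Delta a*)^2 + (Delta b*)^2)^(1/2), a difference is treated as visible when Delta E*ab exceeds a just-noticeable-difference threshold, and the two visual errors can then be approximated from adjacent pixel pairs: visible differences hidden inside one region suggest under-segmentation, while invisible differences across a region boundary suggest over-segmentation. The JND value of 2.3 and the pairwise definitions below are assumptions for illustration.

# Minimal sketch (assumed definitions) of pixel-pair color differences in
# CIE L*a*b* and rough intra-/inter-region visual errors for a label map.
import numpy as np
from skimage import color

def adjacent_delta_e(rgb):
    """Delta E*ab between each pixel and its right neighbour (rgb in [0, 1])."""
    lab = color.rgb2lab(rgb)
    d = lab[:, 1:, :] - lab[:, :-1, :]
    return np.sqrt((d ** 2).sum(axis=-1))

def intra_region_visual_error(rgb, labels, jnd=2.3):
    """Fraction of same-region pairs with a visible difference (under-segmentation)."""
    visible = adjacent_delta_e(rgb) > jnd
    same = labels[:, 1:] == labels[:, :-1]
    return float((visible & same).sum()) / max(int(same.sum()), 1)

def inter_region_visual_error(rgb, labels, jnd=2.3):
    """Fraction of cross-region pairs with an invisible difference (over-segmentation)."""
    invisible = adjacent_delta_e(rgb) <= jnd
    cross = labels[:, 1:] != labels[:, :-1]
    return float((invisible & cross).sum()) / max(int(cross.sum()), 1)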
Simulations for these two image analysis applications lead to two conclusions. First, the correlations between the luminance/color-contrast-based quantitative measures and the subjective visual quantities are significant. Second, luminance/color contrast can play an important role in developing image analysis techniques that mimic human perception.
Abstract (in Chinese) i
Abstract v
Acknowledgements ix
Contents xi
List of Tables xv
List of Figures xvi
List of Notations xix
1 Introduction 1
1.1 Dissertation Overview 1
1.2 Organization and Contribution 7
2 Backgrounds 9
2.1 Luminance/Color Contrast 9
2.1.1 Luminance Contrast 10
2.1.2 CIE L*a*b* Color Difference 12
2.1.2.1 CIE L*a*b* Color Space 12
2.1.2.2 Color Difference in CIE L*a*b* Color Space 15
2.2 Introduction of Image Segmentation 16
2.2.1 Image Segmentation Algorithms 16
2.2.1.1 Image Domain-Based Approaches 16
2.2.1.1.1 Edge-Based Methods 17
2.2.1.1.2 Region-Based Methods 17
2.2.1.2 Feature Space-Based Approaches 18
2.2.1.3 Physics-Based Approaches 19
2.2.2 Evaluation Methods for Image Segmentation 20
3 Visual Inspection for Mura on LCDs Based on Luminance Contrast 25
3.1 Introduction of Automatic Inspection for Mura on LCDs 25
3.1.1 SEMU Formula Based on Just Noticeable Difference 28
3.2 Photography of FOS Images 32
3.2.1 Aliasing 33
3.2.2 Cluster Mura and V-Band Mura 35
3.3 Inspection of Cluster Mura 38
3.3.1 Cluster Mura Detection 38
3.3.2 Optimal Threshold Based on the SEMU Formula 40
3.4 Inspection of V-Band Mura 43
3.4.1 V-Band Mura Detection 43
3.4.2 FOS Surface Reconstruction 46
4 Development and Evaluation of Color Segmentation Algorithms Based on Color Contrast 49
4.1 Color Contrast and Visible Color Difference 50
4.1.1 Color Contrast in CIE L*a*b* Color Space 50
4.1.1.1 Definition of Directional Contrast 50
4.1.1.2 Definition of Color Contrast in CIE L*a*b* Color Space 53
4.1.2 Definition of Visible Color Difference 57
4.2 Quantitative Evaluation for Color Segmentation Based on Visible Color Difference 58
4.2.1 Visual Rating Experiments for Color Segmentation Evaluation 58
4.2.2 Quantitative Evaluation for Color Segmentation 68
4.2.2.1 Quantitative Measures of Visual Errors Based on Visible Color Difference 68
4.2.2.1.1 Intra-region Visual Error 70
4.2.2.1.2 Inter-region Visual Error 71
4.2.2.1.3 The Inter-Region Error/Intra-Region Error Plot 72
4.2.2.1.4 Ratio of Intra-region Visual Error to Inter-region Visual Error 74
4.2.2.2 Color Segmentation Evaluation Based on Inter-Region-Error/Intra-Region-Error Plot 77
4.2.2.3 Performance Comparison of Color Segmentation Algorithms Based on Inter-Region-Error/Intra-Region-Error Plot 87
4.3 Color Segmentation Algorithms Based on Color-Contrast and Visible-Color-Difference 89
4.3.1 Color Segmentation Algorithm Based on Color Contrast 89
4.3.2 Color Segmentation Algorithm Based on Visible Color Difference 94
4.3.2.1 Modified Quantitative Visual Error Measures 94
4.3.2.2 Color Segmentation Algorithm Uniting with Quantitative Measures 97
4.3.2.2.1 Region Adjacent Graph 98
4.3.2.2.2 Color Segmentation Uniting with Quantitative Measures 99
5 Conclusions 107
Bibliography 111
Curriculum Vita 117
[1] P. Whittle, “The Psychophysics of Contrast Brightness,” In A. L. Gilchrist (Ed.), Lightness, Brightness, and Transparency, pp. 35-110. Hillsdale, NJ: Lawrence Erlbaum Associates, 1994.
[2] Semiconductor Equipment and Materials International (SEMI) Standard, “New Standard: Definition of Measurement Index (SEMU) for Luminance Mura in FPD Image Quality Inspection,” draft number: 3324, pp. 1-6, 2002.
[3] Y. Mori, R. Yoshitake, T. Tamura, T. Yoshizawa and S. Tsuji, “Evaluation and Discrimination Method of “Mura” in Liquid Crystal Displays by Just Noticeable Difference Observation,” Proceedings of SPIE (The International Society for Optical Engineering), Optomechatronic Systems III, vol. 4902, pp. 715-722, Oct. 2002.
[4] D. G. Lee, I. H. Kim, M. C. Jeong, B. K. Oh, and W. Y. Kim, “Mura Analysis Method by Using JND Luminance and The SEMU Definition,” Proceedings of SID (Society for Information Display), pp. 1467-1470, 2003.
[5] T. Tamura, M. Baba and T. Furuhata, “Effect of The Background Luminance on Just Noticeable Difference Contrast of ‘Mura’ in LCDs,” Proceedings of SID (Society for Information Display), pp. 1527-1530, 2003.
[6] R. S. Berns, “Billmeyer and Saltzman’s Principles of Color Technology, 3rd Edition,” John Wiley and Sons, 2000.
[7] “Illustration of The CIE L*a*b* Color Space,” http://cit.dixie.edu/vt/reading/gamuts.asp.
[8] K. N. Plataniotis and A. N. Venetsanopoulos, “Color Image Processing and Applications,” Springer, 2000.
[9] Video Electronics Standards Association (VESA): Flat Panel Display Measurements Standard, version 2.0.
[10] Y. Mori, K. Tanahashi, and S. Tsuji, “Quantitative Evaluation of Visual Performance of Liquid Crystal Displays,” Proceedings of SPIE (The International Society for Optical Engineering), The Algorithms and Systems for Optical Information Processing, vol. 4113, pp. 242-249, 2000.
[11] W. K. Pratt, S. S. Sawkar, and K. O’Reilly, “Automatic Blemish Detection in Liquid Crystal Flat Panel Displays,” Proceedings of SPIE (The International Society for Optical Engineering), vol. 3306, pp. 2-13, 1998.
[12] V. Gibour and T. Leroux, “Automated, Eye-like Analysis of Mura Defects,” Proceedings of SID (Society for Information Display), pp. 1440-1443, 2003.
[13] L. Lucchese and S. K. Mitra, “Color Image Segmentation: A State-of-The-Art Survey,” Proc. The Indian National Science Academy (INSA-A), vol. 67, A, no.2, pp. 207-221, New Delhi, India, Mar. 2001.
[14] W. Y. Ma and B. S. Manjunath, “Edge Flow: A Technique for Boundary Detection and Image Segmentation,” IEEE Trans. Image Processing, vol. 9, no. 8, pp. 1375-1388, 2000.
[15] Y. Deng, and B. S. Manjunath, “Unsupervised Segmentation of Color-texture Regions in Images and Video,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 8, pp. 800-810, Aug. 2001.
[16] J. Canny, “A Computational Approach to Edge Detection,” IEEE Trans. Pattern Anal. Machine Intell., vol. 8, no. 6, pp. 679-698, Nov. 1986.
[17] A. Cumani, “Edge Detection in Multispectral Images,” CVGIP: Graphical Models and Image Processing, vol. 53, no. 1, pp. 40-51, Jan. 1991.
[18] W. Y. Ma and B.S. Manjunath, “Edge Flow: A Framework of Boundary Detection and Image Segmentation,” Proc. IEEE Conf. on Computer Vision Pattern Recognition, pp. 744-749, June 1997.
[19] C. Xu and J. L. Prince, “Snakes, Shapes, and Gradient Vector Flow,” IEEE Trans. Image Processing, vol. 7, no.3, pp. 359-369, Mar. 1998.
[20] L. Vincent and P. Soille, “Watersheds in Digital Space: An Efficient Algorithm Based on Immersion Simulations,” IEEE Trans. Pattern Anal. Machine Intell., vol. 13, no. 6, pp. 583-598, June 1991.
[21] S. C. Zhu and A. Yuille, “Region Competition: Unifying Snakes, Region Growing, and Bayes/MDL for Multiband Image Segmentation,” IEEE Trans. Pattern Anal. Machine Intell., vol. 18, no. 9, pp.884-900, Sep. 1996.
[22] K. Haris, S. N. Efstratiadis, N. Maglaveras, and A. K. Katsaggelos, “Hybrid Image Segmentation Using Watersheds and Fast Region Merging,” IEEE Trans. Image Processing, vol. 7, no. 12, pp. 1684-1699, Dec. 1998.
[23] Y. Deng, B. S. Manjunath, and H. Shin, “Color Image Segmentation,” Proc. IEEE Conf. on Computer Vision Pattern Recognition, vol. 2, pp. 446-451, June 1999.
[24] G. T. Herman and B. M. Carvalho, “Multiseeded Segmentation Using Fuzzy Connectedness,” IEEE Trans. Pattern Anal. Machine Intell., vol. 23, no. 5, pp. 460-474, May 2001.
[25] I. Vanhamel, I. Pratikakis, and H. Sahli, “Multiscale Gradient Watersheds of Color Images,” IEEE Trans. Image Processing, vol. 12, no. 6, pp. 617-626, June 2003.
[26] M. A. Ruzon and C. Tomasi, “Edge, Junction, and Corner Detection Using Color Distributions,” IEEE Trans. Pattern Anal. Machine Intell., vol. 23, no. 11, pp. 1281-1295, Nov. 2001.
[27] D. Comaniciu and P. Meer, “Mean Shift: A Robust Approach toward Feature Space Analysis,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 5, pp. 603-619, May 2002.
[28] Y. Cheng, “Mean Shift, Mode Seeking, and Clustering,” IEEE Trans. Pattern Anal. Machine Intell., vol. 17, no. 8, pp. 790-799, Aug. 1995.
[29] J. Shi and J. Malik, “Normalized cuts and image segmentation,” Proc. IEEE Conf. on Computer Vision Pattern Recognition, pp. 731-737, June 1997.
[30] D. Comaniciu and P. Meer, “Robust Analysis of Feature Spaces: Color Image Segmentation,” Proc. IEEE Conf. on Computer Vision Pattern Recognition, pp. 750-755, June 1997.
[31] T. Hofmann, J. Puzicha, and J. M. Buhmann, “Unsupervised Texture Segmentation in a Deterministic Annealing Framework,” IEEE Trans. Pattern Anal. Machine Intell., vol. 20, no. 8, pp. 803-818, Aug. 1998.
[32] D. Comaniciu and P. Meer, “Mean Shift Analysis and Applications,” Proc. IEEE Conf. on Intl. Conf. on Computer Vision, vol. 2, pp. 1197-1203, Kerkyra, Greece, Sep. 1999.
[33] M. A. Ruzon and C. Tomasi, “Color Edge Detection with The Compass Operator,” Proc. IEEE Conf. on Computer Vision Pattern Recognition, vol. 2, pp. 160-166, June 1999.
[34] J. Shi and J. Malik, “Normalized Cuts and Image Segmentation,” IEEE Trans. Pattern Anal. Machine Intell., vol. 22, no. 8, pp. 888-905, Aug. 2000.
[35] H. D Cheng and Y. Sun, “A Hierarchical Approach to Color Image Segmentation Using Homogeneity,” IEEE Trans. Image Processing, vol. 9, no. 12, pp. 2071-2082, Dec. 2000.
[36] T. W. Lee and M. S. Lewicki, “Unsupervised Image Classification, Segmentation, and Enhancement Using ICA Mixture Models,” IEEE Trans. Image Processing, vol. 11, no. 3, pp. 270-279, Mar. 2002.
[37] Z. Tu and S. C. Zhu, “Image Segmentation by Data-Driven Markov Chain Monte Carlo,” IEEE Trans. Pattern Anal. Machine Intell., vol. 24, no. 5, pp. 657-673, May 2002.
[38] C. Carson, S. Belongie, H. Greenspan, and J. Malik , “Blobworld Image Segmentation Using Expectation-Maximization and Its Application to Image Querying,” IEEE Trans. Pattern Anal. Machine Intell., vol. 24, no. 8, pp. 1026-1038, Aug. 2002.
[39] O. J. Tobias and R. Seara, “Image Segmentation by Histogram Thresholding Using Fuzzy Sets,” IEEE Trans. Image Processing, vol. 11, no. 12, pp. 1457-1465, Dec. 2002.
[40] T. Gevers, “Adaptive Image Segmentation by Combining Photometric Invariant Region and Edge Information,” IEEE Trans. Pattern Anal. Machine Intell., vol. 24, no. 6, pp. 848-852, June 2002.
[41] H. D. Cheng, X. H. Jiang, Y. Sun, and J. Wang, “Color Image Segmentation: Advances and Prospects,” Pattern Recognit., vol. 34, no. 6, pp. 2259-2281, Dec. 2001.
[42] Y. J. Zhang, “A Survey on Evaluation Methods for Image Segmentation,” Pattern Recognit., vol. 29, no.8, pp. 1335-1346, Aug. 1996.
[43] Y. J. Zhang, “A Review of Recent Evaluation Methods for Image Segmentation,” Proc. 6th Int. Symp. on Signal processing and its applications, pp. 148-151, Kuala Lumpur, Malaysia, Aug. 2001.
[44] M. D. Heath, S. Sarkar, T. Sanocki, and K. W. Bowyer, “A Robust Visual Method for Assessing The Relative Performance of Edge-detection Algorithms,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 12, pp. 1338-1359, Dec. 1997.
[45] A. Hoover, G. Jean-Baptiste, X. Jiang, P. J. Flynn, H. Bunke, D. B. Goldgof, K. Bowyer, D. W. Eggert, A. Fitzgibbon, and R. B. Fisher, “An Experimental Comparison of Range Image Segmentation Algorithms,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, no. 7, pp. 673-689, July 1996.
[46] P. L. Correia and F. Pereira, “Objective Evaluation of Video Segmentation Quality,” IEEE Trans. Image Processing, vol. 12, no.2, pp. 186-200, Feb. 2003.
[47] D. D. Martin, C. C. Fowlkes, D. Tal, and J. Malik, “A Database of Human Segmented Natural Images and Its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics,” Proc. IEEE Int. Conf. on Computer vision, vol. 2, pp. 416-423, Vancouver, Canada, July 2001.
[48] D. R. Martin, C. C. Fowlkes, and J. Malik, “Learning to Detect Natural Image Boundaries Using Local Brightness, Color, and Texture Cues,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 5, pp. 530-549, May 2004.
[49] J. Liu and Y. H. Yang, “Multiresolution Color Image Segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 16, no. 7, pp. 689-700, July 1994.
[50] M. Borsotti, P. Campadelli, and R. Schettini, “Quantitative Evaluation of Color Image Segmentation Results,” Pattern Recognit. Letters, vol. 19, no. 8, pp. 741-747, June 1998.
[51] Intel Corp., “Anti-Aliasing Diffractive Aperture and Optical System Using The Same,” U.S. Patent 5940217, Aug. 1999.
[52] “Aliasing Reduction in Discrete Imaging System Using Pupil Function Controlling,” Acta Opt. Sin., vol. 19, no. 3, pp. 289-294, 1999.
[53] G. Wyszecki and W. Stiles, “Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd Edition,” New York: John Wiley and Sons, 1982.
[54] International Electrotechnical Commission (IEC) 61966-2-1, http://www.iec.ch, “sRGB – Default RGB colour space,” Oct. 1999.
[55] G. Sharma and H. J. Trussell, “Digital Color Image,” IEEE Trans. Image Processing, vol. 6, no. 7, pp. 901-932, July 1997.
[56] J. Y. Hardeberg, “Acquisition and Reproduction of Colour Images: Colorimetric and Multispectral Approaches,” PhD dissertation, Ecole Nationale Supérieure des Télécommunications, Paris, France, 1999.
[57] ITU-R Recommendation BT. 500-11, “Methodology for The Subjective Assessment of The Quality of Television Pictures”, Geneva, 2002 (available at http://www.itu.org).
[58] S. Siegel, “Nonparametric Statistics for The Behavioral Sciences,” McGraw-Hill Kogakusha Ltd., Tokyo, 1956.
[59] R. A. Fisher and F. Yates, “Statistical Methods for Research Workers, 14th Edition,” Oliver and Boyd Ltd., Edinburgh, 1970.