Author: 施子庭
Author (English): Shih, Tzu-Ting
Title: 以深度物件偵測開發抗變形之曲面匹配技術
Title (English): Deformation invariant surface matching technique based on deep object detection
Advisors: 黃思皓, 陳安斌
Advisors (English): Huang, Szu-Hao; Chen, An-Pin
Committee members: 姜林杰祐, 李東穎
Committee members (English): ChiangLin, Chieh-Yow; Lee, Tung-Ying
Oral defense date: 2018-09-13
Degree: Master's
Institution: 國立交通大學 (National Chiao Tung University)
Department: 資訊管理研究所 (Institute of Information Management)
Discipline: Computing
Sub-discipline: General Computing
Thesis type: Academic thesis
Publication year: 2018
Graduation academic year: 107
Language: English
Pages: 103
Keywords (Chinese): 物件偵測 (object detection), 圖像匹配 (image matching), 三維變形 (3D deformation), 深度學習 (deep learning), 擴增實境 (augmented reality)
Keywords (English): object detection, image matching, deformation invariant, spline interpolation, augmented reality
Statistics:
  • Cited by: 0
  • Views: 162
  • Downloads: 46
  • Bookmarked: 0
Object detection is a technique visible throughout everyday life, and one of its common applications is augmented reality (AR).
Augmented reality can provide real-time information about many things in the real world; its applications can be seen in automotive driver-assistance systems, live sports broadcasts, and more,
and the performance of the underlying object detection technique directly determines the quality of the augmented reality experience.
In recent years, benefiting from advances in deep learning, techniques for detecting common object classes have made great progress.
However, techniques for detecting a specified pattern in application-specific settings have not seen comparable development.

This study aims to improve the effectiveness of augmented reality applications. It proposes a novel deep-learning-based image detection technique that can locate a specified template image in an arbitrary scene while simultaneously estimating the valid visible image region.
The technique performs image detection through keypoint matching: it takes the keypoints and descriptors extracted from the template image and the target scene as input, and outputs a list of plausible keypoint pairs.
The plausible pairs it outputs are the inliers that can be matched accurately under a two-dimensional polyharmonic spline approximation; with this strategy, the method withstands rotation, translation, scaling, deformation, and even occlusion of the template image.
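
To illustrate the geometric idea behind this inlier criterion, the sketch below fits a regularized two-dimensional polyharmonic (thin-plate) spline to candidate keypoint pairs and flags pairs with large fitting residuals as outliers. It is a simplified stand-in for the learned network described in the thesis, not its actual implementation; the smoothing weight and pixel tolerance are illustrative assumptions.

```python
import numpy as np

def tps_kernel(r):
    # 2-D polyharmonic (thin-plate) spline basis: phi(r) = r^2 * log(r), with phi(0) = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r ** 2 * np.log(r), 0.0)

def fit_tps(src, dst, smooth=1.0):
    """Fit a smoothing thin-plate spline mapping src -> dst (both arrays of shape (N, 2))."""
    n = src.shape[0]
    dist = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    K = tps_kernel(dist) + smooth * np.eye(n)        # regularization keeps the fitted surface smooth
    P = np.hstack([np.ones((n, 1)), src])            # affine part [1, x, y]
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(A, b)                     # (N + 3, 2): kernel weights plus affine coefficients

def tps_apply(coef, src, query):
    U = tps_kernel(np.linalg.norm(query[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((query.shape[0], 1)), query])
    return U @ coef[:-3] + P @ coef[-3:]

def inlier_mask(src, dst, smooth=1.0, tol=3.0):
    """Mark correspondences whose residual under the smoothed spline fit is below tol pixels."""
    coef = fit_tps(src, dst, smooth)
    residual = np.linalg.norm(tps_apply(coef, src, src) - dst, axis=1)
    return residual < tol
```

A pair that survives this test is consistent with a single smooth deformation of the template surface, which is why such a criterion tolerates rotation, translation, scaling, and bending at the same time.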

This study makes two main contributions:
First, the method can detect the target even when the template image is distorted. Under such constrained conditions, it still achieves high matching precision while keeping the error of incorrect pairs low, which allows it to be applied to augmented reality with better rendering results than previous approaches.
Second, the method can be trained on artificially constructed datasets and the resulting model transferred to images collected from the real world, which greatly reduces the cost of model training.
Object detection is an essential task with many uses in our daily life.
One of them is augmented reality, which can enrich our lives by providing information and visualizing virtual content in the real world.
Previous work on object detection achieves notable accuracy in detecting common classes of objects.
However, practical techniques for detecting a specific pattern are still lacking.

We develop the spline network, a deep-learning-based surface matching method, to detect a known template pattern in an unknown scene regardless of its translation, rotation, scaling, deformation, and occlusion.
The spline network takes paired keypoints and descriptors as input and outputs a list of inlier pairs based on polyharmonic spline interpolation.
The system benefits from two properties:
First, the keypoint-based detection technique is inherently tolerant of occlusion.
Second, the spline-based error function gives the model the capacity to find correspondences on a deformed object.
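
The abstract does not specify how the candidate pairs fed to the network are produced. One common choice, assumed here purely for illustration, is nearest-neighbour descriptor matching with Lowe's ratio test, which yields paired keypoints and descriptors in the kind of form such a network could consume:

```python
import numpy as np

def candidate_pairs(kp_tpl, des_tpl, kp_scene, des_scene, ratio=0.8):
    """Form candidate correspondences by nearest-neighbour descriptor matching.

    kp_* : (N, 2) keypoint coordinates, des_* : (N, D) float descriptors.
    Returns stacked [x_tpl, y_tpl, x_scene, y_scene] rows plus the paired descriptors.
    """
    # pairwise Euclidean distances between template and scene descriptors
    d = np.linalg.norm(des_tpl[:, None, :] - des_scene[None, :, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, :2]                  # two nearest scene descriptors per template keypoint
    best = d[np.arange(len(d)), nn[:, 0]]
    second = d[np.arange(len(d)), nn[:, 1]]
    keep = best < ratio * second                       # Lowe's ratio test rejects ambiguous matches
    idx_tpl = np.nonzero(keep)[0]
    idx_scene = nn[keep, 0]
    coords = np.hstack([kp_tpl[idx_tpl], kp_scene[idx_scene]])      # (M, 4) paired coordinates
    descs = np.hstack([des_tpl[idx_tpl], des_scene[idx_scene]])     # (M, 2D) paired descriptors
    return coords, descs
```

The resulting coordinate and descriptor arrays are candidates only; deciding which of them are true inliers is the job the abstract assigns to the spline network.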

This work is designed for practical object detection in augmented reality.
It not only performs keypoint matching but also estimates the visible area of the template pattern, which is necessary to provide a clear boundary for rendering in practice.
Furthermore, we introduce a data simulation framework.
The model benefits from being trained on the generated data.
This substantially reduces the difficulty of collecting training data.
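
As a rough sketch of what such a data simulation framework might look like (the warp model, parameter ranges, and function name below are illustrative assumptions, not the thesis's actual generator), one can synthesize labelled correspondences by warping template keypoints with a random smooth deformation and mixing in random outliers:

```python
import numpy as np

def simulate_sample(kp_tpl, n_outliers=50, warp_scale=0.05, noise=1.0, rng=None):
    """Generate one synthetic training sample with known inlier labels."""
    rng = np.random.default_rng(rng)
    n = kp_tpl.shape[0]

    # random affine transform: rotation, anisotropic scale, translation
    theta = rng.uniform(-np.pi, np.pi)
    scale = rng.uniform(0.7, 1.3, size=2)
    rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    kp_scene = (kp_tpl * scale) @ rot.T + rng.uniform(-50, 50, size=2)

    # smooth low-frequency perturbation standing in for surface deformation, plus pixel noise
    kp_scene += warp_scale * 100 * np.sin(kp_tpl / 100.0) + rng.normal(0, noise, kp_scene.shape)

    # append random outlier correspondences drawn uniformly over the scene extent
    lo, hi = kp_scene.min(axis=0), kp_scene.max(axis=0)
    outliers_scene = rng.uniform(lo, hi, size=(n_outliers, 2))
    outliers_tpl = kp_tpl[rng.integers(0, n, size=n_outliers)]

    pairs = np.vstack([np.hstack([kp_tpl, kp_scene]),
                       np.hstack([outliers_tpl, outliers_scene])])
    labels = np.concatenate([np.ones(n), np.zeros(n_outliers)])   # 1 = inlier, 0 = outlier
    return pairs, labels
```

Because the deformation is generated rather than observed, every pair carries a ground-truth inlier label, so no manual annotation of real footage is required.
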
Abstract (Chinese) - i
Abstract - ii
Table of Contents - iii
List of Figures - v
List of Tables - vii
1 Introduction - 1
1-1 Motivation - 1
1-2 Background - 3
1-3 Research Goal - 6
1-4 Contribution - 7
1-5 Organization - 8
2 Literature Review - 9
2-1 Handcrafted Interest Point Detector - 9
2-2 Learning-based Interest Point Detector - 10
2-3 Region-based Object Detection Approaches - 12
2-4 Match Strategies - 15
2-5 Summary - 16
3 Proposed Method - 19
3-1 Definition of Terms - 19
3-2 Problem Formulation - 20
3-3 System Overview - 21
3-4 Dataset Preparation - 23
3-5 The Spline Network - 25
3-5-1 Main Network - 26
3-5-2 Dense Layer - 28
3-5-3 Context Normalization - 29
3-6 Objective Function - 30
4 Implementation - 34
4-1 Network Architecture - 34
4-2 Initialization - 34
4-3 Input Normalization - 35
4-4 Training Phase - 36
4-5 Hyperparameter Selection - 37
5 Experiments - 40
5-1 Dataset - 41
5-2 Comparison to Keypoint Matching Strategies - 43
5-3 Matching Effectiveness - 45
5-3-1 Performance Evaluation - 45
5-3-2 Result - 47
5-3-3 Importance of Descriptor - 49
5-3-4 Generalization to Unknown Template Image - 52
5-4 Bounding Box Location Estimation - 53
5-4-1 Performance Measurement - 54
5-4-2 Bounding Box Decision Strategies - 55
5-4-3 Result and Discussion - 56
5-5 Demonstration - 56
5-5-1 Data Retrieval and Implementation Details - 58
5-5-2 Results and Discussion - 59
6 Conclusions - 68
References - 70
Appendix A Result of Hyperparameter Selection Experiment - 80
Appendix B Matching Effectiveness on Each Dataset - 83
Appendix C Detailed Bounding Box Location Estimation Results - 95
Appendix D Demonstration on the video dataset: popcorn - 100