National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: Cheng-Yu Pei (裴振宇)
Title: Three Dimensional Background Feature Model for Augmented Reality (以三維背景特徵模型為基礎之增添式實境)
Advisor: Yi-Ping Hung (洪一平)
Degree: Master's
Institution: National Taiwan University
Department: Graduate Institute of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2007
Graduation Academic Year: 95 (ROC calendar; 2006-2007)
Language: English
Pages: 46
Chinese Keywords: 增添式實境, 相機位置估測, 不變性區域特徵點
English Keywords: augmented reality, camera pose estimation, invariant local feature
Usage counts:
  • Cited by: 0
  • Views: 122
  • Rating: (none)
  • Downloads: 0
  • Bookmarked: 0
Chinese Abstract: In this thesis, we propose a three-dimensional background feature model (3DBFM) that records the 3D positions and appearances of salient feature points in a scene. By establishing correspondences between features detected in the image and the 3D background model, we use an ICP-based camera-parameter estimation algorithm to compute the extrinsic camera parameters, and we apply these parameters in augmented reality so that virtual objects are rendered correctly in the image. In addition, as video capture continues, the 3D background model keeps updating its stored data, including 3D positions and feature appearances. Image features that do not correspond to the 3D background model are observed over a period of time; their 3D positions are then computed and added to the model, so the model gradually extends its working range. An advantage of our method is that it keeps running without disruption under sudden illumination changes and partial occlusion. Even when all feature points in the environment are occluded, the system returns to normal operation as soon as features recorded in the 3D background model are observed again. These properties make our method well suited to augmented reality systems.
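The correspondence step described above, in which features detected in the current frame are associated with features stored in the 3D background model, can be sketched as brute-force nearest-neighbor descriptor matching with a ratio test (cf. Lowe [24]). This is a minimal illustration rather than the thesis's exact procedure; the function name, the Euclidean metric, and the 0.8 ratio threshold are assumptions.

```python
import numpy as np

def match_descriptors(model_desc, image_desc, ratio=0.8):
    """Nearest-neighbor descriptor matching with a ratio test.

    model_desc: (M, D) descriptors stored in the 3D background model.
    image_desc: (N, D) descriptors extracted from the current frame.
    Returns a list of (image_index, model_index) accepted matches.
    """
    matches = []
    for i, d in enumerate(image_desc):
        # distances from this image descriptor to every model descriptor
        dists = np.linalg.norm(model_desc - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # accept only if the best match is clearly better than the runner-up
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches
```

In the thesis's setting, the stored descriptors would be CCH vectors attached to 3D points, so each accepted match yields one 3D-2D correspondence for the pose-estimation stage.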
English Abstract: In this thesis, we present a descriptor-based approach to augmented reality using a 3D background feature model (3DBFM). The 3DBFM contains the 3D positions of scene objects and the distributions of their image appearances. To describe image appearances, we use a recent descriptor, the contrast context histogram (CCH), which has been shown to achieve high matching accuracy at low computational cost. By matching image features against the features stored in the 3DBFM, we obtain 3D-2D correspondences. We then adopt an iterated closest point (ICP) based algorithm to estimate the camera pose. Given the estimated pose, new scene points that are not yet in the 3DBFM can be learned. Experiments show that our approach matches features under significant changes of illumination and scale. Even when long-term occlusion occurs, the system resumes normal operation as soon as features are matched again, without any additional penalty.
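An ICP-style pose estimator alternates correspondence search with a least-squares rigid alignment. The alignment core can be sketched with the SVD method of Arun, Huang, and Blostein [45]; note this is shown for 3D-3D point pairs as an illustration under stated assumptions, whereas the thesis estimates pose from 3D-2D correspondences.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q.

    P, Q: (N, 3) arrays of corresponding 3D points.
    Returns a 3x3 rotation R and translation t with R @ P[i] + t ~= Q[i],
    following the SVD method of Arun, Huang, and Blostein (1987).
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)   # centroids
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection (determinant -1) in the recovered rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Each ICP iteration would re-establish correspondences, re-run this fit, and repeat until the alignment converges.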
Thesis Committee Certification i
Acknowledgements ii
Chinese Abstract iii
English Abstract iv
Chapter 1 Introduction 1
Chapter 2 Background Knowledge 6
2.1. Camera Model 6
2.2 Invariant Local Features 9
2.3 Perspective N Point Problem 12
Chapter 3 Our Approach 14
3.1 3D Background Feature Model 14
3.2 Feature Extraction and Matching in 3DBFM 15
3.3 Camera Pose Estimation 17
3.4 Outlier Removal 21
3.5 Expanding 3DBFM 22
3.6 Updating 3DBFM 24
3.7 System Overview 25
Chapter 4 Experiments 27
Chapter 5 Conclusions 40
References 41
[1]http://www.jurassicpark.com/
[2]http://www.shinobithemovie.com/
[3]L. Rosenblum and M. Macedonia, “Tangible Augmented Interfaces for Structural Molecular Biology,” IEEE Computer Graphics and Applications, vol. 25, no. 2, pp. 13-17, 2005.
[4]H. Tamura, H. Yamamoto, and A. Katayama, “Mixed Reality: Future Dreams Seen at the Border between Real and Virtual Worlds,” IEEE Computer Graphics and Applications, vol. 21, no. 6, pp. 64-70, 2001.
[5]A.D. Cheok, K.H. Goh, W. Liu, F. Farzbiz, S.W. Fong, S.Z. Teo, Y. Li, and X. Yang, “Human Pacman: A Mobile Wide-Area Entertainment System Based on Physical, Social, and Ubiquitous Computing,” Personal and Ubiquitous Computing, vol. 8, no. 2, pp. 71-81, 2004.
[6]http://www.jp.playstation.com/scej/title/eoj/
[7]T.H.D. Nguyen, T.C.T. Qui, K. Xu, A.D. Cheok, S.L. Teo, Z.Y. Zhou, A. Mallawaarachchi, S.P. Lee, W. Liu, H.S. Teo, L.N. Thang, Y. Li, and H. Kato, “Real-Time 3D Human Capture System for Mixed-Reality Art and Entertainment,” IEEE Trans. Visualization and Computer Graphics, vol. 11, no. 6, pp. 706-721, 2005.
[8]C.-R. Huang, C.-S. Chen, and P.-C. Chung, “Tangible Photorealistic Virtual Museum,” IEEE Computer Graphics and Applications, vol. 25, no. 1, pp.15-17, 2005.
[9]M.C. Juan, M Alcaniz, C. Monserrat, C. Botella, R.M. Banos, and B. Guerrero, “Using Augmented Reality to Treat Phobias,” IEEE Computer Graphics and Applications, vol. 25, no. 6, pp. 31-37, 2005.
[10]D. Balazs, and E. Attila, “Volumetric Medical Intervention Aiding Augmented Reality Device,” Information and Communication Technologies, vol. 1, pp. 1091-1096, 2006.
[11]Y.-P. Hung, C.-S. Chen, Y.-P. Tsai, and S.-W. Lin, “Augmenting Panoramas with Object Movies by Generating Novel Views with Disparity-Based View Morphing,” J. Visualization and Computer Animation, vol. 13, no. 4, pp. 237-247, 2002.
[12]W. Hoff and T. Vincent, “Analysis of head pose accuracy in augmented reality,” IEEE Trans. Visualization and Computer Graphics, vol. 6, no. 4, pp. 319-334, 2000.
[13]S. You, U. Neumann, and R. Azuma, “Orientation tracking for outdoor augmented reality registration,” IEEE Computer Graphics and Applications, vol. 19, no. 6, pp. 36-42, 1999.
[14]Z. Zhang, “A Flexible New Technique for Camera Calibration,” IEEE trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, 2000.
[15]H. Kato and M. Billinghurst, “Marker Tracking and HMD Calibration for a Video-based Augmented Reality Conferencing System,” Proc. IEEE and ACM International Workshop on Augmented Reality, pp. 85-94, 1999.
[16]M. Billinghurst, A. Cheok, S Prince, and H. Kato, “Real world teleconferencing,” IEEE Computer Graphics and Applications, vol. 22, no. 6, pp.11-13, 2002.
[17]J. Fruend, M. Grafe, C. Matysczok, and A. Vienenkoetter, “AR-based training and support of assembly workers in automobile industry,” Proc. The First IEEE International Augmented Reality Toolkit Workshop, 2002.
[18]J. Gausenmeier, C. Matysczok, and R. Radkowski, “AR-based Modular Construction System for Automobile Advance Development,” Proc. IEEE International Augmented Reality Toolkit Workshop, pp.72-73, 2003.
[19]J.M.S Dias, N. Barata, P. Santos, A. Correia, P. Nande, and R. Bastos, “In your hand computing: tangible interfaces for mixed reality,” Proc. IEEE International Augmented Reality Toolkit Workshop, pp.29-31, 2003.
[20]H. Kato, K. Tachibana, M. Tanabe, T. Nakajima, and Y. Fukuda, “MagicCup: a tangible interface for virtual objects manipulation in table-top augmented reality,” Proc. IEEE International Augmented Reality Toolkit Workshop, pp. 75-76, 2003.
[21]A.J. Davison, “Real-Time Simultaneous Localisation and Mapping with a Single Camera,” Proc. IEEE International Conference on Computer Vision, vol. 2 pp. 1403-1410, 2003.
[22]A.J. Davison, I.D. Reid, N.D. Molton, and O. Stasse, “MonoSLAM: Real-Time Single Camera SLAM,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052-1067, 2007.
[23]M.L. Yuan, S.K. Ong, and A.Y.C. Nee, “Registration Using Natural Features for Augmented Reality Systems,” IEEE Trans. Visualization and Computer Graphics, vol. 12, no. 4, pp. 569-580, 2006.
[24] D.G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[25] I. Gordon and D.G. Lowe, “Scene Modelling, Recognition and Tracking with Invariant Image Features,” IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 110-119, 2004.
[26]C.-R. Huang, C.-S. Chen and P.-C. Chung, “Contrast Context Histogram – A Discriminating Local Descriptor for Image Matching,” International Conference on Pattern Recognition, vol. 4, pp. 53–56, 2006.
[27] Z. Zhang, R. Deriche, O. Faugeras, and Q. Luong, “A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry,” Artificial Intelligence, vol. 78, pp. 87–119, 1995.
[28]C. Harris and M. Stephens, “A Combined Corner and Edge Detector,” Proc. The Fourth Alvey Vision Conference, pp. 147–151, 1988.
[29] C. Schmid and R. Mohr, “Local Grayvalue Invariants for Image Retrieval,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 5, pp. 530–534, 1997.
[30]K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615–1630, 2005.
[31] Y. Ke and R. Sukthankar, “PCA-SIFT: A More Distinctive Representation for Local Image Descriptors,” Proc. Computer Vision and Pattern Recognition, vol. 2, pp. 506–513, 2004.
[32]D. Comaniciu, V. Ramesh, and P. Meer, “Real-time Tracking of Non-rigid Objects Using Mean Shift,” Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 142–151, 2000.
[33]S. Belongie, J. Malik, and J. Puzicha, “Shape matching and object recognition using shape contexts,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 509–522, 2002
[34]D.F. DeMenthon and L.S. Davis, “Model-Based Object Pose in 25 Lines of Code,” International Journal of Computer Vision, vol. 15, pp. 123-141, 1995.
[35]M.A. Fischler and R.C. Bolles, “Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography,” Comm. ACM, vol. 24, pp. 381-395, 1981.
[36]R.M. Haralick et al., “Analysis and Solutions of the Three Point Perspective Pose Estimation Problem,” Proc. IEEE International Conference on Computer Vision and Pattern Recognition, pp. 592-598, 1991.
[37]R. Horaud, B. Conio, and O. Leboulleux, “An Analytic Solution for the Perspective 4-Point Problem,” Computer Vision, Graphics, and Image Understanding, no. 1, pp. 33-44, 1989.
[38]D.G. Lowe, “Robust Model-Based Motion Tracking through the Integration of Search and Estimation,” International Journal of Computer Vision, vol. 8, no. 2, pp. 113-122, 1992.
[39]J.S.C. Yuan, “A General Photogrammetric Method for Determining Object Position and Orientation,” IEEE Trans. Robotics and Automation, vol. 5, pp. 129-142, 1989.
[40]C.-S. Chen, and W.-Y. Chang, “On Pose Recovery for Generalized Visual Sensors,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 7, 2004.
[41]R. Gupta and R. Hartley, “Linear Pushbroom Cameras,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 9, pp. 963-975, Sept. 1997.
[42]F. Huang, S.K. Wei, and R. Klette, “Geometrical Fundamentals of Polycentric Panoramas,” Proc. IEEE International Conference on Computer Vision, vol. 1, pp 560-565, July 2001.
[43]K. Mikolajczyk and C. Schmid, “Indexing based on scale invariant interest points,” Proc. International Conference on Computer Vision, pp. 525–531, 2001
[44]B.K.P. Horn, “Closed-Form Solution of Absolute Orientation Using Unit Quaternions,” J. Optical Soc. Am., A, vol. 4, pp. 629-642, 1987.
[45]K.S. Arun, T.S. Huang, and S.D. Blostein, “Least-Squares Fitting of Two 3-D Point Sets,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 9, no. 5, pp. 698-700, 1987.
[46]B.K.P. Horn, H.M. Hilden, and S. Negahdaripour, “Closed-form Solution of Absolute Orientation Using Orthonormal Matrices,” J. Optical Soc. Am., A, vol. 5, no. 7, pp. 1127-1135, 1988
[47]W.M. Walker, L. Shao, and R.A. Volz, “Estimating 3-D Location Parameters Using Dual Number Quaternions,” CVGIP: Image Understanding, vol. 54, no. 4, pp. 358-367, 1991.
[48]A. Lorusso, D.W. Eggert, and R.B. Fisher, “A Comparison of Four Algorithms for Estimating 3-D Rigid Transformations,” Proc. Sixth British Machine Vision Conf., pp. 237-246, 1995.
[49]R.M. Haralick and L.G. Shapiro, Computer and Robot Vision, vol. 2, Addison-Wesley, 1993, chapter 17, pp. 227-229.