Author: 余國華 (Kuo Hua Yu)
Thesis title: 虛擬物品置入真實場景的擴充實境技術
Thesis title (English): Augmented Reality for Embedding Virtual Objects in a Real Video Sequence
Advisor: 陳稔 (Zen Chen)
Degree: Master's
University: 國立交通大學 (National Chiao Tung University)
Department: 資訊工程系 (Computer Science and Information Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 1999
Graduating academic year: 87 (1998-1999)
Language: Chinese
Number of pages: 57
Keywords (Chinese): 擴充實境、虛擬實境、相機校正、電腦視覺、投影幾何、遮蔽問題
Keywords (English): Augmented Reality, Virtual Reality, Camera Calibration, Computer Vision, Projective Geometry, Occlusion Problem
Usage statistics:
  • Cited by: 4
  • Views: 390
  • Downloads: 0
  • Bookmarked: 3
In recent years, research on augmented reality has begun to attract considerable attention, and related applications have appeared one after another. However, most published work requires the camera parameters to be known, that is, a camera calibration procedure, and this requirement severely limits the range of applications of augmented reality. In this thesis we therefore design a method that places a virtual geometric object into a continuously captured image sequence in a semi-automatic way, without knowing the parameters of the camera that took the images.
Our method uses projective geometric information, such as the fundamental matrix and the epipolar geometry, recovered from point correspondences between the images, to derive the pairwise relations between images and hence the projection that the virtual object should exhibit in each frame. Specifically, we compute for every image a projection matrix defined in a common projective space, and use these projection matrices for the subsequent steps of scene reconstruction, virtual object placement, and occlusion handling.
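As an illustration of the first step, the sketch below estimates a fundamental matrix from matched points with the normalized eight-point algorithm (NumPy). It is a minimal illustration rather than the implementation used in the thesis; in practice it would be wrapped in a robust estimator to reject bad matches.

```python
import numpy as np

def normalize_points(pts):
    # Shift the centroid to the origin and scale so that the mean distance
    # from the origin is sqrt(2) (standard conditioning for the 8-point alg.).
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2) / np.sqrt(((pts - centroid) ** 2).sum(axis=1)).mean()
    T = np.array([[scale, 0, -scale * centroid[0]],
                  [0, scale, -scale * centroid[1]],
                  [0, 0, 1.0]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ pts_h.T).T, T

def fundamental_matrix(pts1, pts2):
    # Normalized eight-point algorithm; pts1, pts2 are Nx2 matched points
    # (N >= 8) such that x2^T F x1 = 0 for corresponding x1, x2.
    x1, T1 = normalize_points(np.asarray(pts1, float))
    x2, T2 = normalize_points(np.asarray(pts2, float))
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)            # least-squares solution of A f = 0
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)            # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                      # undo the normalization
    return F / np.linalg.norm(F)
```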
Before the system runs, the virtual object must first be placed manually into the reference images. Traditional approaches usually require the user to position the object in two images so that its 3D location can be fixed. In our method, the user only adjusts the parameters of a virtual camera to place the virtual object in the first reference image; the system then automatically renders the corresponding view of the object in the second image. The user checks whether the result looks reasonable and, if necessary, keeps adjusting the camera parameters in the first image until satisfied. Once the placement is finished, the relation between the virtual object and the real scene is determined. For occlusion, we also provide an occlusion-handling algorithm that automatically resolves part of the occlusion problem, so the composited images look more realistic. A projection sketch of the placement step is given below.
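Once the placement is fixed, rendering the object in any frame reduces to applying the chosen placement transform to the object model and projecting it with that frame's projection matrix. The helper below is a hypothetical sketch of that projection step; the names and the 4x4 placement transform T are illustrative, not the thesis's actual code.

```python
import numpy as np

def render_virtual_points(P, T, vertices):
    """Project virtual-object vertices into one frame.

    P        : 3x4 projection matrix recovered for that frame
    T        : 4x4 transform placing the object model in the scene space
               (chosen interactively in the first reference image)
    vertices : Nx3 array of object-model vertices
    Returns Nx2 pixel coordinates.
    """
    V = np.column_stack([vertices, np.ones(len(vertices))])  # homogeneous
    x = (P @ T @ V.T).T                                      # Nx3 image points
    return x[:, :2] / x[:, 2:3]                              # dehomogenize
```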
The proposed augmented reality approach is simple to operate and keeps the amount of manual intervention to a minimum.
Augmented reality has received a great deal of attention in recent years and is a thriving research field with a wide range of applications. In most current research, however, information about the real camera must be acquired; in other words, a camera calibration process is required, which undoubtedly limits the scope of applications. In this thesis we therefore propose a semi-automatic way to combine a virtual object with an image sequence taken by an uncalibrated camera.
We derive the relation between two images through the fundamental matrix, which is estimated from point correspondences between them. From it we obtain a projection matrix for each image, defined in a common projective space. Using these projection matrices we can reconstruct the real scene, place the virtual object in it, and resolve the occlusion between the virtual object and the real objects.
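For reference, a standard way to obtain a pair of projection matrices in a common projective space from the fundamental matrix, and to triangulate scene points with them, looks roughly as follows. This is a generic NumPy sketch, not necessarily the exact construction used in the thesis.

```python
import numpy as np

def skew(v):
    # Cross-product matrix [v]_x such that skew(v) @ u == np.cross(v, u).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def projective_camera_pair(F):
    # Canonical pair P1 = [I | 0], P2 = [[e']_x F | e'], defined up to a
    # common projective transformation of space; e' is the epipole in the
    # second image, i.e. the null vector of F^T.
    _, _, Vt = np.linalg.svd(F.T)
    e2 = Vt[-1]
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([skew(e2) @ F, e2.reshape(3, 1)])
    return P1, P2

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation of one correspondence (x1, x2 in pixels);
    # the result is a homogeneous 3D point in the same projective space.
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]
```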
To place a virtual object into a video sequence, we need to know the relation between the real scene and the virtual object. Typically this requires the user to specify the pose of the virtual object in two basic images, after which its 3D position in the real scene is determined. Instead, we use a virtual camera to render the virtual object in the first basic image; the user applies the proposed object placement constraints to decide the relation between the real scene and the virtual object, and the object is then rendered automatically in the second image. The user evaluates the quality of the placement and, if necessary, iteratively modifies the virtual camera projection matrix in the first image. We also provide an algorithm that automatically resolves the occlusion between the virtual object and the real objects, sketched below.
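Occlusion handling ultimately amounts to deciding, per pixel, whether the real surface or the virtual surface is closer to the camera. The snippet below is a simplified depth-test compositor under the assumption that a depth estimate of the real scene is available for every pixel covered by the virtual object; the thesis's own algorithm instead works with segmented regions and depths computed from matched feature points.

```python
import numpy as np

def composite_with_occlusion(real_frame, virtual_rgb, virtual_depth, scene_depth):
    """Composite a rendered virtual object over a real frame.

    real_frame    : HxWx3 real image
    virtual_rgb   : HxWx3 rendering of the virtual object
    virtual_depth : HxW depth of the virtual object (np.inf where absent)
    scene_depth   : HxW estimated depth of the real scene
    A real pixel occludes the virtual object wherever the real surface
    is closer to the camera than the virtual surface.
    """
    virtual_visible = virtual_depth < scene_depth
    out = real_frame.copy()
    out[virtual_visible] = virtual_rgb[virtual_visible]
    return out
```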
Our augmented reality system is simple to operate, and the amount of manual work required from the user is kept to a minimum.
Chapter 1 Introduction
1.1 Motivation and Goals
1.2 Related Work
1.3 Overview of the Research Procedure
1.4 Thesis Organization
Chapter 2 Computer Vision Positioning Techniques
2.1 Computer Vision Positioning Theory
2.1.1 Camera Model
2.1.2 Traditional Camera Calibration
2.2 Projective Geometry Theory
2.2.1 Projective Geometric Relations between Images
2.2.2 Fundamental Matrix
2.2.3 Computation of the Epipole
2.2.4 Computation of the Relative Projection Matrices of Two Images
2.2.5 Maintaining Spatial Consistency across Multiple Images
2.3 Augmented Reality Processing Procedure
Chapter 3 Virtual Object Placement
3.1 Discussion of Object Placement
3.1.1 Description of the Object Placement Problem
3.1.2 Analysis of Object Placement Methods
3.1.3 Basic Assumptions for Object Placement
3.2 Workflow of the Object Placement Method
3.3 Theoretical Derivation of Object Placement
3.3.1 Computation of the Projection Matrix H
3.3.2 Computation of the Second Image's Projection Matrix in Euclidean Space
3.3.3 Computation of the Spatial Transformation Matrix T
Chapter 4 Virtual Object Projection Rendering and Occlusion
4.1 Projection Rendering of Virtual Objects
4.2 Occlusion Problem Analysis and Basic Assumptions
4.3 Occlusion Algorithm
4.4 Region Segmentation Algorithm
4.5 Feature Point Extraction
4.6 Automatic Feature Point Correspondence Algorithm
Chapter 5 Augmented Reality System Implementation and Results
5.1 Acquisition of the Processed Images and the System Execution Environment
5.2 Experimental Results and Analysis of Spatial Positions
5.2.1 Projective Geometry Computation Data
5.2.2 Verification of the Fundamental Matrix
5.2.3 Verification of the Correctness of Projected Spatial Positions
5.2.4 Projection and Transformation Matrix Data for the Basic Images
5.3 Experimental Results and Data for the Occlusion Problem
5.3.1 Processed Images and Final Results
5.3.2 Region Segmentation Results
5.3.3 Feature Point Extraction Results
5.3.4 Results of Correlation-Based Point Matching and Depth Computation
5.4 System Operation and Demonstration of Results
Chapter 6 Conclusions and Future Work
6.1 Conclusions
6.2 Future Work