臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: 古全鈞
Author (English): Chuan-Chun Ku
Thesis Title: 運用移動向量之影像深度偵測演算法
Thesis Title (English): Depth Detection Algorithm using Motion Vector
Advisor: 龔志賢
Advisor (English): C. H. Kung
Committee Members: 楊中平, 龔志賢, 龔志銘
Committee Members (English): C. P. Young, C. H. Kung, C. M. Kung
Oral Defense Date: 2012-01-18
Degree: Master's
Institution: 長榮大學 (Chang Jung Christian University)
Department: 資訊管理學系碩士班 (Master's Program, Department of Information Management)
Discipline: Computer Science (電算機學門)
Academic Field: General Computer Science (電算機一般學類)
Thesis Type: Academic thesis
Publication Year: 2012
Graduation Academic Year: 100
Language: Chinese
Number of Pages: 59
Keywords (Chinese): 立體視覺, 三維重建, 機器人導航
Keywords (English): stereo vision, three-dimensional image reconstruction, robot navigation
Usage statistics:
  • Cited by: 0
  • Views: 269
  • Downloads: 37
  • Bookmarked: 0
Stereo vision has long been one of the important research areas in computer vision. In recent years, with the rapid advance of technology, stereo vision techniques have been widely applied in many fields, such as real-time image tracking, robot navigation, and virtual reality. This thesis uses a computer as the platform, equipped with two cameras that capture images in real time, and applies the proposed motion-vector-based depth detection algorithm to compute image depth, constructing a real-time stereo image depth computation system.
Because the images captured by the cameras are encoded in formats such as MPEG-1, MPEG-2, and MPEG-4, all of which use motion estimation and motion compensation, every frame carries motion vector information. The proposed method first processes the first image to obtain the required motion vectors, then compares them with the motion vectors of the second image to compute the image depth. Since the proposed algorithm only needs to compute and compare motion vectors, it can obtain image depth quickly and efficiently, and can support faster three-dimensional image reconstruction, 2D-to-3D image conversion, and robot navigation in the future.

This thesis describes the design and construction of an innovative stereo vision system that uses motion vectors. The images captured by the cameras are encoded in formats such as MPEG-1, MPEG-2, and MPEG-4, all of which employ motion estimation and motion compensation, so every frame already carries motion vector information. Two stereo images are captured simultaneously, and with the proposed algorithm the disparities and depths of objects in the scene can be obtained. Because the two cameras photograph the same object from different viewpoints, a given scene point appears at a different position in each image; the proposed algorithm finds these corresponding points. The proposed scheme uses the motion vectors to recover the actual coordinates of those positions. Based on the data of the previous and current images, the scheme computes image depth effectively in less time. In the future, it can be employed for three-dimensional image reconstruction, 2D-to-3D image conversion, and robot navigation.
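
To make the idea above concrete, the following is a minimal illustrative sketch in Python, not the thesis's actual implementation: it uses a brute-force sum-of-absolute-differences block search between the two rectified camera images as a stand-in for the motion vectors an MPEG encoder would already supply, and then converts the per-block disparity to depth with the standard triangulation relation Z = f·B/d. The function names and the focal_px and baseline calibration parameters are hypothetical placeholders.

import numpy as np

def block_disparity(left, right, block=8, max_disp=32):
    """Per-block horizontal disparity via exhaustive SAD block matching.

    left, right: 2-D grayscale arrays of the same shape (a rectified stereo pair).
    """
    rows, cols = left.shape[0] // block, left.shape[1] // block
    disp = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            y, x = r * block, c * block
            ref = left[y:y + block, x:x + block].astype(np.float64)
            best_d, best_err = 0, np.inf
            for d in range(min(max_disp, x) + 1):   # search along the horizontal epipolar line
                cand = right[y:y + block, x - d:x - d + block].astype(np.float64)
                err = np.abs(ref - cand).sum()      # sum of absolute differences
                if err < best_err:
                    best_err, best_d = err, d
            disp[r, c] = best_d
    return disp

def disparity_to_depth(disp, focal_px, baseline):
    """Triangulate depth from disparity: Z = f * B / d (units follow the baseline)."""
    safe = np.where(disp > 0, disp, np.nan)         # depth is undefined where disparity is zero
    return focal_px * baseline / safe

A pipeline closer to the thesis would read the motion vectors directly from the MPEG-1/2/4 bitstream instead of re-estimating them, which is where the claimed speed advantage comes from.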
Abstract (Chinese) I
Abstract II
Chapter 1 Introduction 1
Chapter 2 Related Work and Applications 2
2.1 Stereo Vision Systems 2
2.2 Single-Camera Stereo Vision 5
2.3 Multi-Camera Stereo Vision 9
2.4 Camera Perspective Projection 11
2.5 Motion Estimation and Compensation Algorithms 13
2.6 Stereo Matching 15
Chapter 3 Research Method and Content 19
3.1 Architecture of the Research Method 19
3.2 Depth Detection Algorithm Using Motion Vectors 20
Chapter 4 Empirical Analysis 25
4.1 Implementation of the Algorithm 25
4.2 System Configuration 35
4.3 Analysis of Verification Data 36
Chapter 5 Conclusions and Recommendations 45
References 46
Appendix 50

