National Digital Library of Theses and Dissertations in Taiwan
Detailed Record

Author: 蔡君龍
Author (English): Chun-Lung Tsai
Title: 利用單一相機視角平行且共平面轉動之多重基準線立體視覺技術
Title (English): Multiple Baseline Stereoscopic Visual Technique by Use of Singular Camera with View Angle of Horizontal and Rotational Plane
Advisor: 林惠勇
Advisor (English): Huei-Yung Lin
Committee: 賴尚宏、陳祝嵩、張勤振、林惠勇
Committee (English): Shang-Hong Lai, Chu-Song Chen, Chin-Chen Chang, Huei-Yung Lin
Defense Date: 2013-07-26
Degree: Master's
Institution: 國立中正大學 (National Chung Cheng University)
Department: 電機工程研究所 (Graduate Institute of Electrical Engineering)
Discipline: Engineering
Field: Electrical and Information Engineering
Thesis Type: Academic thesis
Year of Publication: 2013
Graduation Academic Year: 101 (2012-2013)
Language: Chinese
Pages: 92
Keywords (Chinese): 基於影像特徵之對應點匹配、多重基準線立體視覺、三維資訊量測
Keywords (English): Feature-Based Correspondence Algorithms, Multiple Baseline Stereo, 3D Information Measuring
Statistics:
  • Cited by: 0
  • Views: 603
  • Downloads: 13
  • Bookmarked: 0
This thesis proposes a 3D information measurement system based on a single camera, achieving measurement accuracy comparable to that of multiple cameras while reducing the hardware cost of a multi-camera setup. Relying on the principle that an object remains invariant in world space, the system uses a rotary platform to drive the single camera through a full revolution, capturing stereo images from multiple horizontal view angles in place of the single-view stereo pair used by a conventional stereo vision system. The captured multi-view stereo images are then rectified, corresponding points are computed across them, and the redundancy of multiple views is exploited to reject and refine erroneous correspondences. In addition, image segmentation is introduced to partition each image into regions, and the 3D information of regions lacking correspondences is reinforced. Finally, the sum of squared differences is used to evaluate correspondence similarity and, combined with the camera parameters and a weight function based on the relative camera positions, the depth values from the multiple views are integrated and refined. The system thereby achieves 3D information measurement with a single camera in place of multiple cameras, and recovers 3D information over a wider target environment than a traditional single-view stereo vision system.
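The matching step the abstract describes, sum-of-squared-differences scores accumulated across several baselines, can be sketched minimally as follows. This is a generic illustration in the spirit of multiple-baseline stereo [19], not the thesis's implementation: the 1-D rectified image rows, the function names, and the disparity relation d = f·b·z over candidate inverse depths z are assumptions made for the sketch.

```python
import numpy as np

def ssd(patch_a, patch_b):
    """Sum of squared differences between two equally sized patches."""
    d = np.asarray(patch_a, dtype=np.float64) - np.asarray(patch_b, dtype=np.float64)
    return float(np.sum(d * d))

def multi_baseline_cost(ref_patch, views, baselines, focal, inv_depths, x):
    """For each candidate inverse depth, sum the SSD over all baselines.

    Each view is a 1-D rectified image row; for baseline b, a point at
    inverse depth z appears shifted by the disparity d = focal * b * z.
    Summing costs in inverse-depth space keeps the minima of all
    baselines aligned at the true depth."""
    w = len(ref_patch)
    costs = []
    for z in inv_depths:
        total = 0.0
        for row, b in zip(views, baselines):
            d = int(round(focal * b * z))       # disparity for this baseline
            total += ssd(ref_patch, row[x + d : x + d + w])
        costs.append(total)
    return costs
```

The point of the summation is that short baselines give a broad, unambiguous cost minimum while long baselines give a sharp but ambiguous one; adding the curves combines disambiguation with precision.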
The thesis proposes a 3D information measurement system that achieves precise measurement results comparable to those obtained from multiple cameras. Based on the spatial invariance of an object in world space, the system uses a rotary platform to drive a single camera along a circular path, generating stereo images from multiple horizontal view angles in place of the single-view stereo pair of an ordinary stereo vision system. The captured multi-view stereo images are rectified, corresponding points are computed across them, and the advantages of multiple views are exploited to remove and refine inaccurate correspondences. Finally, the sum of squared differences is used to analyze the similarity of corresponding points and, combined with the camera parameters and a weight function based on relative camera position, the depth values of the multi-view images are integrated and optimized. The result is 3D information measurement with a single camera that covers a wider target environment than a traditional single-view stereo vision system.
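The final integration step described above, triangulating a depth per baseline and blending the estimates with a weight tied to camera position, might look like the sketch below. The baseline-proportional weight is a generic stand-in chosen for illustration; the thesis's exact relative-position weight function is not reproduced here.

```python
def depth_from_disparity(focal, baseline, disparity):
    """Standard rectified-stereo triangulation: Z = f * B / d."""
    return focal * baseline / disparity

def fuse_depths(depths, baselines):
    """Blend per-baseline depth estimates with baseline-proportional
    weights: a fixed pixel-level disparity error causes a smaller depth
    error on a longer baseline, so longer baselines earn more trust.
    (A generic heuristic standing in for the thesis's weight function.)"""
    total = sum(baselines)
    return sum((b / total) * z for z, b in zip(depths, baselines))
```

With perfect disparities every view triangulates the same depth and the fusion is exact; with a noisy short-baseline estimate, the weighting pulls the fused depth toward the more reliable long-baseline views.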
Abstract (Chinese)
Abstract (English)
Acknowledgements
List of Figures
List of Tables
Chinese-English Terminology
1 Introduction
1.1 Motivation
1.2 Thesis Organization
2 Related Work
2.1 Stereo Image Matching
2.1.1 Correlation-Based Correspondence Methods
2.1.2 Feature-Based Correspondence Methods
2.2 Multiple-Baseline Stereo Vision
3 Fundamentals and Applications
3.1 Stereo Vision Theory
3.1.1 Stereo Vision
3.1.2 Epipolar Geometry
3.1.3 Image Rectification
3.2 Image Feature Point Description
3.2.1 Speeded-Up Robust Features (SURF)
4 Multiple-Baseline Depth Information Measurement
4.1 System Overview
4.2 System Platform
4.3 Rotation-Based Multiple Horizontal-View Stereo Image Rectification
4.3.1 Rectification Preprocessing
4.3.2 Calibration-Board Correspondence Detection
4.3.3 Stereo Image Rectification
4.4 Correspondence Extraction and Matching
4.4.1 Image Brightness Correction
4.4.2 Feature Point Extraction and Matching
4.4.3 Matched Feature Point Filtering
4.5 Multiple-Baseline Correspondence Refinement
4.5.1 Correspondence Coordinate Transformation
4.5.2 Correspondence Intersection
4.5.3 Correspondence Refinement
4.6 3D Information Reinforcement
4.6.1 Image Segmentation
4.6.2 Regional Information Reinforcement
4.7 Depth Estimation and Integration
4.7.1 Multiple-Baseline Depth Estimation
4.7.2 Depth Refinement
4.7.3 Weight Integration
5 Experimental Validation and Analysis
5.1 Experimental Equipment
5.1.1 Hardware Specifications
5.1.2 Software Development Environment
5.2 Experimental Setup
5.3 Experimental Results
5.3.1 Experiment 1
5.3.2 Experiment 2
5.3.3 Experiment 3
6 Conclusions and Future Work
References
[1] F. Blais, M. Picard, and G. Godin, “Accurate 3d acquisition of freely moving objects,” in Proceedings of the 2nd International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT), pp. 422–429, 2004.

[2] Q. Chen and T. Wada, “A light modulation/demodulation method for realtime 3d imaging,” in Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM), pp. 15–21, IEEE, 2005.

[3] B. Curless, “From range scans to 3d models,” ACM SIGGRAPH Computer Graphics, vol. 33, no. 4, pp. 38–41, 1999.

[4] J. P. Lavelle, S. R. Schuet, and D. J. Schuet, “High-speed 3d scanner with realtime 3d processing,” in Photonics Technologies for Robotics, Automation, and Manufacturing, pp. 179–188, International Society for Optics and Photonics, 2004.

[5] K. Ikeuchi, “Modeling from reality,” in Proceedings. Third International Conference on 3-D Digital Imaging and Modeling, pp. 117–124, IEEE, 2001.

[6] C. Loop and Z. Zhang, “Computing rectifying homographies for stereo vision,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, IEEE, 1999.

[7] Z. Zhang, “Determining the epipolar geometry and its uncertainty: A review,” International journal of computer vision, vol. 27, no. 2, pp. 161–195, 1998.

[8] Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000.

[9] A. Fusiello and L. Irsara, “Quasi-euclidean uncalibrated epipolar rectification,” in 19th International Conference on Pattern Recognition, pp. 1–4, IEEE, 2008.

[10] D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” International journal of computer vision, vol. 47, no. 1-3, pp. 7–42, 2002.

[11] A. F. Bobick and S. S. Intille, “Large occlusion stereo,” International Journal of Computer Vision, vol. 33, no. 3, pp. 181–200, 1999.

[12] O. Veksler, “Stereo correspondence by dynamic programming on a tree,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 384–390, IEEE, 2005.

[13] W. T. Freeman, E. C. Pasztor, and O. T. Carmichael, “Learning low-level vision,” International journal of computer vision, vol. 40, no. 1, pp. 25–47, 2000.

[14] J. Sun, N.-N. Zheng, and H.-Y. Shum, “Stereo matching using belief propagation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 7, pp. 787–800, 2003.

[15] Q. Yang, L. Wang, and N. Ahuja, “A constant-space belief propagation algorithm for stereo matching,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1458–1465, IEEE, 2010.

[16] Q. Yang, L. Wang, R. Yang, H. Stewénius, and D. Nistér, “Stereo matching with color-weighted correlation, hierarchical belief propagation, and occlusion handling,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 3, pp. 492–504, 2009.

[17] Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 11, pp. 1222–1239, 2001.

[18] E. Vincent and R. Laganiere, “Matching feature points in stereo pairs: A comparative study of some matching strategies,” Machine Graphics and Vision, vol. 10, no. 3, pp. 237–260, 2001.

[19] T. Kanade, M. Okutomi, and T. Nakahara, “A multiple-baseline stereo method,” in Proc. ARPA Image Understanding Workshop, pp. 409–426, 1992.

[20] J. Jeon, K. Kim, C. Kim, and Y.-S. Ho, “A robust stereo-matching algorithm using multiple-baseline cameras,” in IEEE Pacific Rim Conference on Communications, Computers and signal Processing, vol. 1, pp. 263–266, IEEE, 2001.

[21] F. Dornaika and R. Chung, “Stereo correspondence from motion correspondence,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, IEEE, 1999.

[22] P.-K. Ho and R. Chung, “Stereo-motion with stereo and motion in complement,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 2, pp. 215–220, 2000.

[23] C. Strecha and L. Van Gool, “Motion—stereo integration for depth estimation,” in Computer Vision—ECCV 2002, pp. 170–185, Springer, 2002.

[24] J. S. Ku, K. M. Lee, and S. U. Lee, “Multi-image matching for a general motion stereo camera model,” in IEEE International Conference on Image Processing, vol. 2, pp. 608–612, 1998.

[25] P. Schaeren, B. Schneuwly, and W. Guggenbuehl, “Three-dimensional scene acquisition by motion-induced stereo,” in Robotics-DL tentative, pp. 356–365, International Society for Optics and Photonics, 1992.

[26] R. I. Hartley, “Theory and practice of projective rectification,” International Journal of Computer Vision, vol. 35, no. 2, pp. 115–127, 1999.

[27] J.-Y. Bouguet, “Camera calibration toolbox for matlab,” 2004.

[28] R. Tsai, “A versatile camera calibration technique for high-accuracy 3d machine vision metrology using off-the-shelf tv cameras and lenses,” IEEE Journal of Robotics and Automation.

[29] R. I. Hartley, “In defense of the eight-point algorithm,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 6, pp. 580–593, 1997.

[30] H. P. Moravec, “Towards automatic visual obstacle avoidance,” in Proceedings of the 5th International Joint Conference on Artificial Intelligence, pp. 584–584, 1977.

[31] C. Harris and M. Stephens, “A combined corner and edge detector,” in Alvey vision conference, vol. 15, p. 50, Manchester, UK, 1988.

[32] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International journal of computer vision, vol. 60, no. 2, pp. 91–110, 2004.

[33] H. Bay, T. Tuytelaars, and L. Van Gool, “Surf: Speeded up robust features,” in Computer Vision–ECCV, pp. 404–417, Springer, 2006.

[34] L. Juan and O. Gwun, “A comparison of sift, pca-sift and surf,” International Journal of Image Processing (IJIP), vol. 3, no. 4, pp. 143–152, 2009.

[35] Y. Ke and R. Sukthankar, “Pca-sift: A more distinctive representation for local image descriptors,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II–506, IEEE, 2004.

[36] M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.

[37] J. Shi and J. Malik, “Normalized cuts and image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888–905, 2000.