National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 呂俊賢
Author (English): Chun Hsien Lu
Title: 雙攝影機搭配魚眼鏡頭進行大場景3D重建之研究
Title (English): Study of Binocular Cameras with Fish-eye Lens for Reconstructing 3D Large Scenes
Advisor: 蕭瑛星
Advisor (English): Ying Shing Shiao
Degree: Master's
Institution: National Changhua University of Education
Department: Department of Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2011
Graduation academic year: 99 (2010-2011)
Language: Chinese
Number of pages: 76
Keywords (Chinese): 魚眼鏡頭 (fisheye lens)
Usage statistics:
  • Cited: 0
  • Views: 741
  • Downloads: 0
  • Bookmarked: 0
This thesis uses binocular stereo vision together with a fisheye lens to reconstruct a large 3D scene, and examines how successfully indoor and outdoor scenes can be reconstructed. Using a binocular camera pair whose epipolar lines have been rectified to be horizontal, image grayscale gradients serve as the features for local matching, from which corresponding points are found and the scene's disparity map is computed. A fisheye image of the scene is then compared with the image from one of the binocular cameras by color template matching to locate the corresponding region, and the disparity map built by binocular stereo vision is pasted onto that region of the fisheye image. After the disparity maps covering the fisheye scene image have been collected, the 3D plotting functions of LabVIEW are used to draw a 3D model of the scene. Experimental results show reconstruction rates of 15% and 37% for the indoor scenes and 28% and 34% for the outdoor scenes. These results provide preliminary verification that two cameras with normal lenses, together with one camera equipped with a fisheye lens, can accomplish 3D reconstruction of a large scene.
ABSTRACT
This thesis proposes a method that combines a pair of binocular cameras with normal
lenses and one camera with a fisheye lens to reconstruct large 3D scenes, and examines
how successfully indoor and outdoor scenes can be reconstructed. The binocular cameras
are rectified so that their epipolar lines are horizontal, and grayscale gradient
features are used for local matching to find corresponding points, from which the scene
disparity map is computed. Color template matching is then used to locate the region of
the fisheye image that corresponds to the image from one of the binocular cameras, and
the binocular disparity map is copied onto that region of the fisheye image. After the
disparity map of the whole scene has been built, the 3D plotting functions of LabVIEW
are used to render the large 3D scene. The experimental results show reconstruction
rates of 15% and 37% for the two indoor scenes and 28% and 34% for the two outdoor
scenes. These results verify that two normal-lens cameras together with one
fisheye-lens camera can accomplish 3D reconstruction of a large scene.
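
As a rough illustration of the pipeline described in the abstracts, the sketch below reproduces its three processing steps with OpenCV in Python: disparity from the rectified binocular pair, color template matching against the fisheye image, and pasting the binocular disparity into a whole-scene disparity map. The thesis implements these steps in LabVIEW with its own grayscale-gradient local matching and 3D plotting, so the functions, file names, and parameters here are illustrative assumptions rather than the author's implementation.

# Minimal sketch (assumed OpenCV/NumPy environment), not the thesis's LabVIEW code.
import cv2
import numpy as np

# Rectified left/right images from the normal-lens pair and a fisheye image
# of the whole scene; the file names are placeholders.
left = cv2.imread("left.png")
right = cv2.imread("right.png")
fisheye = cv2.imread("fisheye.png")

# 1) Local matching along horizontal epipolar lines -> disparity map.
#    The thesis matches grayscale gradients; a semi-global block matcher
#    stands in for that step here.
gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disparity = matcher.compute(gray_l, gray_r).astype(np.float32) / 16.0

# 2) Color template matching: find where the left-camera view sits inside
#    the fisheye image (the fixed 1/4 scale is an assumption for brevity).
template = cv2.resize(left, (fisheye.shape[1] // 4, fisheye.shape[0] // 4))
score = cv2.matchTemplate(fisheye, template, cv2.TM_CCOEFF_NORMED)
_, _, _, top_left = cv2.minMaxLoc(score)

# 3) Paste the binocular disparity map onto the matched region, building up
#    a disparity map covering the fisheye view of the scene.
scene_disp = np.zeros(fisheye.shape[:2], dtype=np.float32)
h, w = template.shape[:2]
x, y = top_left
scene_disp[y:y + h, x:x + w] = cv2.resize(disparity, (w, h))

# The thesis then renders the scene with LabVIEW's 3D plotting; here the
# whole-scene disparity map is simply saved for later visualization.
out = cv2.normalize(scene_disp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("scene_disparity.png", out)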
Abstract (Chinese) I
ABSTRACT II
Acknowledgements III
Table of Contents IV
List of Figures VI
List of Tables IX
Chapter 1 Introduction 1
1.1 Preface 2
1.2 Literature review 4
1.2.1 Camera calibration methods 4
1.2.2 Active 3D scene reconstruction methods 6
1.2.3 Passive 3D scene reconstruction methods 7
1.3 Research motivation and objectives 9
1.4 Thesis organization 10
Chapter 2 Imaging Principles 11
2.1 Camera parameter calibration 11
2.1.1 Camera model 12
2.1.2 Camera calibration 17
2.2 Omnidirectional imaging techniques 24
Chapter 3 Stereo Vision and Fisheye Matching 31
3.1 Stereo vision method 31
3.2 Matching fisheye images with normal images 35
3.3 Color template matching method 40
Chapter 4 Experimental Results 44
4.1 Indoor scenes 46
4.2 Outdoor scenes 58
4.3 Matching failures 68
Chapter 5 Conclusions and Suggestions 69
References 71

Figure 1.1 Google Maps image-capture equipment and a 3D street view [1] 1
Figure 1.2 Scene constructed with 3ds Max [4] 2
Figure 1.3 Three calibration-board images captured by the camera from different angles 6
Figure 2.1 Camera model schematic and the pinhole imaging model 12
Figure 2.2 Camera perspective projection and lens distortion 14
Figure 2.3 Lens distortion diagram 17
Figure 2.4 Camera coordinate system and world coordinate system 18
Figure 2.5 Effect of sign( ) on the camera model: (a) positive for the same direction, (b) negative for the opposite direction 21
Figure 2.6 Calibration board used in the experiments 23
Figure 2.7 Methods of enlarging a camera's field of view 24
Figure 2.8 Omnidirectional image and the panoramic image obtained by unwrapping it [37] 25
Figure 2.9 Image captured through a fisheye lens 26
Figure 2.10 Refractive lens-group sensor and its optical path [38] 27
Figure 2.11 Three typical projection-function curves of fisheye lenses 28
Figure 2.12 Different CCD sizes and the corresponding fisheye imaging areas 30
Figure 3.1 Standard stereo vision geometry 32
Figure 3.2 Top view of the standard stereo vision geometry 34
Figure 3.3 Difference between focal lengths at the same distance [40] 35
Figure 3.4 Viewing angles corresponding to different focal lengths [41] 36
Figure 3.5 Experimental scene 37
Figure 3.6 Comparison of images of the same scene at the same distance with different focal lengths 37
Figure 3.7 (a)-(e) alignment results for the calibration board at each position; (f) unaligned result 38
Figure 3.8 Flowchart of color template matching 43
Figure 4.1 Experimental equipment 44
Figure 4.2 Experimental flowchart 45
Figure 4.3 Images captured by the left and right cameras 49
Figure 4.4 Disparity map 49
Figure 4.5 Fisheye image of the scene 50
Figure 4.6 Successfully matched regions 51
Figure 4.7 3D model of the large scene 52
Figure 4.8 Images captured by the left and right cameras 54
Figure 4.9 Disparity map 54
Figure 4.10 Fisheye image of the scene 55
Figure 4.11 Successfully matched regions 56
Figure 4.12 3D model of the large scene 57
Figure 4.13 Images captured by the left and right cameras 59
Figure 4.14 Disparity map 59
Figure 4.15 Fisheye image of the scene 60
Figure 4.16 Successfully matched regions 61
Figure 4.17 3D model of the large scene 62
Figure 4.18 Images captured by the left and right cameras 64
Figure 4.19 Disparity map 64
Figure 4.20 Fisheye image of the scene 65
Figure 4.21 Successfully matched regions 66
Figure 4.22 3D model of the large scene 67
Figure 4.23 Scene image and the full-scene disparity map containing mismatched regions 68
Table 4.1 Camera specifications 46
Table 4.2 Fisheye lens specifications 47
Table 4.3 Lens specifications 47



[1] Introduction to Google Maps, http://www.techbang.com.tw/posts/1964.
[2] Digital City, http://www.csrsr.ncu.edu.tw/08CSRWeb/ChinVer/C7Info/announce_list/view_no_login.php?serial=566.
[3] 程彥榮, Study on Constructing Virtual Reality with Panoramic Image Arrays, Master's thesis, Department of Computer Science and Information Engineering, National Dong Hwa University, 2003.
[4] Introduction to virtual scenes built with 3ds Max, http://www.grabc4d.org/viewthread.php?tid=20754.
[5] 何文峰, Depth Image Registration in 3D Reconstruction of Large Scenes, Master of Engineering thesis, Peking University, 2004.
[6] C. Geyer and K. Daniilidis, “Paracatadioptric Camera Calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 5, pp. 687-695, 2002.
[7] J. P. Barreto and H. Araujo, “Geometry Properties of Central Catadioptric Line Images and Application in Calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 8, pp. 1327-1333, 2005.
[8] S. B. Kang, “Catadioptric self-calibration,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Hilton Head Island, SC, USA, Vol. 1, pp. 201-207, 2000.

[9] D. Scaramuzza, A. Martinelli, and R. Siegwart, “A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion,” Proceedings of IEEE International Conference of Vision Systems (ICVS'06), New York, January 5-7, 2006.
[10] R. Tsai, “A Versatile Camera Calibration Technique for High-accuracy 3D Machine Vision Metrology using off-the-shelf TV Cameras and Lenses,” IEEE Journal of Robotics and Automation, Vol. 3, Issue 4, pp. 323-344, August, 1987.
[11] K. Ohno and S. Tadokoro, “Dense 3D Map Building based on LRF Data and Color Image Fusion,” IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2792-2797, 2-6 Aug. 2005
[12] C. C. Wang, C. Thorpe, and S. Thrun, “Online Simultaneous Localization and Mapping with Detection and Tracking of Moving Objects: Theory and Results from a Ground Vehicle in Crowded Urban Areas,” Proc. of ICRA, pp. 842-849, 2003.
[13] D. Hähnel, R. Triebel, W. Burgard, and S. Thrun, “Map Building with Mobile Robots in Dynamic Environments,” Proc. of ICRA, pp. 1557-1563, 2003.
[14] J. Leonard, J. D. Tardos, S. Thrun, and H. Choset, editors, “Workshop Notes of the ICRA Workshop on Concurrent Mapping and Localization for Autonomous Mobile Robots,” ICRA, 2002.
[15] S. Thrun, M. Diel, and D. Hähnel, “Scan Alignment and 3-D Surface Modeling with a Helicopter Platform,” The 4th International Conference on Field and Service Robotics, pp. 287-297, 2003.
[16] H. Baltzakis, and P. Trahanias, “Closing Multiple Loops while Mapping Features in Cyclic Environments,” Int. Conf. on Intelligent Robots and Systems, pp. 717–722, 2003.
[17] G. Dissanayake, H. Durrant-Whyte, and T. Bailey, “A Computationally Efficient Solution to the Simultaneous Localisation and Map Building (SLAM) Problem,” ICRA’2000 Workshop on Mobile Robot Navigation and Mapping, pp. 1009–1014, 2000.
[18] J.S. Gutmann and K. Konolige, “Incremental Mapping of Large Cyclic Environments,” In Proc. of the IEEE Int. Symp. on Computational Intelligence in Robotics and Automation (CIRA), pp. 318–325, 1999.
[19] J. Erickson, “Living the Dream - an Overview of the Mars Exploration Project,” IEEE Robotics and Automation Magazine, Vol. 13, No. 2, pp. 12-18, 2006.
[20] J.J. Biesiadecki, E.T. Baumgartner, R.G. Bonitz, B.K. Cooper, F.R. Hartman, P.C. Leger, M.W. Maimone, S.A. Maxwell, A. Trebi-Ollennu, E.W. Tunstel, and J.R. Wright, “Mars Exploration Rover Surface Operations: Driving Opportunity at Meridiani Planum,” IEEE Robotics and Automation Magazine, Vol. 13, No. 2, pp. 63-71, 2006.
[21] R.A. Lindemann, D.B. Bickler, B.D. Harrington, G.M. Ortiz, and C.J. Voorhees, “Mars Exploration Rover Mobility Development,” IEEE Robotics and Automation Magazine, Vol. 13, No. 2, pp. 19-26, 2006.
[22] M. Ai-Chang, J. Bresina, L. Charest, A. Chase, J.C.-J. Hsu, A. Jonsson, B. Kanefsky, P. Morris, Kanna Rajan, J. Yglesias, B.G. Chan, W.C. Dias, and P.F. Maldague, “MAPGEN: Mixed-initiative Planning and Scheduling for the Mars Exploration Rover mission,” IEEE Intelligent Systems, Vol. 19, No. 1, pp. 8-12, 2004.
[23] A. Akbarzadeh, J.-M. Frahm, P. Mordohai, B. Clipp, C. Engels, D. Gallup, P. Merrell, M. Phelps, S. Sinha, B. Talton, L. Wang, Q. Yang, H. Stewenius, R. Yang, G. Welch, H. Towles, D. Nister, M. Pollefeys, “Towards Urban 3D Reconstruction from Video,” Third International Symposium on 3D Data Processing, Visualization, and Transmission, 14-16 June 2006.
[24] P. Mordohai, J.-M. Frahm, A. Akbarzadeh, B. Clipp, C. Engels, D. Gallup, P. Merrell, C. Salmi, S. Sinha, B. Talton, L. Wang, Q. Yang , H. Stewénius, H. Towles, G. Welch, R. Yang, M. Pollefeys, and D. Nister, “Real-Time Video-Based Reconstruction of Urban Environments,” ISPRS Working Group V/4 Workshop 3D-ARCH 2007: 3D Virtual Reconstruction and Visualization of Complex Architectures, (ETH Zurich, Switzerland), July 12–13 2007.
[25] S.M. Seitz and K.N. Kutulakos, “A Theory of Shape by Space Carving,” Proceedings of the Seventh IEEE International Conference on Computer Vision, Vol. 1, pp. 307-314, Sep. 1999.
[26] C. Hernandez Esteban and F. Schmitt, “Multi-stereo 3D Object Reconstruction,” 3D Data Processing, Visualization, and Transmission, pp. 159-166, 2002.
[27] G.K.M. Cheung, S. Baker, and T. Kanade, “Visual Hull Alignment and Refinement Across Time: A 3D Reconstruction Algorithm Combining Shape-from-Silhouette with Stereo,” Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 375-382, 2003.
[28] M. Pollefeys and L. Van Gool, “From Images to 3D Models,” Communications of the ACM, Vol. 45, No. 7, pp. 50-55, July 2002.
[29] A. Fitzgibbon and A. Zisserman, “Automatic 3D Model Acquisition and Generation of New Images from Video Sequences,” Proc. European Signal Processing Conference, pp. 1261-1269, 1998.
[30] 郭立群, 3D Scene Reconstruction Based on Robot Vision, Master's thesis, Department of Computer and Information Science, National Chiao Tung University, 2002.
[31] S. Bahadori and L. Iocchi, “A Stereo Vision System for 3D Reconstruction and Semi-Automatic Surveillance of Museum Areas,” Workshop on Intelligenza Artificiale per i Beni Culturali, Pisa, 2003.
[32] T. Kanade, A. Yoshida, K. Oda, H. Kano, and M. Tanaka, “A Stereo Machine for Video-rate Dense Depth Mapping and Its New Applications,” In Proc. of CVPR'96, 1996.
[33] K. Konolige, “Small Vision Systems: Hardware and Implementation,” In Proc. of 8th International Symposium on Robotics Research, 1997.
[34] J. Gluckman, S. Nayar, and K. Thorek, “Real-time Omnidirectional and Panoramic Stereo,” Proc. 1998 DARPA Image Understanding Workshop, pp. 299-303, 1998.
[35] A. Chaen, K. Yamazawa, N. Yokoya, and H. Takemura, “Omnidirectional Stereo Vision Using HyperOmni Vision,” Technical Report 96-12, IEICE, Feb. 1997 (in Japanese).
[36] H. Ishiguro, M. Yamamoto, and S. Tsuji, “Omni-directional Stereo,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, pp. 257-262, 1992.
[37] Omnidirectional image unwrapped into a panoramic image, http://www.douban.com/note/141533538/.
[38] 張創然, Computer Vision Models for Fisheye Images and 3D Metrology, Ph.D. dissertation, Department of Electrical Engineering, National Taiwan University, 2003.
[39] 李偉弘, Optical-Axis Localization of Fisheye Lenses and Applications to 3D Measurement, Master's thesis, Department of Electrical Engineering, Ming Chi University of Technology, 2007.
[40] Difference between focal lengths at the same distance, http://stevenlins.blogspot.com/2008/06/perspective-focal-length.html?showComment=1307512761989#c519228622265123398.
[41] Viewing angles corresponding to different focal lengths, http://beb.anyday.com.tw/forum/viewthread.php?tid=13255.
[42] Introduction to color pattern matching, http://zone.ni.com/reference/en-XX/help/372916H-01/nivisionconcepts/color_pattern_matching/.
[43] Introduction to coarse-to-fine algorithms, http://www.csie.leader.edu.tw/pdf/project/93013.pdf.
[44] 蘇柏愷, Generalized Pulse Replacement Search Algorithm for ACELP Speech Compression, Master's thesis, Department of Computer Science and Information Engineering, Southern Taiwan University of Technology, 2007.
