臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Researcher: 吳尚諭
Researcher (English): Shang-Yu Wu
Thesis Title: RGB-D影像之階層式3D貼合平行處理之研究
Thesis Title (English): Parallel Hierarchical 3-D Matching of RGB-D Images
Advisor: 石勝文
Advisor (English): Sheng-Wen Shih
Committee Members: 石勝文、藍坤銘、張軒庭、周家德
Committee Members (English): Sheng-Wen Shih, Kun-Ming Lan, Xuan-Ting Zhang, Jia-De Zhou
Oral Defense Date: 2013-07-12
Degree: Master's
Institution: 國立暨南國際大學 (National Chi Nan University)
Department: 資訊工程學系 (Department of Computer Science and Information Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2013
Graduation Academic Year: 101 (ROC calendar; 2012–2013 academic year)
Language: Chinese
Number of Pages: 47
Chinese Keywords: RGB-D影像、3D貼合、尺度空間、平行處理
Foreign Keywords: RGB-D Image, 3D Matching, Scale Space, Parallel Processing
Usage Statistics:
  • Cited: 0
  • Views: 186
  • Rating:
  • Downloads: 21
  • Bookmarked: 0
This thesis proposes a pairwise matching algorithm that differs from the traditional point-to-point
and point-to-plane approaches. Combining depth information with color information, we formulate an
objective function for computing the coordinate transformation matrix between two images. To match
two images with a large relative motion, we propose a method that estimates the matching parameters
hierarchically in scale space. The main idea is to blur the images moderately so that fine features
are temporarily ignored, which simplifies the 3-D matching problem. Since blurring removes part of
the image information, we blur the images progressively and adopt a coarse-to-fine matching strategy,
so that the fine image features can still be exploited when estimating the parameters. The proposed
method is parallelized with CUDA. The experimental results demonstrate that the proposed method can
effectively match two images.
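
The exact form of the objective function is not given in this record. The following is a minimal LaTeX sketch of the kind of cost described above, assuming a six-parameter rigid transform T(θ), a pinhole projection π with a depth-driven back-projection π⁻¹, and a weight λ balancing the color and depth terms; all of this notation is assumed for illustration and is not taken from the thesis:

E(\theta) = \sum_{\mathbf{u} \in \Omega} \Big[ \big( I_2(\mathbf{u}') - I_1(\mathbf{u}) \big)^2
          + \lambda \big( D_2(\mathbf{u}') - z'(\mathbf{u};\theta) \big)^2 \Big],
\qquad
\mathbf{u}' = \pi\!\big( T(\theta)\,\pi^{-1}(\mathbf{u}, D_1(\mathbf{u})) \big)

Here I_1, I_2 are the color images, D_1, D_2 the depth maps, Ω the set of pixels with valid depth, and z'(u; θ) the depth of the transformed 3-D point. In a scale-space scheme, such a cost would first be minimized on heavily blurred copies of the images and then refined at progressively finer levels.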

This thesis proposes a new method for RGB-D image matching which is different from
the traditional point-to-point/point-to-plane matching methods. An objective function is proposed
that fuses both depth and color information for estimating the transformation matrix
between two RGB-D images. A hierarchical scale space parameter estimation method is proposed
for dealing with image matching under large motion. The main idea is to smooth the
input images appropriately so that minute features are temporarily ignored, simplifying the
matching of the main 3-D structures. Notably, image smoothing eliminates a portion
of the image information. To fully utilize the RGB-D information, the degree of blurriness
is reduced gradually to introduce the minute image features into the parameter estimation
process in a coarse-to-fine matching approach. The image matching method is implemented
with the CUDA parallel processing framework. Experimental results show that the proposed
method can efficiently match two RGB-D images.
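
The thesis's CUDA code is not reproduced in this record. As a rough, hypothetical sketch of how such a per-pixel cost could be evaluated in parallel, the kernel below assigns one thread per source pixel: the thread back-projects the pixel with the depth map, applies the current rigid-transform estimate, projects the point into the second image, and accumulates the squared color and depth residuals. All names, the simple pinhole model, and the nearest-neighbour lookup are assumptions for illustration, not the author's implementation.

// A hypothetical sketch, not the author's code: evaluate the combined color/depth
// residual of one source pixel under the current pose estimate.
#include <cuda_runtime.h>

struct Pose {                // rigid transform: 3x3 rotation (row-major) plus translation
    float R[9];
    float t[3];
};

__global__ void rgbdResidualKernel(const float* I1, const float* D1,        // source intensity / depth
                                   const float* I2, const float* D2,        // target intensity / depth
                                   int width, int height,
                                   float fx, float fy, float cx, float cy,  // pinhole intrinsics
                                   Pose T, float lambda,                    // current estimate, depth weight
                                   float* cost)                             // accumulated objective value
{
    int u = blockIdx.x * blockDim.x + threadIdx.x;
    int v = blockIdx.y * blockDim.y + threadIdx.y;
    if (u >= width || v >= height) return;

    int idx = v * width + u;
    float z = D1[idx];
    if (z <= 0.0f) return;                          // skip pixels without a depth measurement

    // Back-project the source pixel to a 3-D point and transform it into the target frame.
    float X = (u - cx) * z / fx;
    float Y = (v - cy) * z / fy;
    float Xp = T.R[0] * X + T.R[1] * Y + T.R[2] * z + T.t[0];
    float Yp = T.R[3] * X + T.R[4] * Y + T.R[5] * z + T.t[1];
    float Zp = T.R[6] * X + T.R[7] * Y + T.R[8] * z + T.t[2];
    if (Zp <= 0.0f) return;                         // point moved behind the camera

    // Project into the target image (nearest-neighbour lookup for brevity).
    int u2 = (int)lrintf(fx * Xp / Zp + cx);
    int v2 = (int)lrintf(fy * Yp / Zp + cy);
    if (u2 < 0 || u2 >= width || v2 < 0 || v2 >= height) return;

    int idx2 = v2 * width + u2;
    float rc = I2[idx2] - I1[idx];                  // color (intensity) residual
    float rd = D2[idx2] - Zp;                       // depth residual
    // Float atomicAdd requires compute capability >= 2.0 (e.g., the GTX 550 Ti).
    atomicAdd(cost, rc * rc + lambda * rd * rd);
}

A host-side driver would build progressively less-blurred copies of both RGB-D images and, at each blur level, minimize the accumulated cost over the six pose parameters, initializing each level with the estimate obtained at the coarser one.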

Acknowledgments
Abstract (Chinese)
Abstract
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
  1.1  Motivation
  1.2  Related Work and Applications
    1.2.1  Point-to-Point Matching
    1.2.2  Point-to-Plane Matching
    1.2.3  Accelerated Matching
  1.3  Objectives
  1.4  Thesis Organization
Chapter 2  System Architecture
  2.1  Microsoft Kinect
  2.2  Nvidia GeForce GTX 550 Ti
  2.3  System Flow
Chapter 3  Methodology
  3.1  Pairwise Matching Method
    3.1.1  Transformation from 2-D Points to 3-D Points
    3.1.2  Transformation of 3-D Points Between Two Cameras
    3.1.3  Transformation from 3-D Points to 2-D Points
  3.2  Camera Parameter Calibration
  3.3  Blurring and Hierarchical Design
  3.4  Parallel Processing
    3.4.1  CUDA Hardware Architecture
    3.4.2  Parallelization Design
Chapter 4  Experimental Results
  4.1  Experimental Platform
  4.2  Matching Results
    4.2.1  Small Translation Along the x and y Axes
    4.2.2  Large Translation Along the x and y Axes
    4.2.3  Small Translation Along the z Axis
    4.2.4  Large Translation Along the z Axis
    4.2.5  Small Rotation
    4.2.6  Large Rotation
    4.2.7  RGB Image Matching
  4.3  Hierarchical Matching Comparison
    4.3.1  Depth Translation Experiment
    4.3.2  Rotation Experiment
Chapter 5  Conclusions and Future Work
  5.1  Conclusions
  5.2  Future Work
References