臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: 李岱維
Author (English): Tai-Wei Li
Title: 基於混合式光流法之三維圖像重建
Title (English): Three-Dimensional Image Reconstruction via Mixed Optical Flow Methods
Advisor: 王大中
Advisor (English): Ta-Chung Wang
Degree: Master's
Institution: National Cheng Kung University
Department: Department of Aeronautics and Astronautics
Discipline: Engineering
Field: Mechanical Engineering
Thesis Type: Academic thesis
Year of Publication: 2019
Graduation Academic Year: 107 (2018–2019)
Language: Chinese
Number of Pages: 56
Keywords (Chinese): 立體視覺 (stereo vision), 光流法 (optical flow), 卷積神經網路 (convolutional neural network)
Keywords (English): Stereo Vision, Optical Flow, Convolutional Neural Network
Usage statistics:
  • Cited by: 0
  • Views: 201
  • Downloads: 0
  • Bookmarked: 0
Abstract (Chinese, translated): Three-dimensional image reconstruction computes depth information for the pixels of two-dimensional images; once the distance to every point on an object is known, a 3D scene can be rebuilt on a computer. The technique is widely applied in architecture, art, the video game industry, and medicine: for example, an artificial cornea can be produced by scanning a patient's eye and 3D-printing the result, greatly reducing the cost and waiting time of corneal transplantation. A single photograph carries no distance information, so the same object must be photographed from different viewpoints; depth can then be derived from the geometric relationship implied by the differences between the two photographs. To ensure that a point on the object corresponds to the same point in both photographs, feature-point matching is commonly used; although very robust, its computation takes a long time. Optical flow, the displacement of a pixel between two consecutive images, can greatly reduce the computation time. This thesis adopts two optical flow methods: the classical Farnebäck method and FlowNet, which is based on a convolutional neural network (CNN). The two methods recover the fine details and the overall contour of the object, respectively. The final result is displayed as a surface plot, enabling 3D image reconstruction of small objects.
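To make the pipeline in the abstract concrete, here is a minimal sketch of dense optical flow with OpenCV's implementation of the Farnebäck method, followed by naive disparity-to-depth triangulation. The file names, focal length, and baseline are hypothetical placeholders, and the thesis's non-parallel-axes geometry is simplified to the parallel-axes case; this is an illustration of the technique, not the thesis's actual implementation.

```python
import cv2
import numpy as np

# Load a stereo pair (hypothetical file names); flow needs grayscale input.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow via Farneback's polynomial-expansion method.
# Positional arguments: pyr_scale=0.5, levels=3 (image pyramid), winsize=15,
# iterations=3, poly_n=5, poly_sigma=1.1, flags=0.
# flow[y, x] = (dx, dy), the per-pixel displacement from `left` to `right`.
flow = cv2.calcOpticalFlowFarneback(left, right, None,
                                    0.5, 3, 15, 3, 5, 1.1, 0)

# For a near-parallel stereo pair the horizontal flow approximates the
# disparity d, and triangulation gives depth Z = f * B / d.
FOCAL_PX = 1200.0   # focal length in pixels (assumed calibration value)
BASELINE_M = 0.06   # camera baseline in meters (assumed)

disparity = np.abs(flow[..., 0])
valid = disparity > 0.5                  # reject near-zero disparities
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
```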
Abstract (English): Three-dimensional (3D) image reconstruction is the process of building a 3D model from images. It is widely used in architecture, art, video game development, and healthcare. For example, a patient's eyes can be scanned and the resulting data used to 3D-print an artificial cornea, greatly reducing the cost and waiting time of corneal transplantation. The depth information of an object is derived from the geometric relationship between two images. To ensure that a point on the object corresponds to the same point in two different images, feature-point matching is typically used; although robust, its computation takes a long time. Optical flow, which refers to the apparent motion of pixels between images, can reduce the computation time. This thesis uses two optical flow methods, namely the Farnebäck method and FlowNet (a convolutional neural network). These two methods capture the detailed features and the general shape of an object, respectively. The experimental results are visualized as a 3D surface plot, and 3D image reconstruction of a small object is demonstrated.
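The abstract does not spell out how the two flow fields are fused, so the sketch below simply blends two depth maps with a weighted average as a stand-in for the thesis's mixing rule, then renders the result as the kind of surface plot the abstract describes. All array contents here are synthetic stand-ins; in practice the two maps would come from the Farnebäck and FlowNet pipelines.

```python
import numpy as np
import matplotlib.pyplot as plt

def mix_depth_maps(depth_fine, depth_coarse, alpha=0.5):
    """Hypothetical fusion rule: weighted average of a depth map from a
    classical flow method (fine detail) and one from a CNN flow method
    (overall shape). The thesis does not specify its actual mixing rule."""
    return alpha * depth_fine + (1.0 - alpha) * depth_coarse

# Synthetic stand-ins for the two depth maps.
h, w = 120, 160
yy, xx = np.mgrid[0:h, 0:w].astype(float)
depth_fine = 1.0 + 0.05 * np.sin(xx / 6.0)                     # fine ripples
depth_coarse = 1.0 + 0.3 * np.exp(
    -((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / 800.0)          # smooth bump

depth = mix_depth_maps(depth_fine, depth_coarse)

# Display the reconstruction as a 3D surface plot.
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(xx, yy, depth, cmap="viridis", linewidth=0)
ax.set_xlabel("x (px)")
ax.set_ylabel("y (px)")
ax.set_zlabel("depth (m)")
plt.show()
```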
Table of Contents:
摘要 (Abstract) I
ABSTRACT II
致謝 (Acknowledgements) III
CONTENTS IV
LIST OF FIGURES VI
LIST OF TABLES VIII
NOMENCLATURE IX
CHAPTER 1 INTRODUCTION 1
1.1 Motivation 1
1.2 Literature Review 3
1.3 Outline of This Research 7
CHAPTER 2 STEREO VISION 8
2.1 Non-Parallel Optical Axes System 11
2.2 Unit Conversion 14
2.3 Camera Calibration 16
2.4 Depth Measurement 18
CHAPTER 3 OPTICAL FLOW METHODS 19
3.1 Farneback Method 21
3.1.1 Polynomial Expansion 21
3.1.2 Image Pyramid 24
3.2 FlowNet 25
3.2.1 Network Architectures 27
3.2.2 Training Dataset 29
3.2.3 Evolution and Practical Application 30
3.3 Mixed Optical Flow 32
CHAPTER 4 EXPERIMENT 33
4.1 Experiment Hardware 34
4.2 Optical Flow Result 37
4.3 Depth Measurement Results 41
4.4 Visualization 44
4.5 Discussion 52
CHAPTER 5 CONCLUSION AND FUTURE WORK 53
REFERENCE 54
