# 臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)



### Detailed Record


• Cited by: 0
• Views: 201
• Downloads: 0
• Bookmarks: 0
Three-dimensional (3D) image reconstruction computes depth information for the pixels of two-dimensional images, recovering the distance of every point on an object so that a 3D scene can be rebuilt in a computer. It is widely applied in architecture, art, the video game industry, and medicine; for example, an artificial cornea can be produced by scanning a patient's eye and printing the result with 3D printing technology, greatly reducing the cost and waiting time of corneal transplantation. A single photograph carries no distance information, so the same object must be photographed from different angles; depth is then derived from the geometric relationship implied by the differences between the two photographs. To confirm that a point on the object corresponds to the same point in both photographs, feature-point matching is generally used for image matching. Although it is very robust, its computation is extremely time-consuming. Optical flow, which measures the displacement of a pixel between two consecutive images, greatly reduces computation time. This thesis adopts two optical flow methods: the traditional Farneback method and FlowNet, which is based on a convolutional neural network (CNN). The two methods capture the fine details and the overall contour of the object, respectively. The final result is displayed as a surface plot, enabling 3D image reconstruction of small objects.
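The depth recovery described above reduces, in the simplest parallel-camera case, to triangulation from disparity. A minimal sketch, assuming the idealized pinhole relation Z = f·B/d (a simplification of the non-parallel-axis system used in the thesis; the function name and sample numbers are illustrative):

```python
# Idealized stereo depth from disparity: Z = f * B / d.
# This parallel-axis pinhole formula is an illustrative simplification;
# the thesis itself uses a non-parallel optical-axis system.
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Depth (mm) of a point from its pixel disparity between two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# A point that shifts 40 px between two cameras 60 mm apart, f = 800 px:
print(depth_from_disparity(800, 60.0, 40.0))  # 1200.0 mm
```

Note how depth is inversely proportional to disparity: nearby points shift more between the two views than distant ones, which is why two photographs from different angles suffice to recover distance.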
Three-dimensional (3D) image reconstruction is the process of building a 3D model from images. It is widely used in architecture, art, video game development, and healthcare. For example, a patient's eyes can be scanned and the resulting data used to 3D print an artificial cornea, greatly reducing the cost and waiting time for corneal transplantation. The depth information of an object is derived from the geometric relationship between two images. To ensure that a point on the object corresponds to the same point in two different images, feature point matching is typically used. Although it is robust, the computation is time-consuming. Optical flow, which refers to the motion of an object across consecutive images, can reduce calculation time. This thesis uses two optical flow methods, namely the Farneback method and FlowNet, a convolutional neural network. These two methods obtain the detailed features and the general appearance of an object, respectively. The experimental results are visualized as a 3D surface plot, and 3D image reconstruction of a small object is performed.
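The notion of optical flow used here, the per-pixel displacement between two frames, can be illustrated with a brute-force block-matching sketch. This is not Farneback's algorithm (which fits local polynomial expansions) nor FlowNet (which learns flow with a CNN); the function name and synthetic frames below are illustrative assumptions that only demonstrate the underlying idea:

```python
import numpy as np

# Illustrative only: optical flow as the displacement that best aligns a
# patch of the first frame with the second frame (brute-force search).
def block_flow(prev, curr, patch=5, search=4):
    """Return the (dy, dx) shift that best aligns the central patch."""
    h, w = prev.shape
    cy, cx = h // 2, w // 2
    ref = prev[cy - patch:cy + patch + 1, cx - patch:cx + patch + 1]
    best, best_dyx = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[cy + dy - patch:cy + dy + patch + 1,
                        cx + dx - patch:cx + dx + patch + 1]
            err = float(np.sum((cand - ref) ** 2))
            if best is None or err < best:
                best, best_dyx = err, (dy, dx)
    return best_dyx

# Synthetic frames: a bright square that moves 2 px right and 1 px down.
f0 = np.zeros((32, 32)); f0[12:20, 12:20] = 1.0
f1 = np.zeros((32, 32)); f1[13:21, 14:22] = 1.0
print(block_flow(f0, f1))  # (1, 2)
```

Exhaustive search like this is exactly what makes naive matching slow; Farneback's polynomial expansion and FlowNet's learned features both exist to estimate the same displacement field far more efficiently.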
- Abstract (in Chinese)
- ABSTRACT
- Acknowledgements
- CONTENTS
- LIST OF FIGURES
- LIST OF TABLES
- NOMENCLATURE
- CHAPTER 1 INTRODUCTION
  - 1.1 Motivation
  - 1.2 Literature Review
  - 1.3 Outline of This Research
- CHAPTER 2 STEREO VISION
  - 2.1 Non-Parallel Optical Axes System
  - 2.2 Unit Conversion
  - 2.3 Camera Calibration
  - 2.4 Depth Measurement
- CHAPTER 3 OPTICAL FLOW METHODS
  - 3.1 Farneback Method
    - 3.1.1 Polynomial Expansion
    - 3.1.2 Image Pyramid
  - 3.2 FlowNet
    - 3.2.1 Network Architectures
    - 3.2.2 Training Dataset
    - 3.2.3 Evolution and Practical Application
  - 3.3 Mixed Optical Flow
- CHAPTER 4 EXPERIMENT
  - 4.1 Experiment Hardware
  - 4.2 Optical Flow Result
  - 4.3 Depth Measurement Results
  - 4.4 Visualization
  - 4.5 Discussion
- CHAPTER 5 CONCLUSION AND FUTURE WORK
- REFERENCE
