National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Detailed Record

Author: 黃揚晟
Author (English): Yang-Cheng Huang
Title (Chinese): 運用類神經網路之三維點雲比對
Title (English): 3D Point Cloud Registration Using Neural Networks
Advisor: 張文中
Advisor (English): Wen-Chung Chang
Committee members: 張文中、顏炳郎、王銀添、鄭銘揚、林錫寬
Committee members (English): Wen-Chung Chang, Ping-Lang Yen, Yin-Tien Wang, Ming-Yang Cheng, Shir-Kuan Lin
Oral defense date: 2018-07-25
Degree: Master's
Institution: National Taipei University of Technology (國立臺北科技大學)
Department: Department of Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2018
Graduation academic year: 106
Language: Chinese
Number of pages: 62
Keywords (Chinese): 立體像素化、點雲比對、類神經網路、機器學習、深度學習、卷積神經網路
Keywords (English): Voxelization, Point Cloud Registration, Neural Networks, Machine Learning, Deep Learning, Convolutional Neural Networks
Record statistics:
  • Cited by: 0
  • Views: 167
  • Downloads: 0
  • Bookmarked: 0
Abstract (Chinese, translated):
To improve production efficiency, manufacturing automation has become increasingly important, driving strong demand for intelligent automated applications such as 3D visual object inspection, positioning and guidance, and recognition; research on their core technology, 3D point cloud registration, is therefore increasingly significant. Addressing this topic, this thesis proposes concrete improvements based on neural networks (NNs) and convolutional neural networks (CNNs). For pre-training the network models, the training sample set is constructed by randomly rotating and sparsifying the model point cloud and pairing the results with several geometric features and descriptors; the models are then trained under a supervised learning scheme. For the prediction stage, two architectures are proposed: a single neural network model combined with the Iterative Closest Point (ICP) algorithm, and an architecture composed of multiple neural network models. Once training is complete, either architecture, together with the point cloud centroids, can rigidly align the object point cloud with the model point cloud and obtain the transformation between the two clouds. In the experiments, the proposed training and prediction architectures are applied to several objects through the complete workflow, including sample set construction, training, registration, and prediction, forming a complete 3D point cloud registration framework. The feasibility and effectiveness of the proposed registration methods have been verified experimentally.
Abstract (English):
To enhance industrial production performance, manufacturing automation plays an increasingly critical role, and applications such as automated 3D visual object detection, positioning, and identification are becoming important in manufacturing; 3D point cloud registration is one of their core technologies. This thesis therefore proposes practical improvements in which neural networks (NNs) and convolutional neural networks (CNNs) are employed to accomplish 3D pose estimation. The training sample set for offline supervised learning is constructed by randomly rotating and down-sampling the model point cloud and computing various geometric features and descriptors, with the applied rotations serving as labels. Several 3D point cloud registration architectures combining neural networks with ICP are proposed; they perform rigid alignment between the model point cloud and the data point cloud and thereby determine the transformation between the two clouds. The complete registration task, including sample set construction, training, registration, and inference, has been carried out for several typical objects, and the proposed approaches have been validated by experiments.
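The two abstracts describe the pipeline only in prose: training samples come from randomly rotating and sparsifying the model point cloud and pairing geometric features with the applied rotation as the label, and at prediction time the estimated rotation is combined with the point cloud centroids to obtain the rigid transform, which ICP can then refine. The following Python/NumPy sketch illustrates those two steps. It is a minimal illustration, not the thesis's implementation: the random test cloud, the voxel size, the plain axis-aligned bounding-box extent used as the feature, and all function names are assumptions made for this example, and the thesis's actual feature sets (AABB, OBB, AAM, AOV), network models, and ICP stage are not reproduced here.

import numpy as np


def quaternion_to_matrix(q):
    """Rotation matrix from a quaternion (w, x, y, z); the quaternion is normalized first."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])


def random_rotation(rng):
    """Uniformly random rotation, returned together with its unit quaternion (the training label)."""
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)
    return quaternion_to_matrix(q), q


def voxel_downsample(points, voxel_size):
    """Sparsify a cloud by keeping the mean point of every occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    return np.stack([np.bincount(inverse, weights=points[:, d]) / counts
                     for d in range(3)], axis=1)


def aabb_feature(points):
    """Axis-aligned bounding-box extents of the centred cloud (one simple geometric feature)."""
    centred = points - points.mean(axis=0)
    return centred.max(axis=0) - centred.min(axis=0)


def make_training_sample(model_points, voxel_size, rng):
    """One supervised pair: rotate the model cloud, sparsify it, describe it, keep the quaternion as label."""
    R, q_label = random_rotation(rng)
    rotated = model_points @ R.T
    sparse = voxel_downsample(rotated, voxel_size)
    return aabb_feature(sparse), q_label


def rigid_transform_from_prediction(q_pred, model_points, data_points):
    """Combine a predicted rotation with centroid alignment into a 4x4 rigid transform."""
    R = quaternion_to_matrix(np.asarray(q_pred, dtype=float))
    t = data_points.mean(axis=0) - R @ model_points.mean(axis=0)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T  # a fine-registration stage (e.g. ICP) would refine this coarse estimate


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = rng.uniform(-1.0, 1.0, size=(2000, 3))          # stand-in for a model point cloud
    feature, label = make_training_sample(model, 0.1, rng)   # one (feature, label) pair
    print(feature.shape, label.shape)                        # -> (3,) (4,)

As the abstracts and Chapter 4 outline, a coarse quaternion prediction of this kind would then be refined either by ICP (the single-network architecture) or by a further fine-registration model trained on rotation-angle labels (the multi-network architecture).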
Table of Contents:
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Tables
List of Figures
Chapter 1  Introduction
1.1  Motivation and Objectives
1.2  Literature Review
1.3  Contributions of the Thesis
1.4  Thesis Organization
Chapter 2  System Introduction
2.1  System Purpose and Overview
2.2  System Equipment
2.3  System Architecture
2.4  System Workflow
Chapter 3  Existing 3D Point Cloud Registration Methods
3.1  Voxel Grid Downsampling
3.2  RANdom SAmple Consensus (RANSAC) Algorithm
3.2.1  Overview of the RANSAC Algorithm
3.2.2  Applying RANSAC to Point Cloud Registration
3.3  Fine Rigid Registration with the Iterative Closest Point (ICP) Algorithm
Chapter 4  Optimization Techniques for 3D Point Cloud Registration Efficiency
4.1  Supervised Learning
4.2  Neural Networks (NNs)
4.2.1  Overview of Neural Network Models
4.2.2  Activation Functions
4.2.3  Error Back-Propagation Algorithm
4.2.4  Mini-Batch Training
4.2.5  Axis-Aligned Bounding Box (AABB) Features
4.2.6  Oriented Bounding Box (OBB) Features
4.2.7  Average Around Median (AAM) Features
4.3  Convolutional Neural Networks (CNNs)
4.3.1  Overview of Convolutional Neural Network Models
4.3.2  Average Origin Vector (AOV) Global Descriptor
4.4  Neural Network Registration Architectures
4.4.1  Coarse Registration: Unit Quaternion Labels
4.4.2  Fine Registration: Rotation Angle Labels
Chapter 5  Experimental Results
5.1  Experimental Planning
5.1.1  Test Objects
5.2  3D Point Cloud Registration Results
5.2.1  Coarse Registration Using Neural Networks
5.2.2  Fine Registration Using Neural Networks
5.2.3  Coarse Registration Using Convolutional Neural Networks
5.2.4  Fine Registration Using Convolutional Neural Networks
5.2.5  Comparison of Methods
Chapter 6  Conclusions
6.1  Summary of Results
6.2  Future Work
References
[1] P. Besl and H. McKay, “A method for registration of 3-D shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239 –256, Feb. 1992.
[2] N. H. Quach and M. Liu, “Visual based tracking of planar robot arms: a scheme using projection matrix,” in Proc. of 2003 IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, vol. 1, Changsha, Hunan, China, 2003, pp. 588–593 vol.1.
[3] D. Chetverikov, D. Stepanov, and P. Krsek, “Robust euclidean alignment of 3D point sets: the trimmed iterative closest point algorithm,” Image and Vision Computing, vol. 23, no. 3, pp. 299 – 309, Mar. 2005.
[4] S. Li, J. Wang, Z. Liang, and L. Su, “Tree point clouds registration using an improved ICP algorithm based on kd-tree,” in Proc. of 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 2016, pp. 4545–4548.
[5] H. T. Kim, S. B. Kang, H. S. Kang, Y. J. Cho, N. G. Park, and J. O. Kim, “Optical distance control for a multi focus image in camera phone module assembly,” in Proc. of 2009 International Symposium on Optomechatronic Technologies, Istanbul, Turkey, 2009, pp. 52–58.
[6] M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM, vol. 24, no. 6, pp. 381–395, Jun. 1981.
[7] C.-S. Chen, Y.-P. Hung, and J.-B. Cheng, “RANSAC-based DARCES: a new approach to fast automatic registration of partially overlapping range images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 11, pp. 1229–1234, Nov. 1999.
[8] J. Han, F. Wang, Y. Guo, C. Zhang, and Y. He, “An improved RANSAC registration algorithm based on region covariance descriptor,” in Proc. of 2015 Chinese Automation Congress (CAC), Wuhan, China, 2015, pp. 746–751.
[9] C. Y. Tsai, C. W. Wang, and W. Y. Wang, “Design and implementation of a RANSAC RGB-D mapping algorithm for multi-view point cloud registration,” in Proc. of 2013 CACS International Automatic Control Conference, Nantou, Taiwan, 2013, pp. 367–370.
[10] Y. Onmek, J. Triboulet, S. Druon, A. Meline, and B. Jouvencel, “Evaluation of underwater 3D reconstruction methods for archaeological objects: Case study of anchor at mediterranean sea,” in Proc. of 2017 3rd International Conference on Control, Automation and Robotics (ICCAR), Nagoya, Japan, Apr. 2017, pp. 394–398.
[11] W.-C. Chang, V.-T. Nguyen, and P.-R. Chu, “Reconstruction of 3D contour with an active laser-vision robotic system,” Asian Journal of Control, vol. 14, no. 2, pp. 400–412, Mar. 2012.
[12] W.-C. Chang, “A reconfigurable stereo visual control system,” in Proc. of the Yale Graduate Student Symposium, New Haven, CT, U.S.A., May 1996.
[13] W.-C. Chang, A. S. Morse, and G. D. Hager, “A calibration-free, self-adjusting stereo visual control system,” in Proc. of the 13th World Congress, International Federation of Automatic Control, vol. A. San Francisco, CA, U.S.A.: IFAC, 1996, pp. 343–348.
[14] W.-C. Chang and A. S. Morse, “Control of a rigid robot using an uncalibrated stereo vision system,” in Proc. of the 1997 American Control Conference, Albuquerque, NM, Jun. 1997.
[15] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of Go with deep neural networks and tree search,” Nature, vol. 529, no. 7587, p. 484, Jan. 2016.
[16] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. Van Den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature, vol. 550, no. 7676, p. 354, Oct. 2017.
[17] M. Kallenberg, K. Petersen, M. Nielsen, A. Y. Ng, P. Diao, C. Igel, C. M. Vachon, K. Holland, R. R. Winkel, N. Karssemeijer, and M. Lillholm, “Unsupervised deep learning applied to breast density segmentation and mammographic risk scoring,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1322–1331, May 2016.
[18] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” The International Journal of Robotics Research, vol. 37, no. 4-5, pp. 421–436, Jun. 2018.
[19] G. Pang and U. Neumann, “3D point cloud object detection with multi-view convolutional neural network,” in Proc. of 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016, pp. 585–590.
[20] Z. Zhang, L. Zhang, Y. Tan, L. Zhang, F. Liu, and R. Zhong, “Joint discriminative dictionary and classifier learning for ALS point cloud classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 56, pp. 524–538, Jan. 2017.
[21] Z. Zhang, L. Zhang, X. Tong, B. Guo, L. Zhang, and X. Xing, “Discriminative-dictionary-learning-based multilevel point-cluster features for ALS point-cloud classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 12, pp. 7309–7322, Dec. 2016.
[22] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, “Pointnet++: Deep hierarchical feature learning on point sets in a metric space,” in Proc. of Advances in Neural Information Processing Systems, 2017, pp. 5099–5108.
[23] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “Pointnet: Deep learning on point sets for 3D classification and segmentation,” in Proc. of The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 652–660.
[24] M. Maimaitimin, K. Watanabe, and S. Maeyama, “Surface-common-feature descriptor of point cloud data for deep learning,” in Proc. of 2016 IEEE International Conference on Mechatronics and Automation (ICMA). IEEE, 2016, pp. 525–529.
[25] W.-C. Chang and V.-T. Pham, “An efficient neural network with performance-based switching of candidate optimizers for point cloud matching,” in Proc. of 2018 The 6th International Conference on Control, Mechatronics and Automation, Tokyo, Japan, 2018.
[26] E. Gedat, P. Fechner, R. Fiebelkorn, and R. Vandenhouten, “Multiple human skeleton recognition in rgb and depth images with graph theory, anatomic refinement of point clouds and machine learning,” in Proc. of 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2016, pp. 627–631.
[27] T. Zhou and B. E. Shi, “Simultaneous learning of the structure and kinematic model of an articulated body from point clouds,” in Proc. of 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016, pp. 5248–5255.
[28] R. Rusu, N. Blodow, and M. Beetz, “Fast point feature histograms (FPFH) for 3D registration,” in Proc. of 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 2009, pp. 3212–3217.
[29] T. Hackel, J. D. Wegner, N. Savinov, L. Ladicky, K. Schindler, and M. Pollefeys, “Large-scale supervised learning for 3D point cloud labeling: Semantic3D.net,” Photogrammetric Engineering & Remote Sensing, vol. 84, no. 5, pp. 297–308, May 2018.
[30] T. Miyato, S.-i. Maeda, M. Koyama, and S. Ishii, “Virtual adversarial training: a regularization method for supervised and semi-supervised learning,” arXiv preprint arXiv:1704.03976, Apr. 2017.
[31] A. Conneau, D. Kiela, H. Schwenk, L. Barrault, and A. Bordes, “Supervised learning of universal sentence representations from natural language inference data,” arXiv preprint arXiv:1705.02364, May 2017.
[32] D. Nguyen and B. Widrow, “Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights,” in Proc. of 1990 IJCNN International Joint Conference on Neural Networks, 1990. IEEE, 1990, pp. 21–26.
[33] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. of Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
[34] W. Zaremba, I. Sutskever, and O. Vinyals, “Recurrent neural network regularization,” arXiv preprint arXiv:1409.2329, Sep. 2014.
[35] V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in Proc. of the 27th international conference on machine learning (ICML-10), 2010, pp. 807–814.
[36] R. Hecht-Nielsen, “Theory of the backpropagation neural network,” in Proc. of the 1989 International Joint Conference on Neural Networks (IJCNN), 1989, pp. 593–605.
[37] W. Li, R. Zhao, T. Xiao, and X. Wang, “DeepReID: Deep filter pairing neural network for person re-identification,” in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 152–159.
[38] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao, “Optimal distributed online prediction using mini-batches,” Journal of Machine Learning Research, vol. 13, no. 1, pp. 165–202, Jan. 2012.
[39] S. Gottschalk, M. C. Lin, and D. Manocha, “OBBTree: A hierarchical structure for rapid interference detection,” in Proc. of the 23rd Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1996, pp. 171–180.
[40] G. van den Bergen, “Efficient collision detection of complex deformable models using AABB trees,” Journal of Graphics Tools, vol. 2, no. 4, pp. 1–13, Nov. 1997.
[41] S. Ding, M. Mannan, and A. N. Poo, “Oriented bounding box and octree based global interference detection in 5-axis machining of free-form surfaces,” Computer-Aided Design, vol. 36, no. 13, pp. 1281–1294, Nov. 2004.
[42] M. C. Lin, D. Manocha, and J. Cohen, “Collision detection: Algorithms and applications,” in Proc. of 1996 Workshop on the Algorithmic Foundations of Robotics. Citeseer, 1997, pp. 129–142.
[43] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, Nov. 1998.