National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Author: Yuan-Hao Jheng (鄭元豪)
Thesis title: Feature Reconstruction and Deformation of 3D Medical Protectors Based on Deep Learning (基於深度學習之3D醫療護具特徵再構建與變形)
Advisor: Wen-June Wang (王文俊)
Degree: Master
Institution: National Central University
Department: Department of Electrical Engineering
Discipline: Engineering
Field of study: Electrical and Computer Engineering
Thesis type: Academic thesis
Publication year: 2019
Graduation academic year: 107 (ROC calendar, i.e., 2018–19)
Language: Chinese
Pages: 67
Keywords (Chinese): 再構建與變形、3D醫療護具、深度學習、點雲、自編碼網路
Keywords (English): reconstruction and deformation; 3D medical protector; deep learning; point cloud; AutoEncoder
This thesis designs a deep-learning network architecture to reconstruct and deform 3D medical protectors, building a corresponding protector for each of three conditions: for the hand, De Quervain's tenosynovitis ("mommy thumb") and carpal tunnel syndrome; for the foot, corrective insoles. At present, 3D medical protectors are drawn manually to fit each patient's differently sized hands and feet, which costs considerable time and labor. This thesis therefore trains an AutoEncoder network to construct a 3D medical protector that automatically matches the size of the input data, eliminating the intermediate manual drawing and making protector production accurate and efficient.
We use 3D scans of our own hands and feet as training data, and manually draw the corresponding 3D medical protectors as the training ground truth. Points are then sampled uniformly over the surfaces, so that both the training data and the ground truth enter the AutoEncoder as point clouds. During encoding and decoding, the network learns the principal features of the intermediate latent code; as training proceeds, the decoder's reconstructions grow closer and closer to the ground truth, and the trained weights are saved on completion. We then scale and rotate the 3D scans of our hands and feet to form test data, which is likewise fed into the trained AutoEncoder as point clouds. Using the trained weights, the network reconstructs and deforms a 3D medical protector matching the size of the test data, output in point-cloud form. To evaluate the quality of the reconstructed output, we validate it with two metrics, MMD-CD and JSD. Finally, the point-cloud protector is converted back into a surface mesh and printed with a 3D printer.
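The uniform surface-sampling step described above (turning scanned meshes and hand-drawn protectors into fixed-size point clouds) can be sketched as follows. This is a minimal NumPy illustration with hypothetical names, not the thesis's actual code: triangles are chosen with probability proportional to their area, then a point is drawn uniformly inside each chosen triangle via barycentric coordinates.

```python
import numpy as np

def sample_surface(vertices, faces, n_points=2048, seed=0):
    """Uniformly sample n_points on a triangle mesh surface.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    A sketch of area-weighted surface sampling; names are illustrative.
    """
    rng = np.random.default_rng(seed)
    tris = vertices[faces]                                   # (F, 3, 3)
    # Triangle areas = half the norm of the cross product of two edges.
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    # Pick triangles proportionally to area so sampling is uniform per unit area.
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates: reflect (u, v) back into the triangle.
    u, v = rng.random(n_points), rng.random(n_points)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tris[idx]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) \
                   + v[:, None] * (t[:, 2] - t[:, 0])
```

Weighting by area matters: without it, densely triangulated regions of the scan would receive disproportionately many points regardless of their actual surface size.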
The purpose of this dissertation is to design a deep-learning network architecture that reconstructs and deforms 3D medical protectors. Three types of protector are targeted: splints for de Quervain syndrome and for carpal tunnel syndrome on the hand, and corrective insoles for the foot. Traditionally, designers draw each protector manually for every patient, which takes considerable time. We therefore train an AutoEncoder network that automatically reconstructs a 3D medical protector matching the size of the input data, reducing the cost in time and labor while producing protectors accurately and efficiently.
First, we collect 3D scans of the author's hands and feet as training data; the corresponding protectors are then drawn manually and used as the training ground truth. Points are sampled uniformly from both the training data and the ground truth, and the resulting point clouds are fed into the AutoEncoder. The network learns the main features of the latent code during encoding and decoding; as training proceeds, the decoder's reconstructions grow closer and closer to the ground truth, and the trained weights are saved on completion. We then scale and rotate the 3D scans of the hands and feet to create test data and feed them to the trained AutoEncoder, which reconstructs a 3D medical protector matching the size of the test data. To evaluate the experimental results quantitatively, we apply the MMD-CD and JSD metrics. Finally, a suitable 3D medical protector is printed with a 3D printer.
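As a rough illustration of the MMD-CD metric mentioned above (Minimum Matching Distance with the Chamfer distance as the underlying measure), the following NumPy sketch computes a symmetric Chamfer distance between two clouds and the matching average over a reference set. It is an assumption-laden sketch, not the thesis's implementation, and uses a brute-force pairwise distance matrix that only suits small clouds.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3):
    mean nearest-neighbor squared distance in each direction, summed."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)   # (N, M) squared dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def mmd_cd(generated, references):
    """MMD-CD: for each reference cloud, the Chamfer distance to its closest
    generated cloud, averaged over the reference set (lower is better)."""
    return float(np.mean([min(chamfer_distance(g, r) for g in generated)
                          for r in references]))
```

A low MMD-CD thus means every ground-truth protector shape has at least one close match among the network's reconstructions.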
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
List of Figures
List of Tables
Chapter 1  Introduction
  1.1 Research Motivation and Background
  1.2 Literature Review
  1.3 Research Objectives
  1.4 Thesis Organization
Chapter 2  System Architecture, Hardware, and Software
  2.1 System Architecture
  2.2 Hardware
  2.3 Software
Chapter 3  Main Method and the AutoEncoder Network
  3.1 3D Data Preprocessing
    3.1.1 Noise Removal
    3.1.2 Surface and Edge Smoothing
    3.1.3 Generating Data of Different Sizes and Angles
    3.1.4 Uniform Surface Sampling of 3D Data
  3.2 Deep-Learning Reconstruction and Deformation of 3D Medical Protectors
  3.3 Properties of Point-Cloud Data
    3.3.1 Max Pooling
  3.4 The AutoEncoder Network
    3.4.1 Network Architecture
    3.4.2 Metric Learning Loss
    3.4.3 Minimum Matching Distance
Chapter 4  Data Training and Testing
  4.1 Training Data for the AutoEncoder Network
  4.2 Testing Data for the AutoEncoder Network
Chapter 5  Experimental Results
  5.1 AutoEncoder Reconstruction Test
    5.1.1 Test Results
  5.2 Reconstruction and Deformation of 3D Medical Protectors
    5.2.1 Reconstruction and Deformation Results
Chapter 6  Conclusion and Future Work
  6.1 Conclusion
  6.2 Future Work
References
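The other validation metric named in the abstracts, JSD (Jensen–Shannon divergence), compares discrete distributions derived from point clouds, such as voxel-occupancy histograms of the reconstructed versus ground-truth protectors. A minimal sketch, assuming the histograms have already been computed (illustrative only):

```python
import numpy as np

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base-2 logs, so the result lies in [0, 1])
    between two non-negative histograms p and q of equal length."""
    p = p / p.sum()                      # normalize to probability vectors
    q = q / q.sum()
    m = 0.5 * (p + q)                    # the mixture distribution

    def kl(x, y):
        mask = x > 0                     # 0 * log(0) is defined as 0
        return float(np.sum(x[mask] * np.log2(x[mask] / (y[mask] + eps))))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Unlike Chamfer-based metrics, JSD compares overall occupancy statistics rather than matching individual points, so the two metrics complement each other when scoring reconstructions.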