臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

詳目顯示 (Detailed Record)
Author: 邱奕宏 (Yi-Hung Chiu)
Title (Chinese): 以深度學習與4D列印反向設計3D複雜曲面──以人臉面具為例
Title (English): Inverse design of Complex 3D Gridshell by Deep Learning and 4D Printing: A Case Study of Face Mask Design
Advisor: 莊嘉揚 (Jia-Yang Juang)
Committee members: 劉益宏 (Yi-Hung Liu), 陳俊杉 (Chuin-Shan Chen), 蔡佳霖 (Jia-Lin Tsai)
Oral defense date: 2022-01-25
Degree: Master's
Institution: 國立臺灣大學 (National Taiwan University)
Department: 機械工程學研究所 (Graduate Institute of Mechanical Engineering)
Discipline: Engineering
Field: Mechanical Engineering
Thesis type: Academic thesis
Publication year: 2022
Academic year of graduation: 110 (AY 2021–2022)
Language: Chinese
Pages: 90
Keywords (Chinese): 4D列印、形狀記憶聚合物、形狀變形、反向設計、深度學習、FCN
Keywords (English): 4D printing; Shape memory polymer; Shape morphing; Inverse design; Deep learning; Fully Convolutional Network (FCN)
DOI: 10.6342/NTU202200214
Usage statistics: Cited: 0; Views: 171; Downloads: 38; Bookmarked: 0
Abstract (translated from the Chinese):
4D printing builds on 3D printing technology: through the shape memory effect, printed objects can deform again when subjected to external stimuli such as heat or light. Its advantage is that hollow or overhanging grid structures can be printed with far less support material, which greatly accelerates fabrication. Previous work has used planar grids made of shape memory polymers to fabricate 3D gridshells via 4D printing, but because the morphing mechanism is highly nonlinear and neighboring grid segments are strongly coupled, the inverse design process is very difficult. This study therefore explores the design space of planar grids made of shape memory polymer, with the goal of automating the inverse design process with deep learning. The 4D printing mechanism relies on the pre-stress stored when printing SMP55 with a fused deposition modeling 3D printer; combined with PLA, this forms a bilayer structure that bends when heated, yielding four possible unit-cell configurations in the planar grid design space. We first performed manual inverse design by trial and error, using finite element analysis and drawing software to inverse design three Japanese Noh masks, which verified the diversity of this design space. For the deep-learning-based inverse design, the planar grid designs of face masks were parameterized with polynomials to generate a large number of random face masks, and finite element simulations produced the corresponding deformed shapes as the training dataset. For the model architecture we chose a fully convolutional network (FCN), commonly used for image segmentation, which generates a planar grid design from a depth image of the target shape. On the test set, the face masks generated by the FCN achieve over 0.95 pixel accuracy and 0.9 mean intersection over union (IoU), and the depth images of the deformed shapes of the generated grid designs also reach a structural similarity of about 0.9 and a mean squared error of about 7.5. Although the model's inverse design results on out-of-distribution targets such as the Noh masks are not ideal, they are sufficient to demonstrate the feasibility of this approach. Using the Noh masks as examples, we also improved the face mask fabrication process with hot-water immersion experiments and plaster molding; the results not only validate the finite element simulations but also yield masks that closely resemble the original Noh masks.
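The abstract above reports the agreement between the simulated and target shapes in terms of the structural similarity and mean squared error of their depth images. The sketch below is a minimal illustration, not the thesis's actual evaluation pipeline, of how such a comparison could be computed with scikit-image; the image size and the 8-bit depth range are assumptions.

```python
# Minimal sketch (not the thesis code): score a deformed gridshell against its
# target by comparing their depth images with SSIM and mean squared error,
# the two image metrics quoted in the abstract.
import numpy as np
from skimage.metrics import structural_similarity, mean_squared_error


def compare_depth_images(simulated: np.ndarray, target: np.ndarray):
    """Return (SSIM, MSE) between two single-channel depth images in [0, 255]."""
    ssim = structural_similarity(simulated, target, data_range=255.0)
    mse = mean_squared_error(simulated, target)
    return ssim, mse


if __name__ == "__main__":
    # Stand-in data; in practice these would be depth maps exported from the
    # FEM simulation of a generated grid design and from the target 3D face.
    rng = np.random.default_rng(0)
    target = rng.uniform(0, 255, size=(128, 128))
    simulated = np.clip(target + rng.normal(0, 5, size=(128, 128)), 0, 255)
    print(compare_depth_images(simulated, target))
```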
Abstract (English):
4D printing is a technology built upon 3D printing. By exploiting the shape memory effect, 3D-printed objects can deform again in response to external stimuli such as heat and light. This can greatly reduce the support material and printing time required for structures such as hanging or hollow ones. However, inverse design of this process is very difficult because of the nonlinearity of the morphing mechanism and the coupling between neighboring parts. In this study, we use deep learning to overcome this difficulty and automate the inverse design process. Specifically, we study a 2D grid design space that deforms into a 3D gridshell through a 4D printing process. The 2D grid is composed of rectangularly arranged double-layered segments. Each layer is made of either shape memory polymer (SMP55) or PLA, resulting in four material combinations per segment. The size and material combination of each segment are specified to control both the global and local curvatures of the deformed gridshell, which allows a variety of complex structures to be achieved. Three traditional Japanese Noh masks are chosen as target shapes because each mask has unique aesthetic features, making Noh masks an ideal model system. We use parametric polynomial functions to describe facial features and generate random mask designs. Combined with the deformed shapes simulated by FEM software, this yields a dataset of 60k samples. We use a fully convolutional network (FCN), an architecture typically used for image segmentation, to inverse design 2D grids from depth images of the desired shapes. The trained FCN predicts 2D grid designs with over 0.95 pixel accuracy and 0.9 mean IoU. Moreover, measured by structural similarity, the 3D gridshells deformed from FCN-generated designs match the target gridshells with an average similarity of 0.9. Although the model is limited to the distribution of the training dataset and performs poorly on Noh masks, it is still a successful proof of concept that deep learning can be applied to the inverse design of 2D grid designs for 4D printing.
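To make the inverse-design pipeline described above concrete, the sketch below shows, under assumed input sizes and channel widths, how an FCN-style encoder-decoder could map a depth image of the target shape to a per-cell label over the four SMP55/PLA layer combinations, together with the pixel accuracy and mean IoU metrics quoted in the abstract. It is a minimal PyTorch illustration, not the network architecture or hyperparameters used in the thesis.

```python
# Minimal sketch (not the thesis code): an FCN-style encoder-decoder mapping a
# 1-channel depth image of the target gridshell to a 4-class per-cell label map,
# one class per SMP55/PLA layer combination. The 128x128 input, channel widths,
# and the 32x32 grid resolution are illustrative assumptions.
import torch
import torch.nn as nn


class GridFCN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        # Encoder: downsample the depth image and extract features.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),    # 128 -> 64
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        # Decoder: upsample to the grid resolution and predict class scores.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.Conv2d(64, n_classes, 1),  # per-cell logits over the 4 material combinations
        )

    def forward(self, depth):                      # depth: (N, 1, 128, 128)
        return self.decoder(self.encoder(depth))   # logits: (N, 4, 32, 32)


def pixel_accuracy(pred, target):
    """Fraction of grid cells whose predicted class matches the target."""
    return (pred == target).float().mean().item()


def mean_iou(pred, target, n_classes: int = 4):
    """Mean intersection over union across the material classes."""
    ious = []
    for c in range(n_classes):
        inter = ((pred == c) & (target == c)).sum().item()
        union = ((pred == c) | (target == c)).sum().item()
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)


if __name__ == "__main__":
    model = GridFCN()
    depth = torch.rand(8, 1, 128, 128)           # batch of synthetic depth images
    target = torch.randint(0, 4, (8, 32, 32))    # ground-truth grid designs
    logits = model(depth)
    loss = nn.CrossEntropyLoss()(logits, target)  # per-cell classification objective
    pred = logits.argmax(dim=1)
    print(loss.item(), pixel_accuracy(pred, target), mean_iou(pred, target))
```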
Acknowledgements I
Chinese Abstract II
Abstract III
Table of Contents V
List of Tables VIII
List of Figures IX
Chapter 1 Introduction 1
1.1 Motivation and Objectives 1
1.2 Thesis Organization 3
Chapter 2 Theory and Literature Review 4
2.1 Shape Memory Materials 4
2.1.1 Shape Memory Effect 4
2.1.2 Classification of Shape Memory Materials 6
2.1.3 4D Printing 8
2.2 Face Models 10
2.2.1 Characteristics of Face Models and Masks 10
2.2.2 Representation of 3D Face Models 13
2.3 Deep Learning 15
2.3.1 Neural Network Components and Neuron Types 15
2.3.2 Training Methods for Deep Learning Models 18
2.3.3 Fully Convolutional Networks 20
2.4 Review of Previous Work in Our Laboratory 21
Chapter 3 Experimental Procedure, Tools, and Materials 23
3.1 Research Framework and Overview 23
3.2 3D Printing 24
3.2.1 3D Printing Procedure 24
3.2.2 SMP55 (Shape Memory Polymer 55) 26
3.2.3 PLA 28
3.3 Planar Grids 29
3.3.1 Grid Cell Specifications and Printing Parameters 29
3.4 Mask Fabrication 32
3.4.1 3D Printing of Planar Grids 32
3.4.2 Hot-Water Immersion Experiments 33
3.4.3 Mask Molding 34
3.5 Finite Element Simulation 38
3.5.1 Unit Cell Simulation 38
3.5.2 Face Mask Grid Simulation 41
3.6 Deep Learning Dataset 44
3.6.1 Parametric Design: Eyes and Eyebrows 45
3.6.2 Parametric Design: Nose 47
3.6.3 Parametric Design: Mouth and Philtrum 48
3.6.4 Parametric Design: Contour and Global Curvature 49
3.6.5 Summary of Parametric Design 50
3.6.6 Ansys Workbench API 52
3.6.7 Simulation Data Processing 53
3.7 Deep Learning 55
3.8 Manual Design Method 57
Chapter 4 Results and Discussion 58
4.1 Deep Learning Training 58
4.1.1 Basic Settings 58
4.1.2 Hyperparameter Optimization 59
4.1.3 Training Process Analysis 63
4.2 Mask Fabrication 67
4.3 Inverse Design of Random Grid Masks 70
4.4 Inverse Design of Noh Masks 76
Chapter 5 Conclusions and Future Work 82
5.1 Conclusions 82
5.2 Future Work 83
References 84