National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Detailed Record

Graduate Student: WU, ZONG-MOU (吳宗謀)
Thesis Title (Chinese): 探討連續線繪3D錯視素描技術於機械手臂應用
Thesis Title (English): Exploring the Application of Uninterrupted Line Drawing Techniques in 3D Optical Illusion Sketching Techniques for Robotic Arms
Advisor: CHANG, RONG-GUEY (張榮貴)
Committee Members: CHANG, RONG-GUEY; HSUEH, YU-LING; CHEN, PENG-SHENG; CHEN, SHI-HUANG
Oral Defense Date: 2023-07-18
Degree: Master's
Institution: National Chung Cheng University
Department: Institute of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document Type: Academic thesis
Year of Publication: 2023
Graduation Academic Year: 111 (2022–2023)
Language: Chinese
Number of Pages: 52
Keywords: Robotic Arm Art; Inverse Perspective Mapping; K-means Clustering; Object Detection
Record Statistics:
  • Cited by: 0
  • Views: 57
  • Downloads: 0
  • Bookmarked: 0
Table of Contents
Acknowledgments I
Abstract (Chinese) II
Abstract III
List of Figures VI
List of Tables VIII
Chapter 1: Introduction 1
Section 1: Robotic Arms 1
Section 2: 3D Optical Illusion 4
Section 3: Research Motivation 5
Section 4: Thesis Organization 6
Chapter 2: Literature Review 7
Section 1: Robotic Arms in Artistic Applications 7
Section 2: Inverse Perspective Mapping 10
Section 3: Line Art 12
Section 4: Object Detection 13
Section 5: Image Processing 15
(1) CLAHE (Contrast Limited Adaptive Histogram Equalization) 15
(2) U2-Net 16
(3) Sobel Edge Detection Algorithm 18
Section 6: K-means Clustering Algorithm 19
Chapter 3: Research Methods 20
Section 1: System Architecture 20
Section 2: Image Preprocessing 21
(1) Image Shadow Generation 22
(2) Image Inverse Perspective Transformation 22
Section 3: Point Extraction 24
(1) Edge Point Extraction Algorithm 24
(2) Contour Point Extraction 25
(3) Facial Point Extraction 25
(4) Shadow Point Extraction 26
(5) Eyeglasses Point Extraction 28
Section 4: Point Path Planning 30
(1) Path Planning 30
(2) Point Scoring Mechanism 31
(3) Line Segment Drawing 32
Section 5: Epson C4-A601S Robotic Arm Setup 33
Chapter 4: Experimental Environment and Results 35
Section 1: Experimental Environment 35
Section 2: Experimental Results 36
Section 3: Comparison and Validation of Results 38
(1) Experimental Comparison 38
(2) Validation of Results 41
Chapter 5: Conclusion 43
References 44
[1] "Fourth Industrial Revolution," [Online]. Available: https://zh.wikipedia.org/zh-tw/File:Industry_4.0.png.
[2] "IFR, International Federation of Robotics," [Online]. Available: https://ifr.org/img/worldrobotics/Executive_Summary_WR_Industrial_Robots_2022.pdf.
[3] "HAROLD COHEN AND AARON—A 40-YEAR COLLABORATION," 23 August 2016. [Online]. Available: https://computerhistory.org/blog/harold-cohen-and-aaron-a-40-year-collaboration/.
[4] "National Taiwan Museum of Fine Arts, 'The Big Picture' exhibition," [Online]. Available: https://event.culture.tw/mocweb/reg/NTMOFA/Detail.init.ctr?actId=90005&utm_medium=query.
[5] "3DSportSigns," [Online]. Available: https://3dsportsigns.com/.
[6] 林祐聖, "One-stroke sketch drawing technique using a robotic arm" [Master's thesis, National Chung Cheng University]. National Digital Library of Theses and Dissertations in Taiwan. [Online]. Available: https://hdl.handle.net/11296/mf4fpf.
[7] Gao, Q., Chen, H., Yu, R., Yang, J., & Duan, X. (2019, February), "A robot portraits pencil sketching algorithm based on face component and texture segmentation," In 2019 IEEE International Conference on Industrial Technology (ICIT) (pp. 48-53). IEEE.
[8] Gao, F., Zhu, J., Yu, Z., Li, P., & Wang, T. (2020, October), "Making robots draw a vivid portrait in two minutes," In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 9585-9591). IEEE.
[9] Huang, X., & Belongie, S. (2017), "Arbitrary style transfer in real-time with adaptive instance normalization," In Proceedings of the IEEE International Conference on Computer Vision (pp. 1501-1510).
[10] Yi, R., Liu, Y. J., Lai, Y. K., & Rosin, P. L. (2019), "APDrawingGAN: Generating artistic portrait drawings from face photos with hierarchical GANs," In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10743-10752).
[11] Luo, R. C., Hong, M. J., & Chung, P. C. (2016, October), "Robot artist for colorful picture painting with visual control system," In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 2998-3003). IEEE.
[12] Mallot, H. A., Bülthoff, H. H., Little, J. J., & Bohrer, S. (1991), "Inverse perspective mapping simplifies optical flow computation and obstacle detection," Biological Cybernetics, 64(3), 177-185.
[13] Rezaei, M., & Azarmi, M. (2020), "DeepSOCIAL: Social distancing monitoring and infection risk assessment in COVID-19 pandemic," Applied Sciences, 10(21), 7514.
[14] Muad, A. M., Hussain, A., Samad, S. A., Mustaffa, M. M., & Majlis, B. Y. (2004, November), "Implementation of inverse perspective mapping algorithm for the development of an automatic lane tracking system," In 2004 IEEE Region 10 Conference TENCON 2004 (pp. 207-210). IEEE.
[15] P. Vrellis, "A new way to knit (2016)," [Online]. Available: http://artof01.com/vrellis/works/knit.html.
[16] C. Siegel, "GitHub," [Online]. Available: https://github.com/christiansiegel.
[17] MaloDrougard, "GitHub," [Online]. Available: https://github.com/MaloDrougard/knit.
[18] Papageorgiou, C. P., Oren, M., & Poggio, T. (1998, January), "A general framework for object detection," In Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271) (pp. 555-562). IEEE.
[19] Dalal, N., & Triggs, B. (2005, June), "Histograms of oriented gradients for human detection," In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) (Vol. 1, pp. 886-893). IEEE.
[20] Lowe, D. G. (1999, September), "Object recognition from local scale-invariant features," In Proceedings of the Seventh IEEE International Conference on Computer Vision (Vol. 2, pp. 1150-1157). IEEE.
[21] He, K., Gkioxari, G., Dollár, P., & Girshick, R. (2017), "Mask R-CNN," In Proceedings of the IEEE International Conference on Computer Vision (pp. 2961-2969).
[22] Girshick, R. (2015), "Fast R-CNN," In Proceedings of the IEEE International Conference on Computer Vision (pp. 1440-1448).
[23] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016), "You only look once: Unified, real-time object detection," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 779-788).
[24] Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016), "SSD: Single shot multibox detector," In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I 14 (pp. 21-37). Springer International Publishing.
[25] O. C. Andreu, "ResearchGate," [Online]. Available: https://www.researchgate.net/figure/Two-stage-vs-one-stage-object-detection-models_fig3_353284602.
[26] Liu, W., Ren, G., Yu, R., Guo, S., Zhu, J., & Zhang, L. (2022, June), "Image-adaptive YOLO for object detection in adverse weather conditions," In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 2, pp. 1792-1800).
[27] Zuiderveld, K. (1994), "Contrast limited adaptive histogram equalization," Graphics Gems, 474-485.
[28] "MathWorks," MathWorks, [Online]. Available: https://www.mathworks.com/help/visionhdl/ug/contrast-adaptive-histogram-equalization.html.
[29] Qin, X., Zhang, Z., Huang, C., Dehghan, M., Zaiane, O. R., & Jagersand, M. (2020), "U2-Net: Going deeper with nested U-structure for salient object detection," Pattern Recognition, 106, 107404.
[30] Sobel, I., & Feldman, G. (1968), "A 3x3 isotropic gradient operator for image processing," a talk at the Stanford Artificial Project, 271-272.
[31] Hartigan, J. A., & Wong, M. A. (1979), "Algorithm AS 136: A k-means clustering algorithm," Journal of the Royal Statistical Society, Series C (Applied Statistics), 28(1), 100-108.
[32] Redmon, J., & Farhadi, A. (2017), "YOLO9000: Better, faster, stronger," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 7263-7271).
[33] Topal, C., & Akinlar, C. (2012), "Edge drawing: A combined real-time edge and segment detector," Journal of Visual Communication and Image Representation, 23(6), 862-872.
[34] Dlib, "Dlib," [Online]. Available: http://dlib.net/face_landmark_detection.py.html.
[35] B. Helm, "Kaggle," [Online]. Available: https://www.kaggle.com/datasets/bradhelm/facesspring2020.

Electronic Full Text (public internet release date: 2028-07-27)