
National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Detailed Record

Author: 洪銘駿
Author (English): Ming-Jyun Hong
Thesis Title: 智慧型視覺回授控制機器人應用於彩色藝術繪畫
Thesis Title (English): Robotics Artistic Colorful Picture Drawing and Painting Using Visual Feedback Control System
Advisor: 羅仁權
Committee Members: 張帆人, 顏炳郎
Oral Defense Date: 2016-07-27
Degree: Master's
Institution: National Taiwan University
Department: Graduate Institute of Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2016
Graduation Academic Year: 104
Language: English
Pages: 83
Keywords (Chinese): 服務型機器人、娛樂型機器人、藝術機器人、機器人視覺系統、人工智慧
Keywords (English): service robotics, entertainment robotics, art robotics, robot vision system, artificial intelligence
Statistics:
  • Cited by: 3
  • Views: 557
  • Downloads: 0
  • Bookmarked: 1
Abstract (translated from the Chinese):
In recent decades, as living standards have risen, people have begun to pursue higher-level spiritual enjoyment, and robot applications are no longer confined to raising output on industrial production lines. More and more research teams devote their efforts to developing modern, highly intelligent service robots, which not only provide the services people need but also bring considerable added entertainment value. Beyond this, some scientists have broken technology's traditionally rigid image by combining technology with art, creating new wonder and imagination.
The theme of this research is to integrate the concepts of artistic painting into a robot so that it can, like a human, create works of artistic value. Two painting styles are presented. The first focuses on human portraits: through the use of different media, the robot creates works with a hand-drawn style. We emphasize the three principal features of a portrait: color, contour, and structure. After the portrait passes through several image-processing stages, these three features are extracted and fused into an image with non-photorealistic rendering (NPR) characteristics. The robot then creates the work from this image using colored crayons, oil-painting brushes, turpentine, markers, and technical pens. The second style further introduces traditional painting techniques: with a visual feedback control system, the robot mixes five basic colors to produce a wide variety of colors. During painting, the robot compares the work with the target and repeatedly compensates and corrects to improve the result. We analyze the system's performance and show that our method effectively makes the work more similar to the original image, while the color-mixing error is kept at about 10%, a relatively acceptable range given the color sensitivity of the human eye.
This research demonstrates the potential of artificial intelligence in painting. Beyond introducing artistic techniques and concepts into robotics to create works genuinely appreciated by the public, it also offers artists a different perspective: we hope that scientific methods can help people learn to paint and inspire new ideas for artistic creation.
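The fusion of the three portrait features described in the abstract (color, contour, and shade/structure) can be sketched as follows. This is a minimal illustration assuming binary contour and shade masks and simple darken/overlay rules; the function and layer names are hypothetical, not the thesis's actual implementation.

```python
import numpy as np

def fuse_npr_layers(color_seg, contour_mask, shade_mask):
    """Combine three extracted portrait features into one NPR-style image.

    color_seg    : (H, W, 3) float array in [0, 1], flat color segments
    contour_mask : (H, W) bool array, True where an outline is drawn
    shade_mask   : (H, W) bool array, True where shading applies
    """
    out = color_seg.copy()
    out[shade_mask] *= 0.5      # darken shaded regions (structure)
    out[contour_mask] = 0.0     # overlay black outlines (contour)
    return np.clip(out, 0.0, 1.0)

# Toy 4x4 example: a uniform orange patch with one outlined pixel
# and one shaded pixel.
color = np.ones((4, 4, 3)) * np.array([0.9, 0.6, 0.2])
contour = np.zeros((4, 4), dtype=bool); contour[0, 0] = True
shade = np.zeros((4, 4), dtype=bool); shade[1, 1] = True
fused = fuse_npr_layers(color, contour, shade)
```

In the thesis pipeline the three input layers would come from the face-detection, contour-extraction, and shade-generation stages; here they are stubbed with hand-built masks so the fusion rule itself is visible.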


In recent decades, with the improvement of living standards, people have started pursuing higher-level mental enjoyment. The applications of robots are no longer limited to industrial purposes aimed at enhancing production performance in factories. More and more research teams are devoted to the development of modern, highly intelligent service robots. These robots not only provide different kinds of services but also bring considerable recreational value to people. Moreover, some researchers have gone further to break the stiff image of science: they try to combine science and art, sparking new imagination and amazement.
The objective of this research is to integrate the idea of art into robotics applications and build a robot that can create artworks with artistic value, just like human artists. We present two different styles of art. First, the Robot Artist uses mixed media to draw portraits in a hand-painted style. We emphasize three main features of human portraits: color, silhouette, and structure. The input image goes through different image-processing steps that extract these features and combine them into an NPR-style image. The Robot Artist then draws the portrait based on this image using color crayons, oil-painting brushes, turpentine, markers, and technical pens. For the other style, the robot is further endowed with the capability of painting colorful pictures with a visual feedback control system. It uses only five basic colors to mix a variety of colors and repeatedly refines its artwork by comparing the current picture with the original image. We analyze the performance of our system and show that the error between the two images decreases effectively with our approach. The mixed colors also stay within roughly 10% of the target color, which is acceptable with respect to human visual perception.
In this research, the Robot Artist has successfully taken the very first step toward imitating human artists and blurring the line between human and machine creativity. We incorporate artistic techniques and concepts into the robotic field and make the robot create artworks that are truly admired by people. More importantly, we not only demonstrate the competence of robots in artistic creation but also provide a scientific perspective on the creation of art, which can help people learn the techniques of painting.
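The visual-feedback refinement described above can be sketched as a simple iterative loop. The proportional correction with a fixed gain is an illustrative simplification of real brush strokes (function names and parameters are assumptions), with the stopping tolerance set to the roughly 10% error reported here.

```python
import numpy as np

def feedback_refine(canvas, target, gain=0.5, max_iters=50, tol=0.10):
    """Repeatedly compare the canvas with the target image and apply a
    corrective pass proportional to the difference, stopping once the
    relative error falls within the tolerance."""
    err = np.linalg.norm(target - canvas) / np.linalg.norm(target)
    iters = 0
    while err > tol and iters < max_iters:
        canvas = canvas + gain * (target - canvas)   # corrective repaint pass
        err = np.linalg.norm(target - canvas) / np.linalg.norm(target)
        iters += 1
    return canvas, err, iters

# Toy example: a blank canvas converging to a uniform gray target.
target = np.full((8, 8, 3), 0.8)
canvas = np.zeros_like(target)
canvas, err, iters = feedback_refine(canvas, target)
```

With this gain the relative error halves on each pass, so in the toy example the loop reaches the 10% tolerance after four corrective passes; the real system instead plans physical strokes from the per-region difference.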


Thesis Committee Certification
Acknowledgements
Chinese Abstract
ABSTRACT
CONTENTS
LIST OF FIGURES
LIST OF TABLES
Chapter 1 Introduction
1.1 Service Robotics
1.2 Entertainment Robotics
1.3 Art Robotics
1.4 Non-Photorealistic Rendering
1.5 Thesis Structure
Chapter 2 Manipulator
2.1 Mechanism
2.1.1 D-H Parameters
2.1.2 Transmission and Actuator
2.1.3 Gripper
2.2 Control Architecture
2.3 Online Trajectory Generation
Chapter 3 Colorful Human Portrait Drawing
3.1 Introduction
3.2 Experimental Setup
3.2.1 Scene
3.2.2 Media
3.3 System Procedure
3.4 Image Processing
3.4.1 Face Detection
3.4.2 Contour Extraction
3.4.2.1 Canny Edge
3.4.2.2 Flow-based Difference-of-Gaussians
3.4.2.3 Morphological Operation
3.4.3 Color Segmentation
3.4.3.1 Color Segment Clustering (Mean Shift)
3.4.3.2 Color Segment Registration
3.4.4 Shade Generation
3.4.4.1 Image Binarization
3.4.4.2 Shade Refinement
3.4.5 Fusion
3.5 Trajectory Planning
3.5.1 Coordinate Transformation
3.5.2 Drawing Procedure Planning
3.6 Experimental Results and Discussion
Chapter 4 Colorful Picture Painting with Visual Feedback Control System
4.1 Introduction
4.1.1 Human Painting Behavior
4.1.2 Robot Vision System
4.1.3 Underpainting
4.2 Experimental Setup
4.2.1 Scene and Media
4.2.2 Calibration
4.3 System Structure
4.4 Preprocessing
4.5 Underpainting Planning
4.6 Color Mixing and Painting
4.7 Stroke Generation
4.7.1 Difference Computation
4.7.2 Stroke Generation
4.7.3 Clustering, Classifying and Ordering
4.8 Termination
4.9 Experimental Results and Discussion
4.9.1 Color Mixing Analysis
4.9.2 Painting Process Analysis
4.9.3 Artists' Comments
4.9.4 Other Artworks
Chapter 5 Conclusions, Contributions and Future Works
REFERENCES
VITA

