Author (English): Ming-Jyun Hong
Title (English): Robotics Artistic Colorful Picture Drawing and Painting Using Visual Feedback Control System
Keywords (English): service robotics, entertainment robotics, art robotics, robot vision system, artificial intelligence

In recent decades, with rising living standards, people have begun to pursue higher-level mental enjoyment. The applications of robots are no longer limited to industrial purposes aimed at enhancing production performance in factories. More and more research teams are devoted to the development of modern, highly intelligent service robots. These robots not only provide various kinds of services but also bring considerable recreational value to people. Moreover, some studies go further and break the image of science as stiff: they combine science and art to spark new imagination and amazement.
The objective of this research is to integrate the idea of art into robotics applications and to build a robot that can create artworks with artistic value, just like a human artist. In this research, we present two different styles of art. First, the Robot Artist uses mixed media to draw portraits in a hand-painted style. We emphasize three main features of human portraits: color, silhouette and structure. The input image goes through several image processing steps that extract these features and combine them into an NPR-style image. The Robot Artist then draws the portrait based on this image using color crayons, oil painting brushes, turpentine, markers and technical pens. For the other style, the robot is further endowed with the capability of painting colorful pictures using a visual feedback control system. It mixes a variety of colors from only five basic colors and repeatedly refines its artwork by comparing the current picture with the original image. We analyze the performance of our system and show that the error between the two images decreases effectively with our approach. The deviation of each mixed color is also kept within about 10% of the target color, which is acceptable to human visual perception.
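The five-color mixing and the visual-feedback refinement described above can be sketched as follows. This is a minimal illustrative model only: the base palette, the linear pigment-mixing assumption, and all function names are our own assumptions, not the implementation used in the thesis.

```python
import numpy as np

# Hypothetical palette of five base paints, normalized RGB in [0, 1]
# (the thesis does not specify its exact base colors here).
BASE = np.array([
    [1.00, 1.00, 1.00],  # white
    [0.05, 0.05, 0.05],  # black
    [0.80, 0.10, 0.10],  # red
    [0.90, 0.80, 0.10],  # yellow
    [0.10, 0.20, 0.70],  # blue
])

def mix_ratios(target, step=0.05, iters=2000):
    """Find non-negative mixing ratios whose linear combination of the
    base colors approximates `target` (projected gradient descent on a
    crude linear pigment model)."""
    A = BASE.T                       # 3 x 5 color matrix
    x = np.full(len(BASE), 0.2)      # start from an even mix
    for _ in range(iters):
        grad = A.T @ (A @ x - target)
        x = np.clip(x - step * grad, 0.0, None)  # keep ratios >= 0
    return x

def relative_error(ratios, target):
    """Relative color error of the mix, cf. the ~10% bound reported."""
    mixed = BASE.T @ ratios
    return np.linalg.norm(mixed - target) / np.linalg.norm(target)

def feedback_loop(target_img, capture, repaint, rounds=5, tol=0.1):
    """Visual-feedback idea: photograph the canvas, compare it with the
    target image, and repaint where the difference is largest, until the
    mean difference falls below `tol`."""
    for _ in range(rounds):
        canvas = capture()                            # image of current canvas
        diff = np.abs(canvas - target_img).mean(axis=-1)
        if diff.mean() < tol:
            break
        repaint(diff >= diff.mean())                  # mask of worst regions
    return capture()
```

For example, a mid-gray target can be matched almost exactly by a white/black mix, so `relative_error(mix_ratios(gray), gray)` stays well under the 10% bound in this toy model.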
In this research, the Robot Artist has successfully taken the very first step toward imitating human artists and blurring the line between human and machine creativity. We incorporate artistic techniques and concepts into the robotic field and enable the robot to create artworks that are genuinely admired by people. More importantly, we not only demonstrate the competence of robots in artistic creation, but also provide a scientific perspective on the creation of art, which can help people learn the techniques of painting.

Thesis Committee Certification
Acknowledgements
Abstract (Chinese)
Chapter 1 Introduction
1.1 Service Robotics
1.2 Entertainment Robotics
1.3 Art Robotics
1.4 Non-Photorealistic Rendering
1.5 Thesis Structure
Chapter 2 Manipulator
2.1 Mechanism
2.1.1 D-H Parameters
2.1.2 Transmission and Actuator
2.1.3 Gripper
2.2 Control Architecture
2.3 Online Trajectory Generation
Chapter 3 Colorful Human Portrait Drawing
3.1 Introduction
3.2 Experimental Setup
3.2.1 Scene
3.2.2 Media
3.3 System Procedure
3.4 Image Processing
3.4.1 Face Detection
3.4.2 Contour Extraction
  Canny Edge
  Flow-based Difference-of-Gaussians
  Morphological Operation
3.4.3 Color Segmentation
  Color Segment Clustering (Mean Shift)
  Color Segment Registration
3.4.4 Shade Generation
  Image Binarization
  Shade Refinement
3.4.5 Fusion
3.5 Trajectory Planning
3.5.1 Coordinate Transformation
3.5.2 Drawing Procedure Planning
3.6 Experimental Results and Discussion
Chapter 4 Colorful Picture Painting with Visual Feedback Control System
4.1 Introduction
4.1.1 Human Painting Behavior
4.1.2 Robot Vision System
4.1.3 Underpainting
4.2 Experimental Setup
4.2.1 Scene and Media
4.2.2 Calibration
4.3 System Structure
4.4 Preprocessing
4.5 Underpainting Planning
4.6 Color Mixing and Painting
4.7 Stroke Generation
4.7.1 Difference Computation
4.7.2 Stroke Generation
4.7.3 Clustering, Classifying and Ordering
4.8 Termination
4.9 Experimental Results and Discussion
4.9.1 Color Mixing Analysis
4.9.2 Painting Process Analysis
4.9.3 Artists' Comments
4.9.4 Other Artworks
Chapter 5 Conclusions, Contributions and Future Work

