Author: 汪奎仲
Title: 以體感偵測技術作為實體代理人操控命令輸入之研究
Title (English): Human Gesture Based Physical Agent Motion Control Using Body Sensing Technology
Advisor: 林志敏
Committee members: 留忠賢, 劉立頌
Oral defense date: 2014-07-17
Degree: Master's
Institution: 逢甲大學 (Feng Chia University)
Department: 資訊工程學系 (Information Engineering)
Discipline: Engineering
Field: Electrical and Information Engineering
Thesis type: Academic thesis
Publication year: 2014
Graduation academic year: 102 (ROC calendar)
Language: Chinese
Pages: 49
Keywords (Chinese): 實體代理人、機器人、體感偵測技術、Kinect、機器人操控程式介面
Keywords (English): Physical Agent, Robot, Motion Sensing, Kinect, Robot Control API
Abstract: This study proposes a motion-sensing command input interface for controlling a physical agent (robot). Using the Kinect skeleton-tracking function to record body and limb movement data, a user with this system and a Kinect for Windows device can easily translate human body motions into robot control commands or programs. These commands form a robot control Application Programming Interface (API) that other programs can call directly to drive the robot through various motions, relieving programmers of the tedious, low-level robot control coding otherwise required whenever robot motions are needed. In addition, this study presents a graphical robot motion editing tool; combined with the speech recognition capability built into the Kinect, it lets both programmers and end users easily design the robot performance motions they want.
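
The central idea above, turning tracked skeleton postures into named robot motions behind an API that other programs can call, can be illustrated with a short sketch. The Python code below is a hypothetical illustration only: the joint names, the 0.05 threshold, the RobotMotionAPI class, and the wave_right_arm command are assumptions made for exposition and do not reproduce the thesis implementation, which (per the table of contents) builds on the Kinect for Windows SDK and an RDSL-based motion description.

    # A minimal, hypothetical sketch of the posture-to-command mapping described
    # in the abstract. Joint names, the threshold, RobotMotionAPI, and the
    # "wave_right_arm" command are illustrative assumptions, not the thesis code.
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class Joint:
        x: float
        y: float
        z: float

    Skeleton = Dict[str, Joint]  # one tracked frame, e.g. {"head": Joint(...), ...}

    def right_hand_raised(s: Skeleton, threshold: float = 0.05) -> bool:
        """Rule-based posture test: right hand held above the head.
        The threshold absorbs small tracking errors, in the spirit of the
        error-threshold idea listed in Section 3.1.2 of the table of contents."""
        return s["right_hand"].y > s["head"].y + threshold

    class RobotMotionAPI:
        """Maps recognized postures to named robot motions callable by other programs."""

        def __init__(self, send_command: Callable[[str], None]) -> None:
            self._send = send_command  # stands in for the real robot link
            self._bindings: List[Tuple[Callable[[Skeleton], bool], str]] = []

        def bind(self, posture_test: Callable[[Skeleton], bool], motion_name: str) -> None:
            """Register a posture test together with the robot motion it should trigger."""
            self._bindings.append((posture_test, motion_name))

        def on_skeleton_frame(self, skeleton: Skeleton) -> None:
            """Called once per tracked frame; sends the first motion whose posture matches."""
            for test, motion in self._bindings:
                if test(skeleton):
                    self._send(motion)
                    break

    # Usage sketch: bind one posture to one motion and feed the API a single frame.
    api = RobotMotionAPI(send_command=print)  # print replaces the actual robot controller
    api.bind(right_hand_raised, "wave_right_arm")
    api.on_skeleton_frame({"head": Joint(0.0, 1.6, 2.0), "right_hand": Joint(0.3, 1.8, 2.0)})

The first-match dispatch and the fixed threshold stand in for the error-threshold and motion-timing mechanisms outlined in Sections 3.1.2 and 3.3.2 of the thesis; a faithful implementation would also need to debounce repeated detections over time.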
Table of Contents
Acknowledgments
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
1.1  Research Motivation
1.2  Research Objectives
1.3  Thesis Organization
Chapter 2  Literature Review
2.1  Human Body Recognition with Conventional Cameras
2.2  Human Body Recognition with the Kinect
2.3  The Kinect Device and Application Development
2.4  Related Work on Robot Motion
2.4.1  Robot Motion Design
2.4.2  RDSL and Scenario Description Languages
Chapter 3  Methodology
3.1  User Posture Design
3.1.1  Posture Detection
3.1.2  Error Threshold Definition
3.1.3  Basic Principles of Euclidean Geometry
3.2  Robot Motion Design
3.3  Combining User Motions with Robot Motions
3.3.1  Applying and Integrating the RDSL Framework
3.3.2  Motion Timing Mechanism
3.3.3  Motion Timing and False-Positive Elimination
3.3.4  Complex Posture Design and Recognition Issues
3.3.4.1  Judging and Recognizing Complex Postures
3.3.4.2  Complex Postures and Composite Motions
Chapter 4  System Implementation
4.1  System Architecture
4.2  Software Application Layer
4.2.1  Motion Editing System (Offline Mode)
4.2.1.1  Operating the User Motion Editing System
4.2.1.2  User Posture Training Mode
4.3  Middleware Layer
4.4  Robot Control Side
4.5  Chatroom Application of the Interactive Motion Editing System (Online Mode)
4.5.1  Real-Time Online Interaction Message Display Block
4.5.2  Real-Time Body Motion Message Display Block
4.5.3  Controller
4.5.4  Motion Management Block
4.5.5  Robot Connection Information Block
Chapter 5  Conclusions and Future Work
5.1  Conclusions
5.2  Future Work
References