
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 王祈翔
Author (English): Kai-Siang Ong
Title (Chinese): 應用多感測器之融合於人機互動之人員偵測與追蹤系統
Title (English): Sensor Fusion Based Human Detection and Tracking System for Human-Robot Interaction
Advisor: 傅立成
Advisor (English): Li-Chen Fu
Committee Members: 連豊立, 陳永耀, 簡忠漢
Oral Defense Date: 2012-01-19
Degree: Master's
Institution: National Taiwan University
Department: Graduate Institute of Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document Type: Academic thesis
Year of Publication: 2012
Graduation Academic Year: 100 (ROC calendar)
Language: English
Pages: 83
Keywords (Chinese): 人員行為意圖推論, 感測器融合系統, 協方差交叉, 機器人反應行為決策, 人機互動
Keywords (English): human behavior intention inference, sensor fusion based system, Covariance Intersection (CI), robot reaction decision, human-robot interaction
Statistics:
  • Cited by: 0
  • Views: 257
  • Rating: (none)
  • Downloads: 0
  • Bookmarked: 1
Service robots have received enormous attention with the rapid development of high technology in recent years, and they are endowed with the capability of interacting with people, i.e., performing human-robot interaction (HRI). For this purpose, a Sampling Importance Resampling (SIR) particle filter is adopted to implement a laser- and vision-based human tracking system for HRI in real-world environments. The vision sensor and the laser range finder (LRF) provide the sequence of images and the geometric information of the measurements, respectively.
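To make the tracking step concrete, the following is a minimal SIR particle filter sketch for a single 2-D human position track. It is an illustration only, not the thesis's implementation: the Gaussian random-walk motion model, the single fused position measurement, and every name in it (sir_step, motion_noise, meas_noise) are assumptions made for this example.

```python
import numpy as np

def sir_step(particles, weights, measurement, motion_noise=0.05, meas_noise=0.2):
    """One Sampling Importance Resampling (SIR) iteration for a 2-D position track.

    particles:   (N, 2) array of hypothesized (x, y) positions in meters
    weights:     (N,) normalized importance weights
    measurement: (2,) position observation from a detector (laser or vision)
    """
    n = len(particles)

    # Predict: propagate particles through a Gaussian random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_noise, particles.shape)

    # Update: reweight each particle by the Gaussian likelihood of the measurement.
    dist_sq = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * dist_sq / meas_noise ** 2)
    weights = weights / np.sum(weights)

    # Resample: draw N particles in proportion to their weights, then reset
    # the weights to uniform; this resampling step is what distinguishes SIR.
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]
    weights = np.full(n, 1.0 / n)

    estimate = particles.mean(axis=0)  # posterior mean as the track output
    return particles, weights, estimate
```

In a tracker of the kind outlined in Chapter 2, a step like this would run once per sensor frame, with the measurement supplied by the laser or vision detection stage.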
We construct a sensor fusion based system that integrates the information from both sensors through a data association approach, Covariance Intersection (CI); this fusion is used to increase the robustness and reliability of human tracking in real-world environments. In this thesis, we propose a Behavior System that analyzes human features and classifies behaviors using the crucial information from the sensor fusion based system. This system is used to infer human behavioral intentions, and it allows the robot to interact in a more natural and intelligent way. We apply a spatial model based on proxemics rules to our robot and design a behavioral intention inference strategy. Furthermore, the robot makes the corresponding reaction in accordance with the identified behavioral intention. The thesis concludes with several experimental results obtained with a robot in an indoor environment, in which promising performance has been observed.
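The CI rule itself is standard and is the subject of [30]: when two estimates may be correlated in an unknown way, a consistent fusion takes a convex combination of their inverse covariances. The sketch below is a generic NumPy rendering of that rule under an assumed trace-minimizing choice of the mixing weight; the function name and the grid search are illustrative choices, not taken from the thesis.

```python
import numpy as np

def covariance_intersection(x_a, P_a, x_b, P_b, omega=None):
    """Fuse two estimates with unknown cross-correlation via Covariance Intersection.

    x_a, P_a: mean and covariance of the first estimate (e.g., the laser track)
    x_b, P_b: mean and covariance of the second estimate (e.g., the camera track)
    omega:    mixing weight in [0, 1]; if None, picked to minimize trace(P_c)
    """
    P_a_inv = np.linalg.inv(P_a)
    P_b_inv = np.linalg.inv(P_b)

    if omega is None:
        # Coarse 1-D search for the weight that minimizes the fused trace.
        grid = np.linspace(0.0, 1.0, 101)
        traces = [np.trace(np.linalg.inv(w * P_a_inv + (1.0 - w) * P_b_inv))
                  for w in grid]
        omega = grid[int(np.argmin(traces))]

    # CI fusion equations:
    #   P_c^{-1} = omega * P_a^{-1} + (1 - omega) * P_b^{-1}
    #   x_c      = P_c (omega * P_a^{-1} x_a + (1 - omega) * P_b^{-1} x_b)
    P_c = np.linalg.inv(omega * P_a_inv + (1.0 - omega) * P_b_inv)
    x_c = P_c @ (omega * P_a_inv @ x_a + (1.0 - omega) * P_b_inv @ x_b)
    return x_c, P_c, omega
```

Because the fused result is consistent for any actual correlation between the two inputs, CI suits fusing laser and camera tracks of the same person observed from the same platform, where the error sources are not independent.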

Acknowledgments i
Abstract (Chinese) ii
ABSTRACT iii
CONTENTS iv
LIST OF FIGURES vii
LIST OF TABLES ix
Chapter 1 Introduction 1
1.1 Background and Related Work 3
1.2 System Architecture 5
1.3 Thesis Organization 6
Chapter 2 Preliminaries 7
2.1 Particle Filter 7
2.1.1 Stochastic Filter from Bayesian Perspective 8
2.1.2 Sampling Importance Resampling (SIR) Particle Filter 10
2.2 Laser Based Tracking System 12
2.2.1 Laser Scanning System 12
2.2.2 Human Detection and Tracking 13
2.3 Vision Based Tracking System 16
2.3.1 Camera Perspective Model 17
2.3.2 Human Detection and Tracking 17
Chapter 3 Data Fusion for Human Behavior Intention Inference 21
3.1 Laser Based System 21
3.1.1 Laser Data Modification 22
3.1.2 Relative Angle Evaluation 23
3.2 Vision Based System 24
3.2.1 Multiple Feature Likelihood Evaluation 25
3.2.2 Facing Direction Estimation 28
3.3 Sensor Fusion System 33
3.3.1 Data Fusion via Covariance Intersection 34
3.3.2 Hypotheses Generation of Laser Range Finder 35
3.3.3 Hypotheses Generation of Single Camera 36
3.3.4 Multi-Hypotheses Association 39
3.3.4.1 Hypotheses Generation 39
3.3.4.2 Gating Procedure 40
3.3.4.3 CI based Fusion 41
3.4 Behavior System 43
3.4.1 Human Behavior Intention Inference 43
3.4.1.1 Classification with Individual Information 44
3.4.1.2 Classification with Fusion Information 47
3.4.2 Behavioral Intention Estimation 50
3.4.3 Robot Reaction 55
3.4.3.1 Cognitive Interaction Estimation 56
3.4.3.2 Robot Reaction Decision 57
Chapter 4 Experimental Results 59
4.1 Covariance Intersection based Data Fusion 59
4.2 Facing Direction Estimation 61
4.3 Human Behavior Intention Inference 63
4.4 Cognitive Interaction Estimation 71
4.5 Overall Test (Robot Reaction Decision) 73
Chapter 5 Conclusion 77
REFERENCES 79


[1] T. Kanda, M. Shiomi, Z. Miyashita, H. Ishiguro, and N. Hagita, "A Communication Robot in a Shopping Mall," IEEE Transactions on Robotics, vol. 26, pp. 897-913, 2010.
[2] C. Hu, X. Ma, X. Dai, and K. Qian, "Reliable people tracking approach for mobile robot in indoor environments," Robotics and Computer-Integrated Manufacturing, vol. 26, 2010.
[3] H. Liu and H. He, "A Salient Feature and Scene Semantics based Attention Model for Human Tracking on Mobile Robots," IEEE International Conference on Robotics and Automation, pp. 4545-4552, 2010.
[4] S. Frintrop, A. Konigs, F. Hoeller, and D. Schulz, "A Component-Based Approach to Visual Person Tracking from a Mobile Platform," International Journal of Social Robotics, vol. 2, pp. 53-62, 2010.
[5] J. Brookshire, "Person Following Using Histogram of Oriented Gradients," International Journal of Social Robotics, vol. 2, pp. 137-146, 2010.
[6] L. Brethes, F. Lerasle, P. Danes, and M. Fontmarty, "Particle Filtering Strategies for Data Fusion Dedicated to Visual Tracking for a Mobile Robot," Machine Vision and Applications, vol. 21, pp. 427-448, 2010.
[7] E.A. Topp and H.I. Christensen, "Tracking for Following and Passing Persons," IEEE International Conference on Intelligent Robots and Systems, pp. 2321-2327, 2005.
[8] A. Fod, A. Howard, and M.J. Mataric, "Laser-Based People Tracking," IEEE International Conference on Robotics and Automation, vol. 3, pp. 3024-3029, 2002.
[9] T. Horiuchi, S. Thompson, S. Kagami, and Y. Ehara, "Pedestrian Tracking from a Mobile Robot Using a Laser Range Finder," IEEE International Conference on Systems, Man and Cybernetics, pp. 931-936, 2007.
[10] D. Schulz, W. Burgard, D. Fox, and A.B. Cremers, "People Tracking with a Mobile Robot Using Sample-based Joint Probabilistic Data Association Filters," IEEE International Conference on Robotics and Automation, pp. 3024-3029, 2002.
[11] C.T. Chou, J.Y. Li, and L.C. Fu, "Multi-Robot Cooperation Based Human Tracking System Using Laser Range Finder," IEEE International Conference on Robotics and Automation, pp. 532-537, 2011.
[12] P. Chakravarty and R. Jarvis, "Panoramic Vision and Laser Range Finder Fusion for Multiple Person Tracking," IEEE International Conference on Intelligent Robots and Systems, pp. 2949-2954, 2006.
[13] X. Song, H. Zhao, J. Cui, X. Shao, R. Shibasaki, and H. Zha, "Fusion of Laser and Vision for Multiple Targets Tracking via On-line Learning," IEEE International Conference on Robotics and Automation, pp. 406-411, 2010.
[14] P. Vadakkepat and L. Jing, "Improved Particle Filter in Sensor Fusion for Tracking Randomly Moving Object," IEEE Transactions on Instrumentation and Measurement, vol. 55, pp. 1823-1832, 2006.
[15] N. Bellotto and H. Hu, "Multisensor-Based Human Detection and Tracking for Mobile Service Robots," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 39, no. 1, pp. 167-181, 2009.
[16] R.C. Luo, N. Chang, S. Lin, and S. Wu, "Human Tracking and Following Using Sensor Fusion Approach for Mobile Assistive Companion Robot," IEEE Conference on Industrial Electronics, pp. 2235-2240, 2009.
[17] L. Liao, D. Fox, and H. Kautz, "Location-Based Activity Recognition Using Relational Markov Networks," Proceedings of IJCAI, pp. 773-778, 2005.
[18] M. Finke, K.L. Koay, and K. Dautenhahn, "Hey, I'm over here – How can a robot attract people's attention?," IEEE International Workshop on Robot and Human Interactive Communication, pp. 7-12, 2005.
[19] H. Holzapfel, T. Schaaf, H.K. Ekenel, C. Schaa, and A. Waibel, "A robot learns to know people – First contacts of a robot," Proceedings of the 29th Annual German Conference on Artificial Intelligence, vol. 4314, pp. 302-316, 2006.
[20] A. Durdu, I. Erkmen, A.M. Erkmen, and A. Yilmaz, "Morphing Estimated Human Intention via Human-Robot Interactions," Proceedings of the World Congress on Engineering and Computer Science, 2011.
[21] S. Koo and D. Kwon, "Recognizing Human Intentional Actions from Relative Movement between Human and Robot," IEEE International Symposium on Robot and Human Interactive Communication, pp. 939-944, 2009.
[22] M. Svenstrup, S. Tranberg, H.J. Anderson, and T. Bak, "Pose Estimation and Adaptive Robot Behaviour for Human-Robot Interaction," IEEE International Conference on Robotics and Automation, pp. 3571-3576, 2009.
[23] S. Satake, T. Kanda, D.F. Glas, M. Imai, H. Ishiguro, and N. Hagita, "How to Approach Humans?: Strategies for Social Robots to Initiate Interaction," IEEE International Conference on Human-Robot Interaction, 2009.
[24] Y.S. Cheng, C.M. Huang, and L.C. Fu, "Multiple People Visual Tracking in a Multi-Camera System for Cluttered Environments," IEEE International Conference on Intelligent Robots and Systems, pp. 675-680, 2007.
[25] C.H. Chuang, S.S. Huang, L.C. Fu, and P.Y. Hsiao, "Monocular multi-human detection using Augmented Histograms of Oriented Gradients," IEEE International Conference on Pattern Recognition, pp. 1-4, 2008.
[26] P. Vadakkepat, P. Lim, L.C.D. Silva, L. Jing, and L.L. Ling, "Multimodal Approach to Human-Face Detection and Tracking," IEEE Transactions on Industrial Electronics, vol. 55, no. 3, pp. 1385-1393, 2008.
[27] A. Mittal and N. Paragios, "Motion-based Background Subtraction Using Adaptive Kernel Density Estimation," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 302-309, 2004.
[28] H. Kruppa, M. Castrillon-Santana, and B. Schiele, "Fast and Robust Face Finding via Local Context," Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, 2003.
[29] N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 886-893, 2005.
[30] S. Julier and J.K. Uhlmann, "General Decentralized Data Fusion with Covariance Intersection (CI)," in Handbook of Multisensor Data Fusion (D.L. Hall and J. Llinas, Eds.), CRC Press, 2001.
[31] E.T. Hall, The Hidden Dimension, Doubleday, 1966.
[32] N. Bergstrom, T. Kanda, T. Miyashita, H. Ishiguro, and N. Hagita, "Modeling of Natural Human-Robot Encounters," IEEE International Conference on Intelligent Robots and Systems, pp. 2623-2629, 2008.
[33] W.J. Kuo, S.H. Tseng, J.Y. Yu, and L.C. Fu, "A hybrid approach to RBPF based SLAM with grid mapping enhanced by line matching," IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1523-1528, 2009.
[34] J. Minguez and L. Montano, "Nearness diagram (ND) navigation: collision avoidance in troublesome scenarios," IEEE Transactions on Robotics and Automation, vol. 20, pp. 45-59, 2004.
[35] Y.R. Chen, C.M. Huang, and L.C. Fu, "Visual tracking of human head and arms with a single camera," IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3416-3421, 2010.

