
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: Nurani Lathifah
Author (English): NURANI LATHIFAH
Title (Chinese): 增進協作機器人效率的行為分析
Title (English): Behavior Analysis for Increasing Human-Robot Collaboration Efficiency
Advisor: 林顯易 (LIN, HSIEN-I)
Committee Members: 張以全 (CHANG, PETER I-TSYUEN); 蕭俊祥 (SHAW, JIN-SIANG); 李俊賢 (LEE, JIN-SHYAN); 林顯易 (LIN, HSIEN-I)
Oral Defense Date: 2022-07-18
Degree: Master's
Institution: National Taipei University of Technology
Department: International Graduate Program in Mechanical and Automation Engineering, College of Mechanical and Electrical Engineering
Discipline: Engineering
Field: Mechanical Engineering
Thesis Type: Academic thesis
Publication Year: 2022
Graduation Academic Year: 110 (2021–2022)
Language: English
Pages: 71
Keywords: Collaboration Robot; Nonverbal Behavior; Behavior Recognition
Metrics:
  • Cited by: 0
  • Views: 111
  • Rating:
  • Downloads: 14
  • Bookmarked: 1
This research presents a behavior analysis for increasing human-robot collaboration efficiency in an assembly task. The study was inspired by previous research in which a set of operator intentions during assembly is translated into an intention graph used to formulate a partially observable Markov decision process (POMDP) for planning robot actions under operator-intention ambiguity and perception uncertainty. This work improves on that approach by analyzing human behavior in terms of fatigue and adaptation ability. In addition, the interaction scheme is changed from cooperation to collaboration, in which the robot and operator work in parallel rather than sequentially. The proposed method was tested on a chair-assembly task. Compared with the previous work, it increases the effectiveness of the assembly process by shortening the assembly duration: in an experiment assembling fifty chairs, the proposed method was 224 seconds faster than the previous one.
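The planning component the abstract describes rests on a POMDP, whose core inference step is a Bayesian belief update over the operator's hidden intention after each observation. The following minimal sketch illustrates that update only; the intention states, actions, and probability values are hypothetical and not taken from the thesis.

```python
import numpy as np

def belief_update(belief, action, observation, T, O):
    """Bayes-filter belief update: b'(s') ∝ O[a][s'][o] * Σ_s T[a][s][s'] * b(s)."""
    predicted = T[action].T @ belief             # predict distribution over next states
    updated = O[action][:, observation] * predicted  # weight by observation likelihood
    total = updated.sum()
    if total == 0:                               # observation impossible under model
        return predicted / predicted.sum()       # fall back to the prediction step
    return updated / total                       # normalize to a probability vector

# Two hypothetical operator intentions: 0 = "fetch part", 1 = "assemble",
# with one robot action (a = 0). T[a][s][s'] and O[a][s'][o] are made up.
T = np.array([[[0.9, 0.1],
               [0.2, 0.8]]])
O = np.array([[[0.8, 0.2],
               [0.3, 0.7]]])

b = np.array([0.5, 0.5])                         # uniform prior over intentions
b = belief_update(b, action=0, observation=1, T=T, O=O)
# observation 1 shifts belief toward the "assemble" intention
```

The robot would then pick its next action from this posterior belief (e.g., the action maximizing expected value in the POMDP policy), re-running the update after every new observation of the operator.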
Abstract i
Acknowledgment ii
Contents iii
List of Tables v
List of Figures vi
1 Introduction 1
1.1 Motivation 1
1.2 Problem Statement and Scope 4
1.3 Contributions of the Research 5
1.4 Structure of the thesis 6
2 Literature Review 7
2.1 Robot Collaboration 7
2.1.1 History of Human-Robot Interaction and Collaboration 7
2.1.2 Human-Robot Interaction Efficiency and Collaboration 9
2.2 Nonverbal Behavior Recognition 13
2.2.1 Recognition by Human Feature 14
2.2.2 Recognition by Facial Point or Eye-gaze 20
2.3 Summary 22
3 Theoretical Background 24
3.1 Human Feature Extraction 24
3.2 Human Activity Classification 26
3.3 Partially Observable Markov Decision Process 28
3.4 Worker Fatigue 30
3.5 Worker Adaptation Ability 33
3.6 Summary 34
4 Proposed Method 36
4.1 System Overview 36
4.1.1 Extracting Human Feature 37
4.1.2 Human Activity Classification 40
4.1.3 POMDP 44
4.2 Behavior Analysis for Efficiency 48
4.2.1 Human's Fatigue 48
4.2.2 Human's Adaptation Ability 49
5 Experimental Results 51
5.1 Experimental Setup 51
5.2 Results of Human Activity Classification 52
5.3 Behavior Analysis for Efficiency 55
5.3.1 Human intention 55
5.3.2 Human worker's fatigue 55
5.3.3 Human worker's adaptation ability 58
5.4 Performance Comparison 58
6 Conclusions and Future Work 63
6.1 Conclusions 63
6.2 Future Work 64
REFERENCE 66

[1] G. J. M. Read, S. Shorrock, G. H. Walker, and P. M. Salmon, “State of science: Evolving perspectives on ‘human error’,” Ergonomics, vol. 64, pp. 1091–1114, 9 Sep. 2021, issn: 0014-0139. doi: 10.1080/00140139.2021.1953615.
[2] P. Zhu, L. Sun, Y. Song, L. Wang, X. Yuan, and Z. Dai, “Analysis on cognitive behaviors and prevention of human errors of coalmine hoist drivers,” International Journal of Safety and Security Engineering, vol. 10, pp. 663–670, 5 Nov. 2020, issn: 20419031. doi: 10.18280/ijsse.100511.
[3] P. Barosz, G. Golda, and A. Kampa, “Efficiency analysis of manufacturing line with industrial robots and human operators,” Applied Sciences, vol. 10, p. 2862, 8 Apr. 2020, issn: 2076-3417. doi: 10.3390/app10082862.
[4] X. Pan and Z. Wu, “Performance shaping factors in the human error probability modification of human reliability analysis,” International Journal of Occupational Safety and Ergonomics, vol. 26, pp. 538–550, 3 Jul. 2020, issn: 1080-3548. doi: 10.1080/10803548.2018.1498655.
[5] H. Lausberg, Understanding Body Movement, H. Lausberg, Ed. Peter Lang D, Jan. 2014, isbn: 9783653042085. doi: 10.3726/978-3-653-04208-5.
[6] R. J. Sternberg and A. Kostić, Social Intelligence and Nonverbal Communication, R. J. Sternberg and A. Kostić, Eds. Springer International Publishing, 2020, isbn: 978-3-030-34963-9. doi: 10.1007/978-3-030-34964-6.
[7] R. Singh, T. Miller, J. Newn, E. Velloso, F. Vetere, and L. Sonenberg, “Combining gaze and ai planning for online human intention recognition,” Artificial Intelligence, vol. 284, p. 103275, Jul. 2020, issn: 00043702. doi: 10.1016/j.artint.2020.103275.
[8] R. Ishii, C. Ahuja, Y. I. Nakano, and L.-P. Morency, “Impact of personality on nonverbal behavior generation,” ACM, Oct. 2020, pp. 1–8, isbn: 9781450375863. doi: 10.1145/3383652.3423908.
[9] C. L. Reed, E. J. Moody, K. Mgrublian, S. Assaad, A. Schey, and D. N. McIntosh, “Body matters in emotion: Restricted body movement and posture affect expression and recognition of status-related emotions,” Frontiers in Psychology, vol. 11, Aug. 2020, issn: 1664-1078. doi: 10.3389/fpsyg.2020.01961.
[10] S. Darafsh, S. S. Ghidary, and M. S. Zamani, “Real-time activity recognition and intention recognition using a vision-based embedded system,” Jul. 2021.
[11] J. C. Mateus, D. Claeys, V. Limère, J. Cottyn, and E.-H. Aghezzaf, “A structured methodology for the design of a human-robot collaborative assembly workplace,” The International Journal of Advanced Manufacturing Technology, vol. 102, pp. 2663–2681, 5-8 Jun. 2019, issn: 0268-3768. doi: 10.1007/s00170-019-03356-3.
[12] T. Smith, P. Benardos, and D. Branson, “Assessing worker performance using dynamic cost functions in human robot collaborative tasks,” Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, vol. 234, pp. 289–301, 1 Jan. 2020, issn: 0954-4062. doi: 10.1177/0954406219838568.
[13] R. R. Galin and R. V. Meshcheryakov, Human-robot interaction efficiency and human robot collaboration, 2020. doi: 10.1007/978-3-030-37841-7_5.
[14] E. Matheson, R. Minto, E. G. G. Zampieri, M. Faccio, and G. Rosati, “Human–robot collaboration in manufacturing applications: A review,” Robotics, vol. 8, p. 100, 4 Dec. 2019, issn: 2218-6581. doi: 10.3390/robotics8040100.
[15] S. Huang, L. Yang, W. Chen, T. Tao, and B. Zhang, “A specific perspective: Subway driver behaviour recognition using cnn and time-series diagram,” IET Intelligent Transport Systems, vol. 15, pp. 387–395, 3 Mar. 2021, issn: 1751-956X. doi: 10.1049/itr2.12032.
[16] J. Wang, T. Liu, and X. Wang, “Human hand gesture recognition with convolutional neural networks for k-12 double-teachers instruction mode classroom,” Infrared Physics & Technology, vol. 111, p. 103464, Dec. 2020, issn: 13504495. doi: 10.1016/j.infrared.2020.103464.
[17] F.-C. Lin, H.-H. Ngo, C.-R. Dow, K.-H. Lam, and H. L. Le, “Student behavior recognition system for the classroom environment based on skeleton pose estimation and person detection,” Sensors, vol. 21, p. 5314, 16 Aug. 2021, issn: 1424-8220. doi: 10.3390/s21165314.
[18] S. Li, J. Yi, Y. A. Farha, and J. Gall, “Pose refinement graph convolutional network for skeleton-based action recognition,” IEEE Robotics and Automation Letters, vol. 6, pp. 1028–1035, 2 Apr. 2021, issn: 2377-3766. doi: 10.1109/LRA.2021.3056361.
[19] N. Jaouedi, F. J. Perales, J. M. Buades, N. Boujnah, and M. S. Bouhlel, “Prediction of human activities based on a new structure of skeleton features and deep learning model,” Sensors, vol. 20, p. 4944, 17 Sep. 2020, issn: 1424-8220. doi: 10.3390/s20174944.
[20] H.-F. Sang, Z.-Z. Chen, and D.-K. He, “Human motion prediction based on attention mechanism,” Multimedia Tools and Applications, vol. 79, pp. 5529–5544, 9-10 Mar. 2020, issn: 1380-7501. doi: 10.1007/s11042-019-08269-7.
[21] P. Neto, M. Simão, N. Mendes, and M. Safeea, “Gesture-based human-robot interaction for human assistance in manufacturing,” The International Journal of Advanced Manufacturing Technology, vol. 101, pp. 119–135, 1-4 Mar. 2019, issn: 0268-3768. doi: 10.1007/s00170-018-2788-x.
[22] K. B. de Carvalho, D. K. D. Villa, M. Sarcinelli-Filho, and A. S. Brandão, “Gestures teleoperation of a heterogeneous multi-robot system,” The International Journal of Advanced Manufacturing Technology, vol. 118, pp. 1999–2015, 5-6 Jan. 2022, issn: 0268-3768. doi: 10.1007/s00170-021-07659-2.
[23] K.-J. Wang and D. Santoso, “A smart operator advice model by deep learning for motion recognition in human–robot coexisting assembly line,” The International Journal of Advanced Manufacturing Technology, vol. 119, pp. 865–884, 1-2 Mar. 2022, issn: 0268-3768. doi: 10.1007/s00170-021-08319-1.
[24] V. Voronin, M. Zhdanova, E. Semenishchev, A. Zelenskii, Y. Cen, and S. Agaian, “Action recognition for the robotics and manufacturing automation using 3-d binary micro-block difference,” The International Journal of Advanced Manufacturing Technology, vol. 117, pp. 2319–2330, 7-8 Dec. 2021, issn: 0268-3768. doi: 10.1007/s00170-021-07613-2.
[25] L. Roda-Sanchez, C. Garrido-Hidalgo, A. S. García, T. Olivares, and A. Fernández-Caballero, “Comparison of rgb-d and imu-based gesture recognition for human-robot interaction in remanufacturing,” The International Journal of Advanced Manufacturing Technology, Oct. 2021, issn: 0268-3768. doi: 10.1007/s00170-021-08125-9.
[26] A. AlZoubi, B. Al-Diri, T. Pike, T. Kleinhappel, and P. Dickinson, “Pair-activity analysis from video using qualitative trajectory calculus,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, pp. 1850–1863, 8 Aug. 2018, issn: 1051-8215. doi: 10.1109/TCSVT.2017.2701860.
[27] D. Hartmann and C. Schwenck, “Emotion processing in children with conduct problems and callous-unemotional traits: An investigation of speed, accuracy, and attention,” Child Psychiatry & Human Development, vol. 51, pp. 721–733, 5 Oct. 2020, issn: 0009-398X. doi: 10.1007/s10578-020-00976-9.
[28] C. B. S. Maior, M. J. das Chagas Moura, J. M. M. Santana, and I. D. Lins, “Real-time classification for autonomous drowsiness detection using eye aspect ratio,” Expert Systems with Applications, vol. 158, p. 113505, Nov. 2020, issn: 09574174. doi: 10.1016/j.eswa.2020.113505.
[29] J. Stapel, M. E. Hassnaoui, and R. Happee, “Measuring driver perception: Combining eye-tracking and automated road scene perception,” Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 64, pp. 714–731, 4 Jun. 2022, issn: 0018-7208. doi: 10.1177/0018720820959958.
[30] M. Schaeffer, J. Nitzke, A. Tardel, K. Oster, S. Gutermuth, and S. Hansen-Schirra, “Eye-tracking revision processes of translation students and professional translators,” Perspectives, vol. 27, pp. 589–603, 4 Jul. 2019, issn: 0907-676X. doi: 10.1080/0907676X.2019.1597138.
[31] J. M. Martín, V. L. del Campo, and L. J. M. Fernández-Argüelles, “Design and development of a low-cost mask-type eye tracker to collect quality fixation measurements in the sport domain,” Proceedings of the Institution of Mechanical Engineers, Part P: Journal of Sports Engineering and Technology, vol. 233, pp. 116–125, 1 Mar. 2019, issn: 1754-3371. doi: 10.1177/1754337118808177.
[32] A. Saeed, A. Al-Hamadi, and H. Neumann, “Facial point localization via neural networks in a cascade regression framework,” Multimedia Tools and Applications, vol. 77, pp. 2261–2283, 2 Jan. 2018, issn: 1380-7501. doi: 10.1007/s11042-016-4261-x.
[33] Y. Ma, W. Zhu, and Y. Zhou, “Automatic grasping control of mobile robot based on monocular vision,” The International Journal of Advanced Manufacturing Technology, vol. 121, pp. 1785–1798, 3-4 Jul. 2022, issn: 0268-3768. doi: 10.1007/s00170-022-09438-z.
[34] S. Garg, A. Saxena, and R. Gupta, “Yoga pose classification: A cnn and mediapipe inspired deep learning approach for real-world application,” Journal of Ambient Intelligence and Humanized Computing, Jun. 2022, issn: 1868-5137. doi: 10.1007/s12652-022-03910-0.
[35] R. Mojarad, F. Attal, A. Chibani, and Y. Amirat, “Automatic classification error detection and correction for robust human activity recognition,” IEEE Robotics and Automation Letters, vol. 5, pp. 2208–2215, 2 Apr. 2020, issn: 2377-3766. doi: 10.1109/LRA.2020.2970667.
[36] Y. Yu, X. Si, C. Hu, and J. Zhang, “A review of recurrent neural networks: Lstm cells and network architectures,” Neural Computation, vol. 31, pp. 1235–1270, 7 Jul. 2019, issn: 0899-7667. doi: 10.1162/neco_a_01199.
[37] N. Renotte, Sign language detection using action recognition with python — lstm deep learning model, https://www.youtube.com/watch?v=doDUihpj6ro, [Online; accessed 27-June-2022], 2021.
[38] H. Kurniawati, “Partially observable markov decision processes and robotics,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 5, pp. 253–277, 1 May 2022, issn: 2573-5144. doi: 10.1146/annurev-control-042920-092451.
[39] M. Cramer, K. Kellens, and E. Demeester, “Probabilistic decision model for adaptive task planning in human-robot collaborative assembly based on designer and operator intents,” IEEE Robotics and Automation Letters, vol. 6, pp. 7325–7332, 4 Oct. 2021, issn: 2377-3766. doi: 10.1109/LRA.2021.3095513.
[40] E. Escobar-Linero, M. Domínguez-Morales, and J. L. Sevillano, “Worker’s physical fatigue classification using neural networks,” Expert Systems with Applications, vol. 198, p. 116784, Jul. 2022, issn: 09574174. doi: 10.1016/j.eswa.2022.116784.
[41] S. Digiesi, A. A. Kock, G. Mummolo, and J. E. Rooda, “The effect of dynamic worker behavior on flow line performance,” International Journal of Production Economics, vol. 120, pp. 368–377, 2 Aug. 2009, issn: 09255273. doi: 10.1016/j.ijpe.2008.12.012.
[42] Proplanner, MTM-UAS. [Online]. Available: https://www.proplanner.com/solutions/assembly-process-planning/time-studies/mtm.
[43] M. J. Anzanello and F. S. Fogliatto, “Learning curve models and applications: Literature review and research directions,” International Journal of Industrial Ergonomics, vol. 41, pp. 573–583, 5 Sep. 2011, issn: 01698141. doi: 10.1016/j.ergon.2011.05.001.
[44] Amazon, Hp w300 1080p 30 fps fhd webcam with built-in dual. [Online]. Available: https://www.amazon.in/HP-Digital-Wide-Angle-Calling-Microsoft/dp/B08FTH38QX.
[45] P. Radzki, Detection of human body landmarks - mediapipe and openpose comparison, 2022. [Online]. Available: https://www.hearai.pl/post/14-openpose/.
