National Digital Library of Theses and Dissertations in Taiwan

Graduate Student: 陳靖霖 (Chen, Jing-Lin)
Thesis Title: 利用5G網路實現長距離沉浸式虛擬實境機器人遠端操作任務
Thesis Title (English): Enabling Long-distance Immersive Virtual Reality Interface for Remote Robot Manipulation Using 5G Network
Advisor: 王學誠 (Wang, Hsueh-Cheng)
Committee Members: 楊谷洋 (Young, Kuu-Young), 蕭得聖 (Hsiao, Te-Sheng), 柯立偉 (Ko, Li-Wei), 余立輝 (Yu, Lap-Fai)
Defense Date: 2023-02-15
Degree: Master's
University: National Yang Ming Chiao Tung University (國立陽明交通大學)
Department: Institute of Electrical and Control Engineering (電控工程研究所)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document Type: Academic thesis
Publication Year: 2023
Graduation Academic Year: 111 (2022–2023)
Language: English
Pages: 55
Keywords (Chinese): 人機互動, 5G網路, 虛擬實境, 遠端遙控
Keywords (English): human-computer interaction, 5G network, virtual reality, teleoperation
Statistics:
  • Cited: 0
  • Views: 49
  • Downloads: 0
  • Bookmarked: 0
Chinese Abstract:
With the development of technology, robotics has drawn growing attention from industry: more robots are being deployed in factories, and the degree of industrial automation continues to rise. The COVID-19 pandemic that broke out at the end of 2019 hit manufacturing hard and forced a shift toward remote work and remote operation, yet the machines on the factory floor still require human monitoring and maintenance. To enable remote operation and reduce the risk of cluster infection, teleoperating a robot arm through a visualized 3D model presented in virtual reality is a promising approach.

This thesis proposes a virtual reality teleoperation system, built on a 5G communication interface, that fits realistic application scenarios. The high bandwidth and low latency of 5G accommodate data from more sensors, and the communication delay perceived by operators approaches that of wired Ethernet. Rendering the point cloud from multiple viewpoints in virtual reality also mitigates the occlusion problems of earlier video-streaming approaches, letting operators teleoperate the robot arm as fluidly as grasping objects with their own hands in everyday life. Finally, user studies verify that the system allows novices to perform highly dexterous manipulation tasks and to master teleoperation skills after only a short period of practice.
English Abstract:
Driven by technological development, robotics is increasingly valued by the manufacturing industry. COVID-19 has disrupted manufacturing, forcing a shift toward remote work and teleoperation. To control machines remotely and reduce the risk of infection, a promising approach is a virtual reality teleoperation system that presents a visualized 3D model. This study proposes a 5G-based virtual reality teleoperation system that accommodates more sensor data at low latency. A point cloud presented from multiple perspectives in virtual reality mitigates occlusion, allowing smoother remote control of the robot arm. Experiments show that beginners can master teleoperation skills after a short period of practice.

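The abstract describes streaming multi-view point clouds over 5G, and the table of contents (Sections 3.4.1 and 3.4.3) covers bandwidth and latency evaluation for that stream. As a rough illustration only, not code from the thesis, the sketch below shows how such a measurement could look in the ROS 1 / RealSense setup the record lists; the topic name /camera/depth/color/points and all other identifiers are assumptions.

```python
#!/usr/bin/env python
# Hypothetical sketch (not from the thesis): probe per-frame transport
# delay and payload size of a RealSense point cloud stream in ROS 1.
# The topic name is an assumption; robot and operator hosts must have
# synchronized clocks (e.g. via NTP) for the delay estimate to be valid.
import rospy
from sensor_msgs.msg import PointCloud2

def on_cloud(msg):
    # Delay from the sensor timestamp to receipt on the operator side.
    delay_ms = (rospy.Time.now() - msg.header.stamp).to_sec() * 1000.0
    # Serialized size of this frame, in megabits.
    mbit = len(msg.data) * 8 / 1e6
    rospy.loginfo("transport delay %.1f ms, frame %.2f Mbit", delay_ms, mbit)

if __name__ == "__main__":
    rospy.init_node("cloud_latency_probe")
    rospy.Subscriber("/camera/depth/color/points", PointCloud2, on_cloud,
                     queue_size=1, buff_size=2 ** 24)
    rospy.spin()
```

For scale, assuming a raw 640×480 cloud with 16 bytes per point at 30 Hz, the stream is about 640 × 480 × 16 × 8 × 30 ≈ 1.2 Gbit/s before any downsampling or compression, which is presumably why the thesis devotes sections to bandwidth requirements (3.4.1) and point cloud simplification (5.2.2).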
Table of Contents:
Abstract (Chinese) . i
Abstract . ii
Acknowledgement . iii
Table of Contents . iv
List of Figures . vii
List of Tables . viii
1 Introduction . 1
1.1 Background and Motivation . 1
1.2 Challenges and Contributions . 3
1.2.1 Problem Definitions . 3
1.2.2 Challenges . 3
1.2.3 Contributions . 4
1.3 Architecture . 5
2 Related Work . 7
2.1 VR Teleoperation . 7
2.2 Communication Interface and Delay . 9
2.3 Telemedicine . 10
3 System Architecture and Methods . 13
3.1 Teleoperation System Architecture . 13
3.1.1 Robot Platform System (Robot Side) . 13
3.1.2 Operator Platform System (Operator Side) . 14
3.2 Hardware and Software Architecture . 15
3.2.1 LoCoBot Mobile Manipulator . 15
3.2.2 Intel Realsense D435 Depth Camera . 16
3.2.3 Meta Quest 2 VR head-mounted Device . 17
3.2.4 Workstation . 18
3.2.5 5G Base Station . 20
3.2.6 VPN Server . 21
3.2.7 VR Development Platform - Unity . 22
3.2.8 Robot Operating System (ROS) . 23
3.3 VR Interface Design . 24
3.3.1 Display interface . 24
3.3.2 Control interface . 25
3.4 Methodology . 26
3.4.1 Network Bandwidth Requirements and Communication Latency Evaluation . 27
3.4.2 Video Streaming . 29
3.4.3 Multi-view Point Cloud Streaming . 30
4 Experiment design and results discussion . 33
4.1 Experiment 1: Different visualization methods for teleoperation . 33
4.1.1 Introduction of evaluation metrics . 33
4.1.2 Experiment Design and Evaluation . 34
4.1.3 Experiment results and discussion . 37
4.2 Experiment 2: Practice makes perfect . 39
4.2.1 Introduction of evaluation metrics . 40
4.2.2 Experiment Design and Evaluation . 40
4.2.3 Experiment results and discussion . 42
5 Conclusions and Future Work . 44
5.1 Conclusions . 44
5.1.1 5G Network . 44
5.1.2 Multi-view Point Cloud Streaming . 45
5.2 Future Work . 45
5.2.1 Performance burden of point cloud projection . 45
5.2.2 Simplification of point cloud . 46
References . 47
Electronic Full Text: publicly available online from 2026-03-30