[1] R. Sultanov, S. Sulaiman, H. Li, R. Meshcheryakov, and E. Magid, “A Review on Collaborative Robots in Industrial and Service Sectors,” in 2022 International Siberian Conference on Control and Communications (SIBCON), IEEE, Nov. 2022, pp. 1–7, doi: 10.1109/SIBCON56144.2022.10003014.
[2] G. Kokotinis, G. Michalos, Z. Arkouli, and S. Makris, “On the quantification of human-robot collaboration quality,” Int. J. Comput. Integr. Manuf., vol. 36, no. 10, pp. 1431–1448, Oct. 2023, doi: 10.1080/0951192X.2023.2189304.
[3] S. Ni, L. Zhao, A. Li, D. Wu, and L. Zhou, “Cross-View Human Intention Recognition for Human-Robot Collaboration,” IEEE Wireless Commun., vol. 30, no. 3, pp. 189–195, Jun. 2023, doi: 10.1109/MWC.018.2200514.
[4] M. T. Calcagni, C. Scoccia, G. Battista, G. Palmieri, and M. Palpacelli, “Collaborative Robot Sensorization with 3D Depth Measurement System for Collision Avoidance,” in 2022 18th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications (MESA), IEEE, Nov. 2022, pp. 1–6, doi: 10.1109/MESA55290.2022.10004475.
[5] X. Li, Z. Chen, Z. Zhong, and J. Ma, “Human-machine Collaboration Method Based on Key Nodes of Human Posture,” in 2022 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC), IEEE, Apr. 2022, pp. 140–146, doi: 10.1109/IPEC54454.2022.9777570.
[6] J. Marić, L. Petrović, and I. Marković, “Human Intention Recognition in Collaborative Environments using RGB-D Camera,” in 2023 46th MIPRO ICT and Electronics Convention (MIPRO), IEEE, May 2023, pp. 350–355, doi: 10.23919/MIPRO57284.2023.10159985.
[7] A. Franceschetti, E. Tosello, N. Castaman, and S. Ghidoni, “Robotic Arm Control and Task Training Through Deep Reinforcement Learning,” 2022, pp. 532–550, doi: 10.1007/978-3-030-95892-3_41.
[8] K. M. Oikonomou, I. Kansizoglou, and A. Gasteratos, “A Hybrid Reinforcement Learning Approach With a Spiking Actor Network for Efficient Robotic Arm Target Reaching,” IEEE Robot. Autom. Lett., vol. 8, no. 5, pp. 3007–3014, May 2023, doi: 10.1109/LRA.2023.3264836.
[9] F. Munguia-Galeano, S. Veeramani, J. D. Hernández, Q. Wen, and Z. Ji, “Affordance-Based Human–Robot Interaction With Reinforcement Learning,” IEEE Access, vol. 11, pp. 31282–31292, 2023, doi: 10.1109/ACCESS.2023.3262450.
[10] E. Salvato, G. Fenu, E. Medvet, and F. A. Pellegrino, “Crossing the Reality Gap: A Survey on Sim-to-Real Transferability of Robot Controllers in Reinforcement Learning,” IEEE Access, vol. 9, pp. 153171–153187, 2021, doi: 10.1109/ACCESS.2021.3126658.
[11] P. Xie et al., “Part-Guided 3D RL for Sim2Real Articulated Object Manipulation,” IEEE Robot. Autom. Lett., vol. 8, no. 11, pp. 7178–7185, Nov. 2023, doi: 10.1109/LRA.2023.3313063.
[12] T. Zhang, K. Zhang, J. Lin, W.-Y. G. Louie, and H. Huang, “Sim2real Learning of Obstacle Avoidance for Robotic Manipulators in Uncertain Environments,” IEEE Robot. Autom. Lett., vol. 7, no. 1, pp. 65–72, Jan. 2022, doi: 10.1109/LRA.2021.3116700.
[13] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” arXiv preprint, May 2015.
[14] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation,” arXiv preprint, Feb. 2018.
[15] “Collaborative robotic automation | Cobots from Universal Robots.” Accessed: Feb. 23, 2024. [Online]. Available: https://www.universal-robots.com/
[16] “Bullet Real-Time Physics Simulation | Home of Bullet and PyBullet: physics simulation for games, visual effects, robotics and reinforcement learning.” Accessed: Feb. 24, 2024. [Online]. Available: https://pybullet.org/wordpress/
[17] T. Haarnoja et al., “Soft Actor-Critic Algorithms and Applications,” arXiv preprint, Dec. 2018.