[1]International Federation of Robotics, "IFR presents World Robotics 2021 reports," 2021.
[2]Image source: https://www.assemblymag.com/articles/94549-random-bin-picking-comes-of-age, June 2022.
[3]Image source: MWES Random Bin Picking: https://www.mwes.com/random-bin-picking/, June 2022.
[4]Image source: SOLOMON Robot Random Pick-and-Place System: https://www.solomon.com.tw/produc/智慧型隨機取放系統/, June 2022.
[5]Image source: Photoneo: https://www.photoneo.com/bin-picking/
[6]Image source: FANUC: https://www.fanucamerica.com/solutions/applications/picking-and-packing-robots, June 2022.
[7]C. Wu, S. Jiang and K. Song, "CAD-based pose estimation for random bin-picking of multiple objects using a RGB-D camera," 2015 15th International Conference on Control, Automation and Systems (ICCAS), 2015, pp. 1645-1649.
[8]H.Y. Kuo, H.R. Su, S.H. Lai, and C.C. Wu, "3D Object Detection and Pose Estimation from Depth Image for Robotic Bin Picking," Proc. of IEEE Int'l Conf. on Automation Science and Engineering, pp. 1264-1269, 2014.
[9]Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proc. of the IEEE, pp. 2278-2324, 1998.
[10]Image source: https://papers.readthedocs.io/en/latest/imagedetection/rcnn/, June 2022.
[11]L. Pinto and A. Gupta, "Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours," 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 3406-3413.
[12]H. Wang, H. Situ and C. Zhuang, "6D Pose Estimation for Bin-Picking based on Improved Mask R-CNN and DenseFusion," 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), 2021, pp. 1-7.
[13]KUKA Roboter GmbH, “Operating and Programming Instructions for System Integrators”, 2016, pp. 22-23.
[14]Image source: http://www.toyorobot.com/Product, June 2022.
[15]Image source: https://kheresy.wordpress.com/2014/12/29/kinect-for-windows-sdk-v2-basic/, June 2022.
[16]P. Fankhauser, M. Bloesch, D. Rodriguez, R. Kaestner, M. Hutter and R. Siegwart, "Kinect v2 for mobile robot navigation: Evaluation and modeling," 2015 International Conference on Advanced Robotics (ICAR), 2015, pp. 388-394.
[17]KUKA Roboter GmbH, “WorkVisual”, 2016, pp. 11-12.
[18]Image source: https://www.vision-doctor.com/en/optical-errors/distortion.html, June 2022.
[19]J. Jiao, L. Yuan, W. Tang, Z. Deng and Q. Wu, "A Post-Rectification Approach of Depth Images of Kinect v2 for 3D Reconstruction of Indoor Scenes," ISPRS International Journal of Geo-Information, vol. 6, no. 11, p. 349, 2017.
[20]Z. Zhang, "Flexible camera calibration by viewing a plane from unknown orientations," Proceedings of the Seventh IEEE International Conference on Computer Vision, 1999, pp. 666-673.
[21]R. Y. Tsai and R. K. Lenz, "A new technique for fully autonomous and efficient 3D robotics hand/eye calibration," in IEEE Transactions on Robotics and Automation, vol. 5, no. 3, pp. 345-358, 1989.
[22]F. C. Park and B. J. Martin, "Robot sensor calibration: solving AX=XB on the Euclidean group," in IEEE Transactions on Robotics and Automation, vol. 10, no. 5, pp. 717-721, Oct. 1994.
[23]R. Horaud and F. Dornaika, "Hand-eye Calibration," The International Journal of Robotics Research, vol. 14, no. 3, pp. 195-210, 1995.
[24]N. Andreff, R. Horaud and B. Espiau, "On-line hand-eye calibration," Second International Conference on 3-D Digital Imaging and Modeling (Cat. No. PR00062), 1999, pp. 430-436.
[25]K. H. Strobl and G. Hirzinger, "Optimal Hand-Eye Calibration," 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006, pp. 4647-4653.
[26]R. S. Hartenberg and J. Denavit, "A kinematic notation for lower pair mechanisms based on matrices," Journal of Applied Mechanics, vol. 77, pp. 215-221, June 1955.
[27]Mark W. Spong, Seth Hutchinson and M. Vidyasagar, "Robot Modeling and Control," First Edition, 2006.
[28]S. Doliwa, "Inverse kinematics of the KUKA LBR iiwa R800 (7 DOF)," Zenodo, 2020. https://doi.org/10.5281/zenodo.4063575
[29]Image source: http://www.yi-fon.com/pd47560086.html, June 2022.
[30]M. Kuo, H.-T. Chan and C.-H. Hsia, "Study on Mask R-CNN with Data Augmentation for Retail Product Detection," 2021 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), 2021, pp. 1-2.
[31]Image source: https://www.wpgdadatong.com/tw/blog/detail?BID=B1319, July 2022.
[32]R. Girshick, J. Donahue, T. Darrell and J. Malik, "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation," 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580-587.
[33]王仁蔚, "Applying Instance Segmentation and Representation Learning to Robotic Arm Grasping of Stacked Objects," Master's thesis, Graduate Institute of Mechanical Engineering, National Taiwan University, Taipei, 2019.
[34]K. He, G. Gkioxari, P. Dollár and R. Girshick, "Mask R-CNN," 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2980-2988.
[35]J. Long, E. Shelhamer and T. Darrell, "Fully Convolutional Networks for Semantic Segmentation," 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. https://doi.org/10.48550/arXiv.1411.4038
[36]COCO: Common Objects in Context dataset, https://cocodataset.org/#home
[37]K. He, X. Zhang, S. Ren and J. Sun, "Deep Residual Learning for Image Recognition," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778, doi: 10.1109/CVPR.2016.90.
[38]C. Szegedy et al., "Going deeper with convolutions," 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1-9.
[39]Image source: https://chih-sheng-huang821.medium.com/深度學習系列-什麼是ap-map-aaf089920848, July 2022.
[40]D. Morrison, P. Corke and J. Leitner, "Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach," Proc. of Robotics: Science and Systems (RSS), 2018.
[41]Image source: https://www.cnblogs.com/xwgli/p/7045562.html, July 2022.