臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 史詩妤 (Shih-Yu Shih)
Title: 應用於虛擬體驗中環境即時顯示之有效地圖記憶體管理
Title (English): Efficient SLAM Map Memory Management for Real-Time Displaying in Virtual Experience
Advisor: 黃正民 (Cheng-Ming Huang)
Committee: 黃正民 (Cheng-Ming Huang), 練光祐 (Kuang-Yow Lian), 連豊力 (Feng-Li Lian), 簡忠漢 (Jong-Hann Jean), 陸敬互 (Ching-Hu Lu)
Oral Defense Date: 2018-07-30
Degree: Master's
Institution: 國立臺北科技大學 (National Taipei University of Technology)
Department: 電機工程系 (Department of Electrical Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2018
Graduation Academic Year: 106 (2017–2018)
Language: English
Pages: 102
Keywords (Chinese): RTAB-Map、虛擬實境、視覺同步定位與環境地圖建置
Keywords (English): virtual experience, virtual reality, RTAB-Map, visual simultaneous localization and mapping (v-SLAM)
Usage statistics:
  • Cited: 0
  • Views: 162
  • Downloads: 6
  • Bookmarked: 0
With the rapid progress of modern technology, virtual reality has become widely used for experience and entertainment across many fields; at the same time, simultaneous localization and mapping (SLAM) techniques have matured, and related applications frequently appear in research in various domains. This thesis proposes a virtual experience system that combines SLAM with virtual reality, allowing a user to view, experience, and interact with a previously built three-dimensional environment map through a VR head-mounted display. Compared with an ordinary two-dimensional map, a three-dimensional environment presented through a VR headset lets the user see the scene more realistically and provides an immersive feeling. Unlike the VR headsets currently on the market, our system uses a simple smartphone as the head-mounted display to lower the equipment cost, so that users can conveniently use it at any time. For the SLAM part, we use the RTAB-Map algorithm, and the environment map and trajectory data obtained on the Robot Operating System (ROS) side are displayed on the user's headset through the Unity engine. In addition, we propose a Bayesian network to estimate the point cloud data of the virtual scene regions the user is about to view, and use memory management to extract only a portion of the three-dimensional environment map built by RTAB-Map, changing what is displayed as the user moves through the virtual scene, thereby alleviating the problem of insufficient memory on the smartphone used for display. Finally, the proposed method has been validated in several test scenes and effectively reduces the amount of phone memory occupied by the point clouds of a large-scale map, achieving real-time virtual experience viewing.
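The point cloud reduction described above is carried out in the thesis with data clustering and OctoMap (Sections 3.4 and 3.5 of the table of contents below). As a rough, self-contained illustration of the underlying idea of trading point density for memory, a plain voxel-grid average might look like the following sketch; the function name and parameters are hypothetical, and this is a stand-in for, not a reproduction of, the thesis's method.

```python
# Hypothetical sketch: voxel-grid downsampling of a point cloud.
# The thesis reduces map data with data clustering and OctoMap;
# this standalone NumPy stand-in only illustrates the general idea
# of trading point density for memory footprint.
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Collapse all points falling in the same voxel to their centroid.

    points: (N, 3) array of XYZ coordinates; returns (M, 3) with M <= N.
    """
    # Integer voxel index of each point.
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Sort points so that points sharing a voxel become contiguous runs.
    order = np.lexsort((keys[:, 2], keys[:, 1], keys[:, 0]))
    keys, points = keys[order], points[order]
    # Find the boundaries where the voxel key changes.
    change = np.any(np.diff(keys, axis=0) != 0, axis=1)
    starts = np.concatenate(([0], np.nonzero(change)[0] + 1, [len(points)]))
    # Average each run of same-voxel points into one representative point.
    return np.array([points[s:e].mean(axis=0)
                     for s, e in zip(starts[:-1], starts[1:])])

if __name__ == "__main__":
    cloud = np.random.rand(100_000, 3) * 10.0   # toy points in a 10 m cube
    small = voxel_downsample(cloud, voxel_size=0.2)
    print(len(cloud), "->", len(small), "points")
```

Enlarging voxel_size shrinks the memory footprint at the cost of visual detail, the same trade-off that OctoMap's resolution parameter controls.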
In this thesis, an application of simultaneous localization and mapping (SLAM) combined with virtual reality (VR) is proposed to let the user experience and interact with a real environment through a headset built around a mobile phone, so that the user can easily visit the environment with inexpensive equipment at any time. Compared with exploring an environment on a general 2D map, the user can view the environment more realistically, in an immersive manner. Here, a dense environment map with a huge amount of point cloud data, generated by RTAB-Map SLAM with RGB-D cameras, is utilized to provide the user with a more complete observation than a sparse map. Since the memory of a mobile phone is limited and much smaller than that of a personal computer, the whole map cannot be transmitted to the phone for VR display at once. Based on the memory management architecture used for loop closure detection in RTAB-Map SLAM, an efficient memory management scheme for extracting point cloud data is designed using Bayesian filtering and data clustering to achieve real-time VR display performance. The proposed approaches have been validated and analyzed in experiments, showing that our method effectively reduces memory usage while continuously and smoothly displaying the virtual scene in real time.
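The memory management "utilizing the Bayesian filtering and data clustering" could be pictured, under loose assumptions, as a discrete Bayes filter over map regions: the filter predicts which regions the user is likely to look at next, and only the most probable regions' point clouds are retrieved into the phone's limited memory, loosely analogous to the working/long-term memory split RTAB-Map uses for loop closure detection. The class, the transition and observation models, and the memory budget below are all illustrative assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch of region retrieval with a discrete Bayes filter.
# belief[i] = probability that the user will view map region i next;
# only the `budget` most probable regions stay resident in phone memory.
import numpy as np

class RegionRetrievalFilter:
    def __init__(self, n_regions: int, adjacency: np.ndarray, budget: int):
        self.belief = np.full(n_regions, 1.0 / n_regions)  # uniform prior
        # Row-normalized adjacency acts as the region transition model.
        self.transition = adjacency / adjacency.sum(axis=1, keepdims=True)
        self.budget = budget  # max regions resident in phone memory

    def step(self, likelihood: np.ndarray) -> np.ndarray:
        """One predict/update cycle; returns region ids to keep loaded."""
        self.belief = self.transition.T @ self.belief      # predict
        self.belief *= likelihood                          # measurement update
        self.belief /= self.belief.sum()                   # normalize
        # Retrieve the `budget` most probable regions into working memory.
        return np.argsort(self.belief)[::-1][:self.budget]

if __name__ == "__main__":
    # Toy map: 5 regions in a chain; the user's gaze favors region 2.
    adj = np.eye(5) + np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
    filt = RegionRetrievalFilter(5, adj, budget=2)
    gaze_likelihood = np.array([0.05, 0.2, 0.5, 0.2, 0.05])
    print(filt.step(gaze_likelihood))   # e.g. [2 3] or [2 1]
```

In a real pipeline the likelihood would come from the user's head pose and gaze direction in the Unity scene, and regions evicted from the working set would be released back to the map store on the ROS side.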
Abstract (in Chinese) i
ABSTRACT iii
Acknowledgements v
Contents vi
List of Tables viii
List of Figures ix
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation 3
1.3 Related Work 4
1.4 Purpose and Contribution 6
1.5 Organization of the Thesis 8
Chapter 2 Architecture of Virtual Experience System 9
2.1 System Architecture 9
2.2 SLAM with Memory Management Subsystem 13
2.3 VR Display Subsystem 14
2.3.1 Virtual Reality Tool — Unity3D 14
2.3.2 Cross-Platform Exchange of Data 16
Chapter 3 Effective Memory Management for Displaying in Virtual Experience 18
3.1 Calibration and Use of Depth Cameras 18
3.2 RTAB-Map SLAM Algorithm 20
3.2.1 Feature Point Extraction 22
3.2.2 Real-Time Appearance-Based Mapping 24
3.3 Bayesian Filtering for Estimating the Retrieved Point Cloud Data 30
3.4 Resampling with Data Clustering 38
3.5 Reduce the Map Data with OctoMap 40
Chapter 4 Experimental Results 43
4.1 Experimental Equipment 43
4.2 Experimental Results 46
4.2.1 Large Office 46
4.2.2 Practice Laboratory 64
4.2.3 Multi Floor 80
Chapter 5 Conclusions and Future Work 98
References 100