[1] Atap.google.com. (2017). Ara. [online] Available at: https://atap.google.com/ara/ [Accessed 18 Jul. 2017].
[2] Motorola. (2017). Explore the world of Moto Mods™. [online] Available at: https://www.motorola.com/us/moto-mods [Accessed 18 Jul. 2017].
[3] Higuchi, K., Shimada, T., & Rekimoto, J. (2011). Flying sports assistant. In Proceedings of the 2nd Augmented Human International Conference (AH '11). ACM. doi:10.1145/1959826.1959833
[4] Joubert, N., Roberts, M., Truong, A., Berthouzoz, F., & Hanrahan, P. (2015). An interactive tool for designing quadrotor camera shots. ACM Transactions on Graphics, 34(6), 238.
[5] Chen, C. F., Liu, K. P., & Yu, N. H. (2015). Exploring interaction modalities for a selfie drone. In SIGGRAPH Asia 2015 Posters (p. 25). ACM.
[6] Mur-Artal, R., Montiel, J. M. M., & Tardós, J. D. (2015). ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Transactions on Robotics, 31(5), 1147–1163.
[7] Rublee, E., Rabaud, V., Konolige, K., & Bradski, G. (2011). ORB: an efficient alternative to SIFT or SURF. In 2011 IEEE International Conference on Computer Vision (ICCV) (pp. 2564–2571). IEEE.
[8] Liang, Q. (2016). Visual-Inertial Ego-Positioning for Flying Cameras. M.S. thesis, National Taiwan University.
[9] Davison, A. J., Reid, I. D., Molton, N. D., & Stasse, O. (2007). MonoSLAM: real-time single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6), 1052–1067.
[10] Civera, J., Davison, A. J., & Montiel, J. M. M. (2008). Inverse depth parametrization for monocular SLAM. IEEE Transactions on Robotics, 24(5), 932–945.
[11] Eade, E., & Drummond, T. (2006). Scalable monocular SLAM. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 1 (pp. 469–476). IEEE.
[12] Engel, J., Schöps, T., & Cremers, D. (2014). LSD-SLAM: large-scale direct monocular SLAM. In European Conference on Computer Vision (pp. 834–849). Springer.
[13] Newcombe, R. A., Lovegrove, S. J., & Davison, A. J. (2011). DTAM: dense tracking and mapping in real-time. In 2011 International Conference on Computer Vision (pp. 2320–2327). IEEE.
[14] Stühmer, J., Gumhold, S., & Cremers, D. (2010). Real-time dense geometry from a handheld camera. In Joint Pattern Recognition Symposium (pp. 11–20). Springer.
[15] Pizzoli, M., Forster, C., & Scaramuzza, D. (2014). REMODE: probabilistic, monocular dense reconstruction in real time. In 2014 IEEE International Conference on Robotics and Automation (ICRA) (pp. 2609–2616). IEEE.
[16] Engel, J., Sturm, J., & Cremers, D. (2013). Semi-dense visual odometry for a monocular camera. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1449–1456).
[17] Schöps, T., Engel, J., & Cremers, D. (2014). Semi-dense visual odometry for AR on a smartphone. In 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 145–150). IEEE.
[18] Klein, G., & Murray, D. (2007). Parallel tracking and mapping for small AR workspaces. In 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 225–234). IEEE.
[19] Chen, K.-W., Wang, C.-H., Wei, X., Liang, Q., Chen, C.-S., Yang, M.-H., & Hung, Y.-P. (2016). Vision-based positioning for internet-of-vehicles. IEEE Transactions on Intelligent Transportation Systems.
[20] Vicon. Vicon Bonita. [online] Available at: http://www.vicon.com/products/camerasystems/bonita.