[1]T. Pire, T. Fischer, J. Civera, P. De Cristóforis and J. J. Berlles, “Stereo parallel tracking and mapping for robot localization,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, 2015, pp. 1373-1378.
[2]K. Qiu, F. Zhang and M. Liu, “Visible Light Communication-based indoor localization using Gaussian Process,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, 2015, pp. 3125-3130.
[3]R. C. Luo, V. W. S. Ee and C. K. Hsieh, “3D point cloud based indoor mobile robot in 6-DoF pose localization using Fast Scene Recognition and Alignment approach,” IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Baden-Baden, Germany, 2016, pp. 470-475.
[4]H. Kikkeri, G. Parent, M. Jalobeanu and S. Birchfield, “An inexpensive method for evaluating the localization performance of a mobile robot navigation system,” IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, 2014, pp. 4100-4107.
[5]L. D. Riek, “The Social Co-Robotics Problem Space: Six Key Challenges,” Robotics Challenges and Vision (RCV2013), 2014.
[6]C. R. Raymundo, C. G. Johnson and P. A. Vargas, “An architecture for emotional and context-aware associative learning for robot companions,” IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Kobe, 2015, pp. 31-36.
[7]P. Lison, C. Ehrler and G. J. M. Kruijff, “Belief modelling for situation awareness in human-robot interaction,” International Symposium in Robot and Human Interactive Communication, Viareggio, 2010, pp. 138-143.
[8]S. H. Tseng, J. H. Hua, S. P. Ma and L. C. Fu, “Human awareness based robot performance learning in a social environment,” IEEE International Conference on Robotics and Automation, Karlsruhe, 2013, pp. 4291-4296.
[9]Situational Context, https://www.alleydog.com/glossary/psychology-glossary.php [Online; accessed 1-March-2017]
[10]A. Nigam and L. D. Riek, “Social context perception for mobile robots,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, 2015, pp. 3621-3627.
[11]A. H. Qureshi, Y. Nakamura, Y. Yoshikawa and H. Ishiguro, “Robot gains social intelligence through multimodal deep reinforcement learning,” IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, 2016, pp. 745-751.
[12]A. Aly and A. Tapus, “A model for synthesizing a combined verbal and nonverbal behavior based on personality traits in human-robot interaction,” ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, 2013, pp. 325-332.
[13]J. Mumm and B. Mutlu, “Human-robot proxemics: Physical and psychological distancing in human-robot interaction,” ACM/IEEE International Conference on Human-Robot Interaction (HRI), Lausanne, 2011, pp. 331-338.
[14]T. Kitade, S. Satake, T. Kanda and M. Imai, “Understanding suitable locations for waiting,” ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, 2013, pp. 57-64.
[15]PCL, http://pointclouds.org/ [Online; accessed 15-July-2017]
[16]OpenCV, http://opencv.org/ [Online; accessed 15-July-2017]
[17]Scikit-Learn, http://scikit-learn.org/stable/ [Online; accessed 15-July-2017]
[18]Keras, https://keras.io/ [Online; accessed 15-July-2017]
[19]API.AI, https://api.ai/ [Online; accessed 15-July-2017]
[20]ROS, http://www.ros.org/ [Online; accessed 15-July-2017]
[21]N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 2005, pp. 886-893, vol. 1.
[22]Optical Flow, http://docs.opencv.org/trunk/d7/d8b/tutorial_py_lucas_kanade.html [Online; accessed 15-July-2017]
[23]Autoencoders, https://blog.keras.io/building-autoencoders-in-keras.html [Online; accessed 15-July-2017]
[24]Category classification by CNNs, https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/ [Online; accessed 15-July-2017]
[25]Activation function, https://en.wikibooks.org/wiki/Artificial_Neural_Networks/-Print_Version [Online; accessed 15-July-2017]
[26]Gradient descent, http://sebastianruder.com/optimizing-gradient-descent/ [Online; accessed 15-July-2017]
[27]Early stopping, https://deeplearning4j.org/earlystopping [Online; accessed 15-July-2017]
[28]K. Sasaki, H. Tjandra, K. Noda, K. Takahashi and T. Ogata, “Neural network based model for visual-motor integration learning of robot’s drawing behavior: Association of a drawing motion from a drawn image,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, 2015, pp. 2736-2741.
[29]V. Veeriah, N. Zhuang and G. J. Qi, “Differential Recurrent Neural Networks for Action Recognition,” IEEE International Conference on Computer Vision (ICCV), Santiago, 2015, pp. 4041-4049.
[30]Q. Li, X. Zhao and K. Huang, “Learning temporally correlated representations using LSTMs for visual tracking,” IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 2016, pp. 1614-1618.
[31]Y. Bengio, P. Simard and P. Frasconi, “Learning long-term dependencies with gradient descent is difficult,” IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 157-166, 1994.
[32]S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[33]SVM, http://diggdata.in/post/94066544971/support-vector-machine-without-tears [Online; accessed 15-July-2017]
[34]X. Yuan, L. Cai-nian, X. Xiao-liang, J. Mei and Z. Jian-guo, “A two-stage HOG feature extraction processor embedded with SVM for pedestrian detection,” IEEE International Conference on Image Processing (ICIP), Quebec City, QC, 2015, pp. 3452-3455.
[35]Y. Benabbas, N. Ihaddadene, T. Yahiaoui, T. Urruty and C. Djeraba, “Spatio-Temporal Optical Flow Analysis for People Counting,” IEEE International Conference on Advanced Video and Signal Based Surveillance, Boston, MA, 2010, pp. 212-217.
[36]D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” International Conference on Learning Representations, San Diego, 2015.