References
[1] P. Ekman, W. V. Friesen, and P. Ellsworth, "Emotion in the Human Face," Oxford University Press, 1972.
[2] S. Boucenna, P. Gaussier, and L. Hafemeister, "Development of first social referencing skills: emotional interaction as a way to regulate robot behavior," IEEE Trans. Autonomous Mental Development, vol. 6, no. 1, pp. 42-55, Mar. 2014.
[3] A. Zaraki, D. Mazzei, M. Giuliani, and D. De Rossi, "Designing and evaluating a social gaze-control system for a humanoid robot," IEEE Trans. Human-Machine Syst., vol. 44, no. 2, pp. 157-168, Apr. 2014.
[4] A. Zaraki, M. Pieroni, D. De Rossi, D. Mazzei, R. Garofalo, L. Cominelli, and M. B. Dehkordi, "Design and evaluation of a unique social perception system for human-robot interaction," IEEE Trans. Cognitive and Developmental Syst., vol. 9, no. 4, pp. 341-352, Dec. 2017.
[5] F. Ren and Z. Huang, "Automatic facial expression learning method based on humanoid robot XIN-REN," IEEE Trans. Human-Machine Syst., vol. 46, no. 6, pp. 810-821, Dec. 2016.
[6] S. A. Koch, C. E. Stevens, C. D. Clesi, J. B. Lebersfeld, A. G. Sellers, M. E. McNew, F. J. Biasini, F. R. Amthor, and M. I. Hopkins, "A feasibility study evaluating the emotionally expressive robot SAM," Int. J. of Social Robotics, vol. 9, no. 4, pp. 601-613, 2017.
[7] S. C. Hsu, H. H. Huang, and C.-L. Huang, "Facial expression recognition for human-robot interaction," 1st IEEE International Conference on Robotic Computing, pp. 1-7, 2017.
[8] Z. Liu, M. Wu, W. Cao, L. Chen, J. Xu, R. Zhang, M. Zhou, and J. Mao, "A facial expression emotion recognition based human-robot interaction system," IEEE/CAA J. of Automatica Sinica, vol. 4, no. 4, pp. 668-676, Oct. 2017.
[9] Y. Liu, X. Yuan, X. Gong, Z. Xie, F. Fang, and Z. Luo, "Conditional convolution neural network enhanced random forest for facial expression recognition," Pattern Recognition, vol. 84, pp. 251-261, 2018.
[10] Y. Yaddaden, M. Adda, A. Bouzouane, S. Gaboury, and B. Bouchard, "User action and facial expression recognition for error detection system in an ambient assisted environment," Expert Systems with Applications, vol. 112, pp. 173-189, 2018.
[11] Y. Chen, T. Wang, H. Wu, and Y. Wang, "A fast and accurate multi-model facial expression recognition method for affective intelligent robots," IEEE International Conference on Intelligence and Safety for Robotics, pp. 319-324, Shenyang, China, Aug. 24-27, 2018.
[12] J. Deng, G. Pang, Z. Zhang, Z. Pang, H. Yang, and G. Yang, "cGAN based facial expression recognition for human-robot interaction," IEEE Access, vol. 7, pp. 9848-9859, 2019.
[13] J.-H. Kim, B.-G. Kim, P. P. Roy, and D.-M. Jeong, "Efficient facial expression recognition algorithm based on hierarchical deep neural network structure," IEEE Access, vol. 7, pp. 41273-41285, 2019.
[14] A. Lopez-Rincon, "Emotion recognition using facial expressions in children using the NAO robot," IEEE Int. Conf., pp. 146-153, 2019.
[15] H. B. Abebe and C.-L. Hwang, "RGB-D face recognition using LBP with suitable feature dimension of depth image," IET Cyber Physical Systems: Theory & Applications, to be published, Feb. 2019.
[16] M. Wu, W. Su, L. Chen, Z. Liu, W. Cao, and K. Hirota, "Weight-adapted convolution neural network for facial expression recognition in human-robot interaction," IEEE Trans. Syst. Man and Cybern., Syst., to be published, 2019.
[17] L. Chen, M. Li, W. Su, M. Wu, K. Hirota, and W. Pedrycz, "Adaptive feature selection-based AdaBoost-KNN with direct optimization for dynamic emotion recognition in human-robot interaction," IEEE Trans. Emerging Topics in Computational Intelligence, to be published, 2019.
[18] L. Chen, M. Wu, M. Zhou, Z. Liu, J. She, and K. Hirota, "Dynamic emotion understanding in HRI based on two-layer fuzzy SVR-TS model," IEEE Trans. Syst. Man and Cybern., Syst., to be published, 2019.
[19] C. Li, Y. Hou, P. Wang, and W. Li, "Multiview-based 3-D action recognition using deep networks," IEEE Trans. Human-Machine Syst., vol. 49, no. 1, pp. 95-104, Feb. 2019.
[20] J. Nie, L. Huang, W. Zhang, G. Wei, and Z. Wei, "Deep feature ranking for person re-identification," IEEE Access, to be published, 2019.
[21] C.-L. Hwang, W. H. Hung, and Y. Lee, "Tracking design of omnidirectional drive service robot using hierarchical adaptive finite-time control," IEEE IECON-2018, Washington, D.C., USA, Oct. 21-23, 2018.
[22] T. Ahonen, A. Hadid, and M. Pietikäinen, "Face description with local binary patterns: application to face recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 12, pp. 2037-2041, 2006.
[23] C.-W. Hsu and C.-J. Lin, "A comparison of methods for multiclass support vector machines," IEEE Trans. Neural Netw., vol. 13, no. 2, pp. 415-425, Mar. 2002.
[24] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Trans. Syst. Man and Cybern., Pt. B, vol. 42, no. 2, pp. 513-529, Apr. 2012.
[25] A. Rocha and S. K. Goldenstein, "Multiclass from binary: expanding one-versus-all, one-versus-one and ECOC-based approaches," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 2, pp. 289-302, Mar. 2014.
[26] C.-L. Hwang, B. L. Chen, H. H. Huang, and H.-T. Syu, "Hybrid learning model and MMSVM classification for on-line visual imitation of a human with 3-D motions," Int. J. Robot. Autonomous Syst., vol. 71, pp. 150-165, 2015.
[27] S. Haykin, Neural Networks and Learning Machines, 3rd ed., Pearson Education, Upper Saddle River, NJ, USA, 2009.
[28] C.-L. Hwang and Y.-H. Chen, "Fuzzy fixed-time learning control with saturated input, nonlinear switching surface and switching gain to achieve null tracking error," IEEE Trans. Fuzzy Syst., to be published, 2019.