臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


詳目顯示 (Detailed Record)

Author: 賴聖霖 (Sheng-Lin Lai)
Title (Chinese): 基於改進的局部二元圖案的人臉表情辨識和深度影像定位之人與服務型全向移動機器人的互動
Title (English): Human and Omnidirectional Service Robot Interactions by Face Expression Recognition with Improved Local Binary Pattern and Localization with Depth Image
Advisor: 黃志良 (Chih-Lyang Hwang)
Committee members: 蔡奇謚 (Chi-Yi Tsai), 吳修明 (Hsiu-Ming Wu), 施慶隆 (Ching-Long Shih)
Oral defense date: 2019-07-17
Degree: Master's
Institution: 國立臺灣科技大學 (National Taiwan University of Science and Technology)
Department: 電機工程系 (Department of Electrical Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2019
Academic year of graduation: 107 (2018-2019)
Language: Chinese
Number of pages: 69
Keywords (Chinese): 適應有限時間分層飽和控制、人臉表情辨識、人體偵測、全向移動機器人、視覺搜索和跟蹤
Keywords (English): Adaptive hierarchical finite-time saturated control; face expression recognition; human detection; omnidirectional service robot; visual searching and tracking
Record statistics:
  • Cited: 0
  • Views: 165
  • Downloads: 3
  • Bookmarked: 1
Abstract: Using an RGB-D camera, the omnidirectional service robot (ODSR) searches for and detects, within a specified time interval, a user who performs a hand gesture. Once the gesture is detected within 3 m, the user's coordinates are estimated from the depth image of the RGB-D camera. Given those coordinates, the proposed adaptive finite-time hierarchical saturated control (AFTHSC) drives the ODSR to a position 0.75-1.25 m from the user and within -29°~29° of the camera's optical axis, where six facial expressions (anger, disgust, fear, happiness, surprise, and sadness) are recognized. Facial landmarks are detected to segment the face image into three subregions (eyes, nose, and mouth), to which the proposed improved local binary pattern (ILBP) is applied; a multiclass SVM classifier is trained and tested on six facial expression databases (NTUST-IRL, Cohn-Kanade, JAFFE, FACES, KDEF, and MMI) to evaluate the method. The proposed approach not only improves the recognition rate but also shortens the computation time of both offline training and online recognition. Based on the image information, the ODSR accurately searches for or tracks the user with the AFTHSC to perform human-robot interaction (HRI) tasks: the recognition result is displayed on a screen, and the user confirms its correctness with a raised-hand gesture. Finally, a series of HRI experiments, including users outside the databases and various facial expressions, validates the effectiveness and robustness of the proposed method.
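The depth-based localization step described in the abstract amounts to back-projecting a detected pixel and its depth value through the standard pinhole camera model to obtain the user's coordinates in the camera frame. A minimal sketch follows; the intrinsic parameters and function names below are illustrative placeholders, not the thesis's calibrated values:

```python
import math

def back_project(u, v, depth_m, fx, fy, cx, cy):
    """Convert pixel (u, v) with depth Z (metres) into camera-frame
    coordinates (X, Y, Z) via the pinhole model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

def bearing_deg(x, z):
    """Horizontal angle of the point relative to the camera optical axis."""
    return math.degrees(math.atan2(x, z))

# Illustrative intrinsics (placeholders, not the actual calibration):
fx = fy = 525.0
cx, cy = 319.5, 239.5
X, Y, Z = back_project(400, 240, 2.0, fx, fy, cx, cy)
angle = bearing_deg(X, Z)
# The controller would then drive the robot until Z lies in 0.75-1.25 m
# and |angle| <= 29 deg, matching the working region stated in the abstract.
```

The bearing check mirrors the -29°~29° optical-axis constraint under which expression recognition is performed.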
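The feature pipeline in the abstract, per-subregion LBP histograms over the eyes, nose, and mouth concatenated into one feature vector, can be sketched as below. This uses a plain 8-neighbour 3x3 LBP, not the thesis's improved variant (ILBP), and the subregion layout and function names are illustrative assumptions:

```python
import numpy as np

def lbp_codes(gray):
    """Basic 3x3 LBP: compare each pixel's 8 neighbours to its centre
    and pack the comparison bits into an 8-bit code."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    # Neighbour offsets, clockwise from top-left.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    return codes

def region_histogram(gray, bins=256):
    """Normalised histogram of LBP codes over one subregion."""
    h, _ = np.histogram(lbp_codes(gray), bins=bins, range=(0, bins))
    return h / max(h.sum(), 1)

def feature_vector(face, regions):
    """Concatenate per-region LBP histograms (eyes / nose / mouth)."""
    return np.concatenate([region_histogram(face[y0:y1, x0:x1])
                           for (y0, y1, x0, x1) in regions])

# Toy 12x12 "face" with three illustrative subregions.
rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(12, 12), dtype=np.uint8)
regions = [(0, 6, 0, 12), (4, 9, 3, 9), (8, 12, 0, 12)]  # eyes, nose, mouth
fv = feature_vector(face, regions)  # length 3 * 256 = 768
```

In the thesis the resulting vector is fed to a multiclass SVM trained per database; any off-the-shelf multiclass classifier could consume `fv` in the same way.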
Table of Contents
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
Chapter 2  Related Work
Chapter 3  System Description and Research Tasks
  3.1  Vision System Description
  3.2  Omnidirectional Service Robot System Description
  3.3  Research Tasks
Chapter 4  Human-Robot Interaction
  4.1  HRI System Architecture
  4.2  Image Processing
    4.2.1  Depth Image Processing
    4.2.2  Coordinate Transformation
    4.2.3  Image Capture
  4.3  Search Strategy
  4.4  HRI Experimental Procedure
Chapter 5  Facial Expression Recognition
  5.1  Facial Expression Recognition System Architecture
  5.2  Calibration and Preprocessing
    5.2.1  Image Calibration and Face Subregion Segmentation
  5.3  Feature Extraction
    5.3.1  Local Binary Pattern (LBP)
    5.3.2  LBP Histogram
    5.3.3  Feature Vector
  5.4  Multiclass Classifier
  5.5  Threshold Confirmation
  5.6  Facial Expression Recognition Procedure
  5.7  Facial Expression Databases
Chapter 6  Experimental Results and Analysis
  6.1  Training and Testing Results on the Facial Expression Databases
  6.2  HRI Demonstration Results
    6.2.1  Experimental Results for Region 1 (-29°~29°)
    6.2.2  Experimental Results for Region 2 (-21°~-79°)
    6.2.3  Experimental Results for Region 3 (79°~21°)
    6.2.4  Experimental Results for Non-Specific Users
Chapter 7  Conclusions and Future Work
  7.1  Conclusions
  7.2  Future Work
References
Related theses:
1. Human-Robot Interaction between an Omnidirectional Mobile Robot and a Designated Person Using Deep Learning with SSD-FN-KCF
2. Design and Implementation of an Omnidirectional Mobile Robot with Adaptive Hierarchical Finite-Time Saturated Control Based on Enhanced Voice Commands
3. Trajectory Design and Implementation of an Omnidirectional Drive Service Robot Based on Hierarchical Adaptive Finite-Time Control
4. Trajectory Tracking of an Omnidirectional Mobile Robot on Time-Varying Terrain Using Hierarchical Variable Structure Control
5. Following a Specific Person with an Omnidirectional Service Robot Using a Bidirectional LSTM Model with Distributed UWB and Finite-Time Tracking Control with a Recurrent Neural Network
6. Measurement and Analysis of Residual Stress in Circumferential Butt Welds of Pipes Using Reflection Photoelasticity
7. Synthesis of Inorganic Silica Nanoparticles by Hydrolysis of Elemental Silicon, and Effects of Silane-Grafted Silica, Reactive Microgel Particles, and Silane- and Polymer-Grafted Graphene Oxide and Thermally Exfoliated Graphene Oxide on the Cure Kinetics, Glass Transition Temperature, and X-ray Scattering of Vinyl Ester Resins
8. Synthesis of Metal-Organic Frameworks and Derived Nanostructured Materials for Sensing and Catalysis
9. A Study of Public Space in Kampung Residential Areas of Surabaya, Indonesia
10. Real-Time Obstacle Detection, Avoidance, and Mapping for an Outdoor Quadrotor Using Fuzzy Incremental Tracking Control
11. Human-Robot Collaboration Using Dynamic Facial Expression and Wireless Voice Command Recognition Based on Sequential Regression Convolutional Networks
12. Effects of Carbon Materials and Metal Additives on the Hydrogen Storage Performance of AZ31 Magnesium Alloy Processed by Different Methods
13. Research on a Rehabilitation Robot Based on an Industrial Robot Arm
14. Evaluation of Pending Interest Tables with Reservation and Retransmission in Cellular Networks
15. Performance Analysis of Software-Defined Satellite Networks with Renewable Energy