
臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: Chun-Jing Zhou (周純靜)
Title: Face2Music: Implementing a music control system based on facial emotion recognition using the OM2M framework
Advisor: Chuan-Ching Sue (蘇銓清)
Degree: Master's
Institution: National Cheng Kung University
Department: Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Publication year: 2019
Graduation academic year: 107 (2018–2019)
Language: English
Pages: 64
Keywords: Internet of Things, Emotion Recognition, OM2M, Speaker Control System
Usage statistics:
  • Cited: 0
  • Views: 179
  • Downloads: 0
  • Bookmarks: 0
Abstract: In recent years, AI and the Internet of Things (IoT) have developed rapidly, and people can use these technologies to meet everyday needs, including emotional health. When life brings excessive stress, emotions suffer and mental illness can even develop. To relieve this stress while keeping the service convenient to use, this study combines AI and IoT techniques to implement an entertainment-oriented human-machine interaction system, Face2Music, comprising three modules: the Emotion Controller Module, the Speaker Manager Module, and the Music Manager Module. In the Emotion Controller Module, AI-based emotion recognition lets the user control the music playback device with facial expressions in real time (the system is configured to respond to a detected happy expression by playing music). The Speaker Manager Module adjusts the music players' power and volume settings and can address speakers individually or in groups. The Music Manager Module hosts a music database and an HTTP server, and provides a web management page for editing the playlist as well as a music player that controls the speakers. To make the service more widely available and realize the vision of smart homes and smart cities, the system uses OM2M as its framework and organizes the three modules into a four-layer structure: Application Layer, Network Layer, Gateway Layer, and Device Layer.
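This flow maps naturally onto oneM2M's RESTful resource model as served by Eclipse OM2M: an application entity publishes data by creating a contentInstance under a container, and subscribed entities are notified. The following Python snippet is a minimal sketch of how the Emotion Controller Module could publish a detected emotion to the CSE; it is an illustration only, not the thesis's actual code. The CSE address and the admin:admin origin follow Eclipse OM2M defaults, and the Face2Music/EMOTION container path is a hypothetical name.

# A minimal sketch, assuming Eclipse OM2M defaults; the container path
# "Face2Music/EMOTION" and all names below are hypothetical, not taken
# from the thesis.
import requests

CSE_BASE = "http://127.0.0.1:8080/~/in-cse/in-name"  # assumed IN-CSE root
HEADERS = {
    "X-M2M-Origin": "admin:admin",            # OM2M's default credentials
    "Content-Type": "application/json;ty=4",  # ty=4 = oneM2M contentInstance
}

def publish_emotion(emotion: str) -> None:
    # Create a contentInstance carrying the recognized emotion label.
    # Any entity subscribed to the container is notified of the new instance.
    body = {"m2m:cin": {"con": emotion}}
    resp = requests.post(f"{CSE_BASE}/Face2Music/EMOTION",
                         headers=HEADERS, json=body, timeout=5)
    resp.raise_for_status()

if __name__ == "__main__":
    publish_emotion("happy")  # only a happy expression triggers playback

In this resource-oriented design, the Trigger (MN-AE) would hold a subscription on the same container, so each new contentInstance is pushed to it and, when the label is happy, forwarded on to the Music Controller to start playback.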
Table of Contents:
Chinese Abstract
Abstract
Contents
List of Tables
List of Figures
1. Introduction
2. Background
2.1. Emotion Recognition
2.1.1. Classification of Emotion
2.1.2. Approaches to Emotion Recognition
2.2. oneM2M
2.2.1. oneM2M Architecture and Definitions
2.2.2. oneM2M Common Services Functions
2.2.3. oneM2M Resource Types
2.3. Eclipse OM2M
2.4. Node-RED
3. Related Work
3.1. Facial Expression Recognition
3.2. Emotion Transmission
3.3. IoT Application using oneM2M
4. System Architecture
4.1. Emotion Controller Module
4.1.1. Face2Music App (IN-AE) in ECM
4.1.2. Emotion Recognition (IN-AE)
4.1.3. Trigger (MN-AE)
4.1.4. Music Controller (MN-AE)
4.1.5. Raspberry Pi and Speaker (ADN-AE)
4.2. Music Manager Module
4.2.1. Music Manager (IN-AE)
4.2.2. Music Database
4.2.3. Music HTTP Server
4.3. Speaker Manager Module
4.3.1. Face2Music (IN-AE) SMM Part
4.3.2. Speaker Controller (MN-AE)
5. System Implementation
5.1. Equipment Setup
5.1.1. Hardware Information
5.1.2. Network Setup
5.1.3. Application Entity Information
5.2. OM2M System Implementation
5.2.1. OM2M Registration
5.2.2. OM2M Resource Creation and Subscription
5.3. Module Implementation
5.3.1. Emotion Controller Module
5.3.2. Music Manager Module
5.3.3. Speaker Manager Module
6. Evaluation
6.1. Emotion Recognition Test
6.2. Face2Music Test
7. Conclusion and Future Work
8. References
Appendices