
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 謝萬霖
Author (English): HSIEH, WEN-LIN
Title: 基於聽障輔具使用之詢答機器人系統之研究
Title (English): Research on Question and Answer Robot System Based on the Use of Hearing Impaired Aids
Advisor: 何健鵬
Advisor (English): HO, CHIEN-PENG
Committee members: 李俊賢、許佳興、何健鵬
Committee members (English): LEE, JIN-SHYAN; SHEU, JIA-SHING; Jiann-Perng Ho
Oral defense date: 2020-07-30
Degree: Master's
Institution: 亞東技術學院
Department: 資訊與通訊工程碩士班 (Master's Program in Information and Communication Engineering)
Discipline: Engineering
Field: Electrical and Information Engineering
Thesis type: Academic thesis
Publication year: 2020
Graduation academic year: 108 (2019-2020)
Language: Chinese
Pages: 85
Keywords (Chinese): 聊天機器人 (chatbot), LSTM, Sequence to Sequence, Luong, Bahdanau, 深度學習 (deep learning)
Keywords (English): ChatBot, LSTM, Sequence to Sequence, Luong, Bahdanau
Usage statistics:
  • Cited by: 0
  • Views: 40
  • Rating:
  • Downloads: 0
  • Bookmarked: 1
Hearing-impaired users sometimes encounter difficulties when using hearing assistive devices, or the devices themselves malfunction, and it is not easy for them to obtain relevant information when asking how a device should be used or how a fault can be repaired. In addition, hearing-impaired students are prone to misunderstanding messages and their meaning, their language development in daily life and in learning tends to be delayed, they find it difficult to express themselves through spoken communication, and they are psychosocially less willing to initiate communication with others, let alone use telephone customer service.
The purpose of this thesis is therefore to study a question-and-answer chatbot system for the use of hearing assistive devices, developed on the Line instant messaging platform. Python web crawlers are used to collect text data from websites related to hearing assistive devices, the Jieba system is used for word segmentation, Word2Vec is used to compute word vectors for the text, and the responses of a Sequence to Sequence Attention model and a BERT model are compared on question answering. Based on this comparison, a model is selected for the chatbot to help hearing-impaired students with the questions they encounter when using hearing aids or other hearing assistive devices, and correct and incorrect responses are compared across questions about different types of hearing aids.
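
As a rough illustration of the preprocessing pipeline outlined in the abstract (Python crawler, Jieba word segmentation, Word2Vec embeddings), the following Python sketch shows one plausible way to crawl a hearing-aid FAQ page, segment the text, and train word vectors. It is not the thesis's actual code: the choice of requests, BeautifulSoup, and gensim, the fetch_paragraphs helper, and all parameter values are assumptions made for illustration only.

# Minimal sketch (not the thesis's actual code) of the crawl -> segment -> embed
# pipeline described in the abstract. Library choices (requests, BeautifulSoup,
# gensim) and all parameters are illustrative assumptions.
import jieba
import requests
from bs4 import BeautifulSoup
from gensim.models import Word2Vec

def fetch_paragraphs(url):
    """Download a page and return its non-empty paragraph texts."""
    resp = requests.get(url, timeout=10)
    resp.encoding = resp.apparent_encoding
    soup = BeautifulSoup(resp.text, "html.parser")
    return [p.get_text(strip=True) for p in soup.find_all("p") if p.get_text(strip=True)]

# The hearing-aid guide page cited as reference [5] is used here only as an example source.
corpus = fetch_paragraphs("https://www.goldenday.com.tw/learn.php")

# Segment each paragraph into words with Jieba (default precise mode).
segmented = [jieba.lcut(paragraph) for paragraph in corpus]

# Train skip-gram Word2Vec embeddings on the segmented corpus
# (vector_size is the gensim >= 4 parameter name; older versions call it size).
model = Word2Vec(sentences=segmented, vector_size=100, window=5, min_count=1, sg=1)

# Example lookup: nearest neighbours of the word for "hearing aid", if present.
if "助聽器" in model.wv:
    print(model.wv.most_similar("助聽器", topn=5))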

Table of Contents
Acknowledgments
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Research Background
1.2 Research Motivation
1.3 Literature Review
1.4 Problem Statement
1.5 Research Methods and Contributions
1.6 Thesis Organization
Chapter 2 Background
2.1 Environment and Software
2.1.1 Ngrok
2.1.2 MySQL
2.1.3 TensorFlow
2.1.4 Keras
2.2 Word Segmentation Systems
2.2.1 Jieba
2.2.2 SnowNLP
2.2.3 HanLP
2.2.4 CkipTagger
2.3 Word Vector Systems
2.3.1 Neural Network Architecture
2.3.2 Word2Vec
2.3.3 Continuous Bag of Words Model
2.3.4 Skip-gram Model
2.4 Sequence to Sequence Attention Model
2.4.1 LSTM
2.4.2 Sequence to Sequence
2.4.3 Attention
2.5 BERT
2.5.1 Transformer
2.5.2 BERT
2.6 Bilingual Evaluation Understudy (BLEU)
Chapter 3 Research Methods
3.1 System Architecture
3.2 Question-and-Answer Data
Chapter 4 Experiments and Result Analysis
4.1 Comparison of Word Segmentation Models
4.2 Text Preprocessing
4.3 Comparison of the Sequence to Sequence Attention Model and the BERT Model
4.4 Line Implementation and Experiments
4.5 Field Survey
4.5.1 Data Collection
4.5.2 Questionnaire Design
4.5.3 Questionnaire Analysis and Statistics
Chapter 5 Conclusions and Future Work
5.1 Conclusions
5.2 Future Work
References

[1] 台新銀行 (Taishin Bank) LINE official account. Retrieved July 20, 2020, from https://line.me/R/ti/p/%40richart
[2] 小玉銀行 (E.SUN Bank) LINE official account. Retrieved July 20, 2020, from https://line.me/R/ti/p/@esunbank
[3] 台灣e院 (Taiwan e-Hospital). Retrieved July 20, 2020, from https://sp1.hso.mohw.gov.tw/doctor/Often_question/
[4] 大專校院及高中職聽語障學生教育輔具中心 (Educational Assistive Technology Center for Hearing- and Speech-Impaired Students in Colleges and Senior High Schools). Retrieved July 20, 2020, from https://cacd.nknu.edu.tw/cacd/items.aspx
[5] 巨泉助聽器 (Goldenday Hearing Aids) official website, hearing aid guide. Retrieved July 20, 2020, from https://www.goldenday.com.tw/learn.php
[6] 呂瑞麟、郭欣逸, "A Natural Language Query System Based on Semantic Analysis" (一個基於語意分析的自然語言查詢系統), Master's thesis, 國立中興大學資訊管理學系所, 2017.
[7] Y. Wang and X. Huang, "A hierarchical semantic extraction model for Chinese counseling question based on neural networks," 2017 8th IEEE International Conference on Software Engineering and Service Science (ICSESS), pp. 496-500, 2017.
[8] S. Huang, K. Chen, W. Ma, et al., "Semantic relation identification for consecutive predicative constituents in Chinese," Lingua Sinica, vol. 3, no. 9, 2017.
[9] J. Zhou, Y. Lu, H. Dai, H. Wang, and H. Xiao, "Sentiment Analysis of Chinese Microblog Based on Stacked Bidirectional LSTM," IEEE Access, vol. 7, pp. 38856-38866, 2019.
[10] T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient Estimation of Word Representations in Vector Space," arXiv preprint arXiv:1301.3781, 2013.
[11] 彭昱傑、吳昇, "Design and Implementation of a Chatbot System" (聊天機器人系統設計與實作), Master's thesis, 國立中正大學資訊工程研究所, 2017.
[12] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to Sequence Learning with Neural Networks," arXiv preprint arXiv:1409.3215, 2014.
[13] 魏彰村、潘仁義, "Design and Implementation of a Topic-Oriented Instant-Messaging Chatbot Using Web Crawling: Basketball Consultation with the LINE APP as an Example" (運用爬蟲技術之主題導向即時通訊聊天機器人設計與實現:以籃球領域諮詢結合LINE APP實作為例), Master's thesis, 國立中正大學通訊資訊數位學習碩士在職專班, 2017.
[14] D. Bahdanau, K. Cho, and Y. Bengio, "Neural Machine Translation by Jointly Learning to Align and Translate," arXiv preprint arXiv:1409.0473, 2014 (accepted at ICLR 2015).
[15] T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient Estimation of Word Representations in Vector Space," arXiv preprint arXiv:1301.3781, 2013.
[16] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding," arXiv preprint arXiv:1810.04805, 2018.
[17] Ngrok. Retrieved July 20, 2020, from https://ngrok.com/
[18] XAMPP. Retrieved July 20, 2020, from https://www.apachefriends.org/zh_tw/download.html
[19] TensorFlow 1.14.1 API documentation. Retrieved July 20, 2020, from https://github.com/tensorflow/docs/tree/r1.14/site/en/api_docs
[20] Keras 2.0.8 documentation. Retrieved July 20, 2020, from https://faroit.com/keras-docs/2.0.8/
[21] Jieba. Retrieved July 20, 2020, from https://github.com/fxsjy/jieba
[22] SnowNLP. Retrieved July 20, 2020, from https://github.com/isnowfy/snownlp
[23] HanLP. Retrieved July 20, 2020, from https://github.com/hankcs/HanLP
[24] CkipTagger. Retrieved July 20, 2020, from https://github.com/ckiplab/ckiptagger
[25] S. Hochreiter and J. Schmidhuber, "Long Short-Term Memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[26] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention Is All You Need," arXiv preprint arXiv:1706.03762, 2017.

Electronic full text (publicly available online from 2025-08-26)