Author: 王名宸
Author (English): Ming-Chen Wang
Title: 可解釋性之深度特徵擷取於機器教學
Title (English): Extracting Explainable Deep Representation for Machine Tutoring
Advisor: 鮑興國
Advisor (English): Hsiang-Kuo Pao
Committee members: 項天瑞、楊傳凱、李育杰
Committee members (English): Tien-Jui Hsiang, Chuan-Kai Yang, Yu-Chieh Li
Oral defense date: 2019-01-29
Degree: Master's
Institution: National Taiwan University of Science and Technology
Department: Department of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Document type: Academic thesis
Year of publication: 2019
Academic year of graduation: 107
Language: English
Number of pages: 55
Keywords (Chinese, translated): adversarial autoencoder, explainable model, deep learning, interactive machine learning, machine tutoring, feature extraction
Keywords (English): Adversarial autoencoder, Explainable model, Deep learning, Interactive machine learning, Machine tutoring, Representation
As deep learning matures, it is being applied to a growing number of domains. However, the low interpretability of the deep networks it commonly relies on has drawn increasing attention, and the EU's new data protection regulation grants users the right to request an explanation of automated, personalized decisions. Interpretability is therefore a problem that must be overcome before deep learning can be deployed at scale. In this work, we apply an explainable deep learning method to interactive machine learning (IaML): with the aid of wearable devices, we record a novice's motion data, transmit it to a tutoring machine for analysis, and return the results to the novice as visual feedback that guides them in correcting their movements. The learning scenario is the Taiko no Tatsujin drumming rhythm game: novices play while wearing a simple sensing device, review their strikes through the round-by-round summary feedback the interactive tutoring machine provides, and receive suggestions for improvement, progressing significantly more than with self-learning and approaching expert-level striking performance. The study proceeds in stages, from data collection and data preprocessing through model and algorithm design to the design of the human-machine interface, and concludes with pilot-study results showing that the tutoring machine increases the rate at which novices improve. In this work we also extend the adversarial autoencoder, discuss the implementation, and apply it to the time-series Taiko data set.
Deep learning has become a powerful and mature technique and is deployed in more and more applications every day. However, people have started to question the interpretability of deep learning methods, and regulations such as the GDPR (General Data Protection Regulation) give users the right to an explanation of decisions made by an algorithm. Deep learning therefore still needs to overcome this shortcoming before it can be applied widely. In this research, we apply explainable deep neural networks to IaML (interactive machine learning). We record novices' motions with wearable sensors, and the tutoring machine analyzes the data before generating visual feedback that helps the novices improve their practice. The scenario is the Taiko rhythm game: novices play the game with wearable sensors and try to improve with the machine's help. This thesis introduces, step by step, data collection, data preprocessing, model design, and graphical user interface design, and ends with the pilot study of our IaML project. As a methodological contribution, we modify the adversarial autoencoder and apply it to the time-series Taiko data set.
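The adversarial autoencoder named in the abstract trains in two alternating phases: a reconstruction phase, where the encoder-decoder pair minimizes reconstruction error, and a regularization phase, where a discriminator learns to tell samples from an imposed prior apart from encoder outputs, and the encoder takes a step to fool it. The thesis's modified model is not reproduced here; the following is only a minimal sketch of the general technique, assuming linear encoder/decoder maps, a logistic-regression discriminator, and random toy data in place of the Taiko sensor recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for sensor data; the real Taiko recordings are not used here.
d_in, d_z, n = 8, 2, 256
X = rng.normal(size=(n, d_in))
mse0 = float(np.mean(X ** 2))  # reconstruction error of the all-zero map

# Linear encoder/decoder and a logistic-regression discriminator on codes.
W_enc = rng.normal(scale=0.1, size=(d_in, d_z))
W_dec = rng.normal(scale=0.1, size=(d_z, d_in))
w_dis = rng.normal(scale=0.1, size=d_z)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

lr = 0.05
for _ in range(300):
    # Phase 1 (reconstruction): one gradient step on mean squared error.
    Z = X @ W_enc
    err = Z @ W_dec - X
    W_dec -= lr * (Z.T @ err) / n
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / n

    # Phase 2 (regularization): the discriminator separates prior samples
    # (label "real") from encoder outputs (label "fake") ...
    Z = X @ W_enc
    Z_prior = rng.normal(size=(n, d_z))  # imposed prior p(z) = N(0, I)
    p_fake, p_real = sigmoid(Z @ w_dis), sigmoid(Z_prior @ w_dis)
    w_dis += lr * (Z_prior.T @ (1.0 - p_real) - Z.T @ p_fake) / n
    # ... then the encoder takes a step to make its codes look "real".
    p_fake = sigmoid(X @ W_enc @ w_dis)
    W_enc += lr * (X.T @ ((1.0 - p_fake)[:, None] * w_dis[None, :])) / n

recon = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(f"reconstruction MSE: {mse0:.3f} -> {recon:.3f}")
```

After training, the latent codes both support reconstruction and are pushed toward the chosen prior, which is what makes the latent space usable for downstream tasks such as the visual feedback described above.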
Recommendation Letter . . . . . . . . . . . . . . . . . . . . . . . . i
Approval Letter . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . v
Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
List of Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 Learning by demonstration . . . . . . . . . . . . . 5
1.2.2 Intelligent tutoring . . . . . . . . . . . . . . . . . 5
1.2.3 Precision sports . . . . . . . . . . . . . . . . . . . 6
1.2.4 Deep learning . . . . . . . . . . . . . . . . . . . . 7
2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1 Adversarial autoencoder . . . . . . . . . . . . . . . . . . 8
2.2 Multi-Layer perceptron . . . . . . . . . . . . . . . . . . . 10
2.3 Long short-term memory . . . . . . . . . . . . . . . . . . 11
2.4 Convolutional neural networks . . . . . . . . . . . . . . . 13
2.5 Modified Adversarial Autoencoder Regression . . . . . . 14
3 Experiment and Result . . . . . . . . . . . . . . . . . . . . . . 15
3.1 Taiko data set and data collection . . . . . . . . . . . . . . 16
3.1.1 Taiko game . . . . . . . . . . . . . . . . . . . . . 16
3.1.2 Data collection . . . . . . . . . . . . . . . . . . . 17
3.1.3 Taiko data set . . . . . . . . . . . . . . . . . . . . 18
3.2 Data preprocessing . . . . . . . . . . . . . . . . . . . . . 21
3.3 Model design and Result . . . . . . . . . . . . . . . . . . 26
3.4 Visualization . . . . . . . . . . . . . . . . . . . . . . . . 30
4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.1 Drawbacks . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . 53
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54