臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Graduate student: 陳沛萱
Graduate student (English): Pei-Hsuan Chen
Thesis title: 基於縮放指數線性單元的循環神經網路推薦系統之研究
Thesis title (English): A Study of Recommender System Based on Recurrent Neural Network Using Scaled Exponential Linear Unit
Advisor: 吳怡樂
Advisor (English): Yi-Leh Wu
Committee members: 陳建中、唐政元、吳怡樂、閻立剛
Committee members (English): Jiann-Jone Chen, Cheng-Yuan Tang, Yi-Leh Wu, Li-Kang Yen
Oral defense date: 2019-12-27
Degree: Master's
Institution: National Taiwan University of Science and Technology (國立臺灣科技大學)
Department: Department of Computer Science and Information Engineering (資訊工程系)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Publication year: 2020
Graduation academic year: 108 (2019/2020)
Language: English
Number of pages: 48
Chinese keywords: 推薦系統、深度學習、協同式過濾、循環神經網路、隱性評價資料集
English keywords: Recommender System, Deep Learning, Collaborative Filtering, Recurrent Neural Network, Implicit Feedback Datasets
Usage statistics:
  • Cited by: 0
  • Views: 271
  • Rating:
  • Downloads: 45
  • Bookmarked: 0
With the rapid development of deep learning, deep learning techniques have been widely applied across many fields, including recommender systems. The main purpose of a recommender system is to help users filter large amounts of information and to recommend products or services that match their personal preferences. As the Internet and electronic devices continue to grow and provide ever more convenience, users have become increasingly dependent on online platforms; whether on e-commerce platforms or music streaming services, recommender systems are widely deployed to surface items of interest, extend users' time on the platform, and encourage further consumption. In this thesis, we take a state-of-the-art recommendation framework, the Recurrent Neural Network based Collaborative Filtering recommender system (RNNCF), as the main subject of study and propose several new training data formats that carry additional temporal information. We also introduce the state-of-the-art Scaled Exponential Linear Unit (SELU) activation function into RNNCF and study the impact of applying it at different stages of the network. Finally, we conduct experiments on two real-world datasets, MovieLens-1m and Pinterest. The results show that, compared with introducing SELU into the Long Short-Term Memory (LSTM) component, using SELU as the activation function in the Multi-Layer Perceptron (MLP) of RNNCF yields greater benefits.
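For readers who want a concrete picture of the activation under study, the following minimal sketch implements SELU with the fixed constants from Klambauer et al. [19] and uses it as the hidden-layer activation of a toy MLP scoring branch of the kind found in NCF/RNNCF-style recommenders. The embedding sizes, layer widths, and helper names (selu, mlp_score) are illustrative assumptions for this sketch only, not the thesis's actual implementation or hyperparameters.

import numpy as np

# Fixed SELU constants from Klambauer et al., "Self-normalizing neural networks" [19].
SELU_LAMBDA = 1.0507009873554805
SELU_ALPHA = 1.6732632423543772

def selu(x):
    # SELU(x) = lambda * x             for x > 0
    #           lambda * alpha * (e^x - 1)  otherwise
    return SELU_LAMBDA * np.where(x > 0, x, SELU_ALPHA * (np.exp(x) - 1.0))

def mlp_score(user_emb, item_emb, hidden_layers, w_out, b_out):
    # Concatenate user/item embeddings, pass them through SELU-activated
    # dense layers, and return a scalar preference score (logit).
    h = np.concatenate([user_emb, item_emb])
    for W, b in hidden_layers:
        h = selu(W @ h + b)          # SELU in place of the usual ReLU/tanh
    return float(w_out @ h + b_out)  # linear output layer

# Hypothetical dimensions: 8-d embeddings, hidden widths of 16 and 8 units.
rng = np.random.default_rng(0)
user_emb, item_emb = rng.normal(size=8), rng.normal(size=8)
hidden_layers = [
    (rng.normal(scale=0.1, size=(16, 16)), np.zeros(16)),
    (rng.normal(scale=0.1, size=(8, 16)), np.zeros(8)),
]
w_out, b_out = rng.normal(scale=0.1, size=8), 0.0
print(mlp_score(user_emb, item_emb, hidden_layers, w_out, b_out))

The appeal of SELU reported in [19] is its self-normalizing behavior: with the fixed lambda and alpha above and suitable initialization, layer activations tend to stay near zero mean and unit variance, which motivates trying it in place of the usual activations at different stages of the RNNCF network.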
RECOMMENDATION LETTER I
APPROVAL LETTER II
論文摘要 III
ABSTRACT IV
ACKNOWLEDGEMENTS V
CONTENTS VI
LIST OF FIGURES VIII
LIST OF TABLES IX
CHAPTER 1. INTRODUCTION 1
CHAPTER 2. RELATED WORK AND PRELIMINARIES 3
2.1 DEEP LEARNING 3
2.2 RECOMMENDER SYSTEMS 5
2.3 ACTIVATION FUNCTION 6
2.3.1 Sigmoid Function, Logistic and tanh Function 7
2.3.2 Rectified Linear Unit (ReLU) 8
2.3.3 Scaled Exponential Linear Units (SELU) 9
2.4 RECURRENT NEURAL NETWORK BASED COLLABORATIVE FILTERING (RNNCF) 11
CHAPTER 3. PROPOSED METHOD 16
3.1 ADD TIMESTAMP IN THE INPUT FORMAT 16
3.2 REVERSE THE INPUT DATA TO CONSIDER REVERSE TIME RELATIONSHIP 17
3.3 USING THE SCALED EXPONENTIAL LINEAR UNIT AS THE ACTIVATION FUNCTION 18
CHAPTER 4. EXPERIMENTS 20
4.1 EXPERIMENTAL SETTINGS 20
4.2 THE IMPACT OF ADDING TIMESTAMP IN THE INPUT FORMAT (RQ1) 23
4.3 THE IMPACT OF ADDING THE REVERSE INPUT DATA (RQ2) 25
4.4 THE IMPACT OF USING SCALED EXPONENTIAL LINEAR UNIT AS THE ACTIVATION FUNCTION (RQ3) 27
CHAPTER 5. CONCLUSIONS 30
REFERENCES 31
APPENDIX I: THE SPECIFIC INPUT OF EACH METHOD 34
APPENDIX II: IMPACT OF DROPOUT RATE 36
[1] H. C. Hu, “Recurrent Neural Network based Collaborative Filtering Recommender System.” Master Thesis, Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taiwan, 2018.
[2] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, p. 436, 2015.
[3] C. Cortes and V. Vapnik, “Support-vector networks,” Machine learning, vol. 20, no. 3, pp. 273–297, 1995.
[4] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, p. 533, 1986.
[5] G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural computation, vol. 18, no. 7, pp. 1527–1554, 2006.
[6] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, 2006.
[7] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in neural information processing systems, pp. 1097–1105, 2012.
[8] K. Cao and A. K. Jain, “Automated latent fingerprint recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
[9] Wikipedia, “Typical convolutional neural network architecture”, https://en.wikipedia.org/wiki/Convolutional_neural_network, Accessed November 20th, 2019.
[10] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[11] Wikipedia, “Long Short-Term Memory.svg”, https://en.wikipedia.org/wiki/File:Long_Short-Term_Memory.svg, Accessed November 20th, 2019.
[12] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T.-S. Chua, “Neural collaborative filtering,” in Proceedings of the 26th International Conference on World Wide Web, pp. 173–182, International World Wide Web Conferences Steering Committee, 2017.
[13] F. Yuan, A. Karatzoglou, I. Arapakis, J. M. Jose, and X. He, “A simple convolutional generative network for next item recommendation,” in Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. ACM, 2019, pp. 582–590.
[14] X. Du, X. He, F. Yuan, J. Tang, Z. Qin, and T.-S. Chua, “Modeling embedding dimension correlations via convolutional neural collaborative filtering,” arXiv preprint arXiv:1906.11171, 2019.
[15] Y. K. Tan, X. Xu, and Y. Liu, “Improved recurrent neural networks for session-based recommendations,” in Proceedings of the 1st Workshop on Deep Learning for Recommender Systems. ACM, 2016, pp. 17–22.
[16] B. Hidasi, A. Karatzoglou, L. Baltrunas, and D. Tikk, “Session-based recommendations with recurrent neural networks,” arXiv preprint arXiv:1511.06939, 2015.
[17] U. Udofia, “Basic Overview of Convolutional Neural Network (CNN) ”, https://medium.com/@udemeudofia01/basic-overview-of-convolutional-neural-network-cnn-4fcc7dbb4f17, Accessed November 29th, 2019.
[18] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proceedings of the fourteenth international conference on artificial intelligence and statistics, 2011, pp. 315–323.
[19] G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter, “Self-normalizing neural networks,” in Advances in neural information processing systems, 2017, pp. 971–980.
[20] D.-A. Clevert, T. Unterthiner, and S. Hochreiter, “Fast and accurate deep network learning by exponential linear units (ELUs),” arXiv preprint arXiv:1511.07289, 2015.
[21] E. Cohen, “SELU — Make FNNs Great Again (SNN)”, https://towardsdatascience.com/selu-make-fnns-great-again-snn-8d61526802a9, Accessed December 17th, 2019.
[22] GroupLens, “MovieLens 1M dataset.” https://grouplens.org/datasets/movielens/1m/. Accessed November 23, 2018.
[23] G. Xue, “Pinterest ICCV dataset.” https://sites.google.com/site/xueatalphabeta/
[24] Y. Koren, “Factorization meets the neighborhood: a multifaceted collaborative filtering model,” in Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 426–434, ACM, 2008.
[25] X. Geng, H. Zhang, J. Bian, and T.-S. Chua, “Learning image and user features for recommendation in social networks,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 4274– 4282, 2015.
[26] I. Bayer, X. He, B. Kanagal, and S. Rendle, “A generic coordinate descent framework for learning from implicit feedback,” in Proceedings of the 26th International Conference on World Wide Web, pp. 1341–1350, International World Wide Web Conferences Steering Committee, 2017.
[27] X. He, H. Zhang, M.-Y. Kan, and T.-S. Chua, “Fast matrix factorization for online recommendation with implicit feedback,” in Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 549–558, ACM, 2016.
[28] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme, “BPR: Bayesian personalized ranking from implicit feedback,” in Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, pp. 452–461, AUAI Press, 2009.
[29] A. M. Elkahky, Y. Song, and X. He, “A multi-view deep learning approach for cross domain user modeling in recommendation systems,” in Proceedings of the 24th International Conference on World Wide Web, pp. 278–288, International World Wide Web Conferences Steering Committee, 2015.
[30] X. He, T. Chen, M.-Y. Kan, and X. Chen, “TriRank: Review-aware explainable recommendation by modeling aspects,” in Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pp. 1661–1670, ACM, 2015.
[31] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in International Conference on Learning Representations (ICLR), Ithaca, 2015.