臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Author: 阮明皓 (Ming-Hao Juan)
Title: Attentive Graph-based Text-aware Preference Modeling for Top-N Recommendation (基於注意力機制與圖學習並整合物品文字資訊之用戶偏好建模)
Advisor: 鄭卜壬 (Pu-Jen Cheng)
Degree: Master
Institution: 國立臺灣大學 (National Taiwan University)
Department: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering)
Narrow Field: Engineering (工程學門)
Detailed Field: Electrical Engineering and Computer Science (電資工程學類)
Type of paper: Academic thesis/dissertation
Publication Year: 2022
Graduation Academic Year: 110 (2021–2022)
Language: English
Number of pages: 26
Keywords (Chinese): 推薦系統、協同過濾、圖卷積網路、文字資訊、注意力機制
Keywords (English): Recommendation, Collaborative Filtering, Graph Convolutional Network, Textual Information, Attention Mechanism
NCL record status:
  • Cited: 0
  • Hits: 67
  • Downloads: 5
  • Favorites: 0

Abstract: Textual data are commonly used nowadays as auxiliary information for modeling user preferences. While many prior works utilize user reviews for rating prediction, few focus on top-N recommendation, and even fewer try to incorporate item textual content such as titles and descriptions. Although review-based models deliver promising performance for rating prediction, we find empirically that many of them do not perform comparably well on top-N recommendation. Moreover, user reviews are not available in some recommendation scenarios, whereas item textual content is more prevalent. On the other hand, recent graph convolutional network (GCN)-based models demonstrate state-of-the-art performance for top-N recommendation. Thus, in this work, we aim to further improve top-N recommendation by effectively modeling both item textual content and high-order connectivity in the user-item interaction graph. We propose a new model named Attentive Graph-based Text-aware Recommendation Model (AGTM), and provide extensive experiments to justify the rationality and effectiveness of our model design.
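
The full thesis text is not reproduced on this record page, so AGTM's exact formulation is not shown here. As a rough, hedged illustration of the general approach the abstract describes (item representations derived from textual content, refined by GCN propagation over the user-item interaction graph, and scored for top-N ranking), the Python sketch below uses a LightGCN-style propagation scheme; every name in it (normalized_adjacency, propagate, n_layers, and so on) is an illustrative assumption rather than the authors' implementation, and the attention-based aspect-aware user modeling listed in the table of contents is omitted.

import numpy as np

# Illustrative sketch only, not the authors' AGTM implementation:
# item embeddings are initialized from encoded textual content, user
# embeddings are free parameters, and both are refined by LightGCN-style
# propagation over the user-item interaction graph, then averaged across
# layers before scoring items for top-N ranking.

def normalized_adjacency(interactions, n_users, n_items):
    """Symmetrically normalized adjacency of the bipartite interaction graph."""
    n = n_users + n_items
    a = np.zeros((n, n))
    for u, i in interactions:                      # observed user-item pairs
        a[u, n_users + i] = a[n_users + i, u] = 1.0
    deg = a.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def propagate(user_emb, item_text_emb, a_hat, n_layers=3):
    """Propagate embeddings n_layers times and average all layers."""
    e = np.vstack([user_emb, item_text_emb])       # layer-0 embeddings
    layers = [e]
    for _ in range(n_layers):
        e = a_hat @ e                              # neighborhood aggregation
        layers.append(e)
    combined = np.mean(layers, axis=0)             # layer combination
    return combined[:user_emb.shape[0]], combined[user_emb.shape[0]:]

# Toy usage: 3 users, 4 items, 8-dimensional embeddings.
rng = np.random.default_rng(0)
interactions = [(0, 1), (0, 2), (1, 2), (2, 3)]
user_emb = rng.normal(size=(3, 8))                 # learnable in practice
item_text_emb = rng.normal(size=(4, 8))            # e.g. output of a text encoder
a_hat = normalized_adjacency(interactions, n_users=3, n_items=4)
u_final, i_final = propagate(user_emb, item_text_emb, a_hat)
scores = u_final @ i_final.T                       # rank items per user for top-N

In such a setup, the layer-0 item embeddings would typically come from a pretrained text encoder applied to item titles and descriptions, while the propagation layers inject the high-order collaborative signal the abstract refers to.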

Table of Contents:
Acknowledgements
Abstract (Chinese)
Abstract
Contents
List of Figures
List of Tables
Chapter 1 Introduction
Chapter 2 Related Work
2.1 Factorization Methods
2.2 Knowledge-Graph-based Methods
2.3 Review-based Methods
Chapter 3 Methodology
3.1 Problem Formulation
3.2 Textual Content Modeling
3.2.1 Extracting Semantics from Textual Contents
3.2.2 Condensing Content Representations
3.3 Item-Based Preference Modeling
3.3.1 Attentive Aspect-aware User Modeling
3.3.2 Embedding Propagation and Layer Combination
3.4 Model Prediction and Training
Chapter 4 Experiments
4.1 Experimental Settings
4.1.1 Dataset Description
4.1.2 Evaluation Metrics
4.1.3 Compared Methods
4.1.4 Hyper-parameter Settings
4.2 Performance Comparison
4.3 Ablation Studies
4.3.1 Effect of Main Components and Two-stage Approach
4.3.2 Effect of Number of GCN Layers
Chapter 5 Conclusion and Future Work
References