[1] Z. C. Lipton, “The mythos of model interpretability,” arXiv preprint arXiv:1606.03490, 2016.
[2] N. Tintarev and J. Masthoff, “Designing and evaluating explanations for recommender systems,” in Recommender systems handbook. Springer, 2011, pp. 479–510.
[3] J. B. Schafer, J. Konstan, and J. Riedl, “Recommender systems in e-commerce,” in Proceedings of the 1st ACM conference on Electronic commerce. ACM, 1999, pp. 158–166.
[4] Q. Ai, V. Azizi, X. Chen, and Y. Zhang, “Learning heterogeneous knowledge base embeddings for explainable recommendation,” Algorithms, vol. 11, no. 9, p. 137, 2018.
[5] X. Wang, D. Wang, C. Xu, X. He, Y. Cao, and T.-S. Chua, “Explainable reasoning over knowledge graphs for recommendation,” arXiv preprint arXiv:1811.04540, 2018.
[6] D. Pedreschi, F. Giannotti, R. Guidotti, A. Monreale, L. Pappalardo, S. Ruggieri, and F. Turini, “Open the black box data-driven explanation of black box decision systems,” arXiv preprint arXiv:1806.09936, 2018.
[7] K. Patel, J. Fogarty, J. A. Landay, and B. Harrison, “Investigating statistical machine learning as a tool for software development,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2008, pp. 667–676.
[8] S. Kaufman, S. Rosset, C. Perlich, and O. Stitelman, “Leakage in data mining: Formulation, detection, and avoidance,” ACM Transactions on Knowledge Discovery from Data (TKDD), vol. 6, no. 4, p. 15, 2012.
[9] M. Van Lent, W. Fisher, and M. Mancuso, “An explainable artificial intelligence system for small-unit tactical behavior,” in Proceedings of the National Conference on Artificial Intelligence. AAAI Press/MIT Press, 2004, pp. 900–907.
[10] R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi, “A survey of methods for explaining black box models,” ACM Computing Surveys (CSUR), vol. 51, no. 5, p. 93, 2018.
[11] M. Craven and J. W. Shavlik, “Extracting tree-structured representations of trained networks,” in Advances in Neural Information Processing Systems, 1996, pp. 24–30.
[12] U. Johansson and L. Niklasson, “Evolving decision trees using oracle guides,” in 2009 IEEE Symposium on Computational Intelligence and Data Mining. IEEE, 2009, pp. 238–244.
[13] H. F. Tan, G. Hooker, and M. T. Wells, “Tree space prototypes: Another look at making tree ensembles interpretable,” arXiv preprint arXiv:1611.07115, 2016.
[14] Y. Lou, R. Caruana, and J. Gehrke, “Intelligible models for classification and regression,” in Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2012, pp. 150–158.
[15] Y. Lou, R. Caruana, J. Gehrke, and G. Hooker, “Accurate intelligible models with pairwise interactions,” in Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2013, pp. 623–631.
[16] A. Nguyen, A. Dosovitskiy, J. Yosinski, T. Brox, and J. Clune, “Synthesizing the preferred inputs for neurons in neural networks via deep generator networks,” in Advances in Neural Information Processing Systems, 2016, pp. 3387–3395.
[17] J. Chen, L. Song, M. J. Wainwright, and M. I. Jordan, “Learning to explain: An information-theoretic perspective on model interpretation,” arXiv preprint arXiv:1802.07814, 2018.
[18] K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep inside convolutional networks: Visualising image classification models and saliency maps,” arXiv preprint arXiv:1312.6034, 2013.
[19] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, “Striving for simplicity: The all convolutional net,” arXiv preprint arXiv:1412.6806, 2014.
[20] M. T. Ribeiro, S. Singh, and C. Guestrin, “Why should I trust you?: Explaining the predictions of any classifier,” in Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. ACM, 2016, pp. 1135–1144.
[21] ——, “Anchors: High-precision model-agnostic explanations,” in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[22] R. Guidotti, A. Monreale, S. Ruggieri, D. Pedreschi, F. Turini, and F. Giannotti, “Local rule-based explanations of black box decision systems,” 2018.
[23] S. M. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” in Advances in Neural Information Processing Systems, 2017, pp. 4765–4774.
[24] P. W. Koh and P. Liang, “Understanding black-box predictions via influence functions,” arXiv preprint arXiv:1703.04730, 2017.
[25] X. Zhang, A. Solar-Lezama, and R. Singh, “Interpreting neural network judgments via minimal, stable, and symbolic corrections,” arXiv preprint arXiv:1802.07384, 2018.
[26] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio, “Show, attend and tell: Neural image caption generation with visual attention,” in International Conference on Machine Learning, 2015, pp. 2048–2057.
[27] Q. Zhang, Y. N. Wu, and S.-C. Zhu, “Interpretable convolutional neural networks,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 8827–8836.
[28] M. Tsang, H. Liu, S. Purushotham, P. Murali, and Y. Liu, “Neural interaction transparency (NIT): Disentangling learned interactions for improved interpretability,” in Advances in Neural Information Processing Systems 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds. Curran Associates, Inc., 2018, pp. 5809–5818. [Online]. Available: http://papers.nips.cc/paper/7822-neural-interaction-transparency-nit-disentangling-learned-interactions-for-improved-interpretability.pdf
[29] O. Li, H. Liu, C. Chen, and C. Rudin, “Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions,” arXiv preprint arXiv:1710.04806, 2017.
[30] J. S. Breese, D. Heckerman, and C. Kadie, “Empirical analysis of predictive algorithms for collaborative filtering,” in Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence. Morgan Kaufmann Publishers Inc., 1998, pp. 43–52.
[31] X. Su and T. M. Khoshgoftaar, “A survey of collaborative filtering techniques,” Advances in Artificial Intelligence, vol. 2009, 2009.
[32] Y. Koren, “Factorization meets the neighborhood: a multifaceted collaborative filtering model,” in Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2008, pp. 426–434.
[33] B. M. Sarwar, G. Karypis, J. A. Konstan, J. Riedl et al., “Item-based collaborative filtering recommendation algorithms,” WWW, vol. 1, pp. 285–295, 2001.
[34] L. Si and R. Jin, “Flexible mixture model for collaborative filtering,” in Proceedings of the 20th International Conference on Machine Learning (ICML-03), 2003, pp. 704–711.
[35] M. Balabanović and Y. Shoham, “Fab: content-based, collaborative recommendation,” Communications of the ACM, vol. 40, no. 3, pp. 66–72, 1997.
[36] P. Melville, R. J. Mooney, and R. Nagarajan, “Content-boosted collaborative filtering for improved recommendations,” AAAI/IAAI, vol. 23, pp. 187–192, 2002.
[37] S. Zhang, L. Yao, A. Sun, and Y. Tay, “Deep learning based recommender system: A survey and new perspectives,” ACM Computing Surveys (CSUR), vol. 52, no. 1, p. 5, 2019.
[38] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T.-S. Chua, “Neural collaborative filtering,” in Proceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2017, pp. 173–182.
[39] G. K. Dziugaite and D. M. Roy, “Neural network matrix factorization,” arXiv preprint arXiv:1511.06443, 2015.
[40] S. Li, J. Kawale, and Y. Fu, “Deep collaborative filtering via marginalized denoising autoencoder,” in Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. ACM, 2015, pp. 811–820.
[41] S. Sedhain, A. K. Menon, S. Sanner, and L. Xie, “Autorec: Autoencoders meet collaborative filtering,” in Proceedings of the 24th International Conference on World Wide Web. ACM, 2015, pp. 111–112.
[42] X. Zhao, L. Xia, L. Zhang, Z. Ding, D. Yin, and J. Tang, “Deep reinforcement learning for page-wise recommendations,” in Proceedings of the 12th ACM Conference on Recommender Systems. ACM, 2018, pp. 95–103.
[43] G. Zheng, F. Zhang, Z. Zheng, Y. Xiang, N. J. Yuan, X. Xie, and Z. Li, “Drn: A deep reinforcement learning framework for news recommendation,” in Proceedings of the 2018 World Wide Web Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2018, pp. 167–176.
[44] Q. Zhang, J. Wang, H. Huang, X. Huang, and Y. Gong, “Hashtag recommendation for multimodal microblog using co-attention network,” in IJCAI, 2017, pp. 3420–3426.
[45] H. Wang, S. Xingjian, and D.-Y. Yeung, “Collaborative recurrent autoencoder: Recommend while learning to fill in the blanks,” in Advances in Neural Information Processing Systems, 2016, pp. 415–423.
[46] R. Burke, “Integrating knowledge-based and collaborative-filtering recommender systems,” in Proceedings of the Workshop on AI and Electronic Commerce, 1999, pp. 69–72.
[47] T. D. Noia, V. C. Ostuni, P. Tomeo, and E. D. Sciascio, “Sprank: Semantic path-based ranking for top-n recommendations using linked open data,” ACM Transactions on Intelligent Systems and Technology (TIST), vol. 8, no. 1, p. 9, 2016.
[48] X. Yu, X. Ren, Y. Sun, Q. Gu, B. Sturt, U. Khandelwal, B. Norick, and J. Han, “Personalized entity recommendation: A heterogeneous information network approach,” in Proceedings of the 7th ACM international conference on Web search and data mining. ACM, 2014, pp. 283–292.
[49] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko, “Translating embeddings for modeling multi-relational data,” in Advances in Neural Information Processing Systems, 2013, pp. 2787–2795.
[50] Z. Wang, J. Zhang, J. Feng, and Z. Chen, “Knowledge graph embedding by translating on hyperplanes,” in Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014.
[51] Y. Lin, Z. Liu, M. Sun, Y. Liu, and X. Zhu, “Learning entity and relation embeddings for knowledge graph completion,” in Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
[52] G. Ji, S. He, L. Xu, K. Liu, and J. Zhao, “Knowledge graph embedding via dynamic mapping matrix,” in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), vol. 1, 2015, pp. 687–696.
[53] J. Feng, M. Huang, M. Wang, M. Zhou, Y. Hao, and X. Zhu, “Knowledge graph embedding by flexible translation,” in Fifteenth International Conference on the Principles of Knowledge Representation and Reasoning, 2016.
[54] T. Trouillon, J. Welbl, S. Riedel, É. Gaussier, and G. Bouchard, “Complex embeddings for simple link prediction,” in International Conference on Machine Learning, 2016, pp. 2071–2080.
[55] B. Yang, W.-t. Yih, X. He, J. Gao, and L. Deng, “Embedding entities and relations for learning and inference in knowledge bases,” arXiv preprint arXiv:1412.6575, 2014.
[56] Z. Sun, Z.-H. Deng, J.-Y. Nie, and J. Tang, “Rotate: Knowledge graph embedding by relational rotation in complex space,” 2019.
[57] F. Zhang, N. J. Yuan, D. Lian, X. Xie, and W.-Y. Ma, “Collaborative knowledge base embedding for recommender systems,” in Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. ACM, 2016, pp. 353–362.
[58] R. He, W.-C. Kang, and J. McAuley, “Translation-based recommendation,” in Proceedings of the Eleventh ACM Conference on Recommender Systems. ACM, 2017, pp. 161–169.
[59] H. Wang, F. Zhang, J. Wang, M. Zhao, W. Li, X. Xie, and M. Guo, “Ripplenet: Propagating user preferences on the knowledge graph for recommender systems,” in Proceedings of the 27th ACM International Conference on Information and Knowledge Management. ACM, 2018, pp. 417–426.
[60] Y. Zhang, G. Lai, M. Zhang, Y. Zhang, Y. Liu, and S. Ma, “Explicit factor models for explainable recommendation based on phrase-level sentiment analysis,” in Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval. ACM, 2014, pp. 83–92.
[61] Y. Zhang and X. Chen, “Explainable recommendation: A survey and new perspectives,” 2018.
[62] Y. Koren, R. Bell, and C. Volinsky, “Matrix factorization techniques for recommender systems,” Computer, no. 8, pp. 30–37, 2009.
[63] X. Chen, Z. Qin, Y. Zhang, and T. Xu, “Learning to rank features for recommendation over multiple categories,” in Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval. ACM, 2016, pp. 305–314.
[64] B. Abdollahi and O. Nasraoui, “Explainable matrix factorization for collaborative filtering,” in Proceedings of the 25th International Conference Companion on World Wide Web. International World Wide Web Conferences Steering Committee, 2016, pp. 5–6.
[65] J. McAuley and J. Leskovec, “Hidden factors and hidden topics: understanding rating dimensions with review text,” in Proceedings of the 7th ACM conference on Recommender systems. ACM, 2013, pp. 165–172.
[66] Y. Wu and M. Ester, “Flame: A probabilistic model combining aspect based opinion mining and collaborative filtering,” in Proceedings of the Eighth ACM International Conference on Web Search and Data Mining. ACM, 2015, pp. 199–208.
[67] W. Lin, S. A. Alvarez, and C. Ruiz, “Efficient adaptive-support association rule mining for recommender systems,” Data Mining and Knowledge Discovery, vol. 6, no. 1, pp. 83–105, 2002.
[68] J. Davidson, B. Liebald, J. Liu, P. Nandy, T. Van Vleet, U. Gargi, S. Gupta, Y. He, M. Lambert, B. Livingston et al., “The youtube video recommendation system,” in Proceedings of the fourth ACM conference on Recommender systems. ACM, 2010, pp. 293–296.
[69] F. Costa, S. Ouyang, P. Dolog, and A. Lawlor, “Automatic generation of natural language explanations,” in Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion. ACM, 2018, p. 57.
[70] S. Chang, F. M. Harper, and L. G. Terveen, “Crowd-based personalized natural language explanations for recommendations,” in Proceedings of the 10th ACM Conference on Recommender Systems. ACM, 2016, pp. 175–182.
[71] X. Chen, Y. Zhang, H. Xu, Y. Cao, Z. Qin, and H. Zha, “Visually explainable recommendation,” arXiv preprint arXiv:1801.10288, 2018.
[72] C. Chen, M. Zhang, Y. Liu, and S. Ma, “Neural attentional rating regression with review-level explanations,” in Proceedings of the 2018 World Wide Web Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2018, pp. 1583–1592.
[73] G. Peake and J. Wang, “Explanation mining: Post hoc interpretability of latent factor models for recommendation systems,” in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2018, pp. 2060–2069.
[74] H. Park, H. Jeon, J. Kim, B. Ahn, and U. Kang, “Uniwalk: Explainable and accurate recommendation for rating and network data,” arXiv preprint arXiv:1710.07134, 2017.
[75] Z. Liu, M. Sun, Y. Lin, and R. Xie, “Knowledge representation learning: A review,” Journal of Computer Research and Development, vol. 53, no. 2, pp. 247–261, 2016 (in Chinese).
[76] R. Fu, J. Guo, B. Qin, W. Che, H. Wang, and T. Liu, “Learning semantic hierarchies via word embeddings,” in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), vol. 1, 2014, pp. 1199–1209.
[77] R. Xie, Z. Liu, J. Jia, H. Luan, and M. Sun, “Representation learning of knowledge graphs with entity descriptions,” in Thirtieth AAAI Conference on Artificial Intelligence, 2016.
[78] Y. Lin, Z. Liu, H. Luan, M. Sun, S. Rao, and S. Liu, “Modeling relation paths for representation learning of knowledge bases,” arXiv preprint arXiv:1506.00379, 2015.