[1] Jerome H. Friedman. “Greedy Function Approximation: A Gradient Boosting Machine.” The Annals of Statistics, Vol. 29, No. 5 (2001), pp. 1189-1232.
[2] Leo Breiman. “Random Forests.” Machine Learning, Vol. 45, No. 1 (2001), pp. 5-32.
[3] Aaron Fisher, Cynthia Rudin, Francesca Dominici. “All Models are Wrong, but Many are Useful: Variable Importance for Black-Box, Proprietary, or Misspecified Prediction Models, using Model Class Reliance.” arXiv:1801.01489 (2018).
[4] Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” arXiv:1602.04938 (2016).
[5] Lloyd S. Shapley. “A Value for n-Person Games.” Contributions to the Theory of Games, Vol. 2, No. 28 (1953), pp. 307-317.
[6] David M. Allen. “The Relationship between Variable Selection and Data Augmentation and a Method for Prediction.” Technometrics, Vol. 16, No. 1 (1974), pp. 125-127.
[7] R. Dennis Cook. “Detection of Influential Observation in Linear Regression.” Technometrics, Vol. 19, No. 1 (1977), pp. 15-18.
[8] Sandra Wachter, Brent Mittelstadt, Chris Russell. “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR.” arXiv:1711.00399 (2017).
[9] Tianqi Chen, Carlos Guestrin. “XGBoost: A Scalable Tree Boosting System.” arXiv:1603.02754 (2016).
[10] Scott M. Lundberg, Su-In Lee. “A Unified Approach to Interpreting Model Predictions.” Advances in Neural Information Processing Systems 30 (2017).
[11] Jerome H. Friedman. “Greedy Function Approximation: A Gradient Boosting Machine.” The Annals of Statistics, Vol. 29, No. 5 (2001), pp. 1189-1232.
[12] Erik Strumbelj, Igor Kononenko. “An Efficient Explanation of Individual Classifications using Game Theory.” The Journal of Machine Learning Research, Vol. 11 (2010), pp. 1-18.
[13] David Martens, Foster Provost. “Explaining Data-Driven Document Classifications.” MIS Quarterly, Vol. 38, No. 1 (2014), pp. 73-100.
[14] Kjersti Aas, Martin Jullum, Anders Løland. “Explaining Individual Predictions when Features are Dependent: More Accurate Approximations to Shapley Values.” arXiv:1903.10464 (2019).
[15] N. V. Chawla, K. W. Bowyer, L. O. Hall, W. P. Kegelmeyer. “SMOTE: Synthetic Minority Over-sampling Technique.” Journal of Artificial Intelligence Research, Vol. 16 (2002).