Banerjee, Mousumi, Ying Ding, and Anne-Michelle Noone. 2012.
“Identifying Representative Trees from Ensembles.” Statistics in Medicine 31 (15): 1601–16.
https://doi.org/10.1002/sim.4492.
Bücker, Michael, Gero Szepannek, Alicja Gosiewska, and Przemyslaw Biecek. 2021.
“Transparency, Auditability and Explainability of Machine Learning Models in Credit Scoring.” Journal of the Operational Research Society, 1–21.
https://doi.org/10.1080/01605682.2021.1922098.
Cowan, Nelson. 2010.
“The Magical Mystery Four: How Is Working Memory Capacity Limited, and Why?” Current Directions in Psychological Science 19 (1): 51–57.
https://doi.org/10.1177/0963721409359277.
European Commission. 2024.
“EU Artificial Intelligence Act.” https://artificialintelligenceact.eu/the-act/.
Fernández-Delgado, Manuel, Eva Cernadas, Senén Barro, and Dinani Amorim. 2014. “Do We Need Hundreds of Classifiers to Solve Real World Classification Problems?” Journal of Machine Learning Research 15 (1): 3133–81.
Friedman, Jerome. 2001. “Greedy Function Approximation: A Gradient Boosting Machine.” Annals of Statistics 29: 1189–1232.
Gosiewska, Alicja, and Przemyslaw Biecek. 2019.
“Do Not Trust Additive Explanations.” https://arxiv.org/pdf/1903.11420.
Harrison, D., and D. L. Rubinfeld. 1978. “Hedonic Prices and the Demand for Clean Air.” Journal of Environmental Economics and Management 5: 81–102.
James, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani. 2019. An Introduction to Statistical Learning. Second Edition. Springer.
Miller, George. 1956.
“The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information.” Psychological Review 63 (2): 81–97.
https://doi.org/10.1037/h0043158.
Molnar, Christoph. 2022.
Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. 2nd ed.
https://christophm.github.io/interpretable-ml-book.
Molnar, Christoph, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, and Bernd Bischl. 2022.
“General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models.” In
xxAI - Beyond Explainable AI: International Workshop at ICML 2020, edited by Andreas Holzinger, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, and Wojciech Samek. Springer.
https://doi.org/10.1007/978-3-031-04083-2_4.
Probst, Philipp, Anne-Laure Boulesteix, and Bernd Bischl. 2021. “Tunability: Importance of Hyperparameters of Machine Learning Algorithms.” Journal of Machine Learning Research 20 (1): 1934–65.
Rudin, Cynthia. 2019.
“Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.”
https://arxiv.org/abs/1811.10154.
Szepannek, Gero. 2017.
“On the Practical Relevance of Modern Machine Learning Algorithms for Credit Scoring Applications.” WIAS Report Series 29: 88–96.
https://doi.org/10.20347/wias.report.29.
Szepannek, Gero, and Karsten Lübke. 2022.
“Explaining Artificial Intelligence with Care.” KI - Künstliche Intelligenz.
https://doi.org/10.1007/s13218-022-00764-8.
———. 2023.
“How Much Do We See? On the Explainability of Partial Dependence Plots for Credit Risk Scoring.” Argumenta Oeconomica 50.
https://doi.org/10.15611/aoe.2023.1.07.
Therneau, Terry M., and Elizabeth J. Atkinson. 2015.
“An Introduction to Recursive Partitioning Using the RPART Routines.”
https://www.biostat.wisc.edu/~kbroman/teaching/statgen/2004/refs/therneau.pdf.
Woźnica, Katarzyna, Katarzyna Pękala, Hubert Baniecki, Wojciech Kretowicz, Elżbieta Sienkiewicz, and Przemysław Biecek. 2021.
“Do Not Explain Without Context: Addressing the Blind Spot of Model Explanations.” https://arxiv.org/pdf/2105.13787.
Wright, Marvin N., and Andreas Ziegler. 2017.
“ranger: A Fast Implementation of Random Forests for High Dimensional Data in C++ and R.” Journal of Statistical Software 77 (1): 1–17.
https://doi.org/10.18637/jss.v077.i01.