Accuracy vs. Interpretability in Random Forests

Status: Under revision

Nicholas Johnston (City University London), Dr. Tillman Weyde (City University London), Dr. Gregory Slabaugh (City University London), Maximilian Hahn (Enrion GmbH), David Klein (Enrion GmbH)


Powerful machine learning models, such as Random Forests, are typically hard for users to understand. Simpler methods, such as small decision trees or generalised linear models, are more readily interpretable but lack predictive accuracy. We propose a method to determine feature importance values as partial explanations for Random Forest models. In a small survey of insurance salespeople, we found that ease and speed of understanding mattered more to subjects than completeness of the model explanation, although both qualities were desired. Finally, we obtained an initial quantification of the relative performance threshold that would persuade our subjects to switch from a simple, interpretable model to a complex one.
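As background to the abstract's notion of feature importance, the sketch below shows a standard baseline: the mean-decrease-in-impurity importances built into scikit-learn's Random Forest. This is a generic illustration on synthetic data, not the paper's proposed method, and the dataset parameters are arbitrary choices.

```python
# Minimal sketch: extracting per-feature importance values from a
# Random Forest via scikit-learn's built-in mean-decrease-in-impurity
# scores. Illustrative only; not the method proposed in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic classification task with 8 features (arbitrary for the demo).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# One importance value per feature, normalised to sum to 1.
importances = forest.feature_importances_
for i, imp in enumerate(importances):
    print(f"feature {i}: {imp:.3f}")
```

Impurity-based importances are cheap to compute but can be biased towards high-cardinality features; permutation importance is a common alternative when that bias matters.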