Source: R/variable_importance.R (variable_importance_split.Rd)
Description

Variable importance can be calculated based on model-specific and model-agnostic approaches.
Usage

variable_importance_split(object, ...)

# S3 method for default
variable_importance_split(object, ...)

# S3 method for fitted_DL_reg_1
variable_importance_split(object)

# S3 method for fitted_DL_reg_2
variable_importance_split(object)

# S3 method for fitted_xgb_reg_1
variable_importance_split(
  object,
  path_plot,
  type = "model_specific",
  permutations = 10,
  unseen_data = F
)

# S3 method for fitted_xgb_reg_2
variable_importance_split(object)

# S3 method for fitted_stacking_reg_1
variable_importance_split(object)

# S3 method for fitted_stacking_reg_2
variable_importance_split(object)

# S3 method for fitted_stacking_reg_3
variable_importance_split(object)

# S3 method for fitted_rf_reg_1
variable_importance_split(object)

# S3 method for fitted_rf_reg_2
variable_importance_split(object)
Arguments

object | an object of class fitted_DL_reg_1, fitted_DL_reg_2, fitted_xgb_reg_1, fitted_xgb_reg_2, fitted_stacking_reg_1, fitted_stacking_reg_2, fitted_stacking_reg_3, fitted_rf_reg_1 or fitted_rf_reg_2.
path_plot | path where the variable importance plot should be saved.
type | whether variable importance is computed with a model-specific or a model-agnostic (permutation-based) approach. By default, "model_specific".
permutations | number of permutations used by the permutation-based approach (see the sketch below for the general idea). By default, equal to 10.
unseen_data | logical. By default, FALSE.
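For intuition about the model-agnostic option, the sketch below shows the general permutation-importance idea described by Breiman (2001) and Molnar (2022): each predictor is shuffled in turn and the resulting increase in prediction error is taken as its importance. This is a generic illustration, not the package's internal implementation; the model, data and error metric are placeholders.

# Generic permutation-importance sketch (illustrative only, not the package's code).
# 'model' is any fitted regression model with a predict() method,
# 'data' a data.frame of predictors, 'y' the observed response.
permutation_importance <- function(model, data, y, permutations = 10) {
  rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))
  baseline <- rmse(y, predict(model, data))
  sapply(names(data), function(var) {
    losses <- replicate(permutations, {
      shuffled <- data
      shuffled[[var]] <- sample(shuffled[[var]])  # break the link between predictor and response
      rmse(y, predict(model, shuffled))
    })
    mean(losses) - baseline  # importance = average increase in RMSE
  })
}

# Purely illustrative example with a linear model on built-in data:
fit <- lm(mpg ~ ., data = mtcars)
permutation_importance(fit, mtcars[, -1], mtcars$mpg, permutations = 10)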
References

Breiman L (2001). "Random forests." Machine Learning, 45(1), 5-32.

Molnar C (2022). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, 2nd edition. Available online, accessed on 30 March 2022. https://christophm.github.io/interpretable-ml-book.
Author

Cathy C. Westhues <cathy.jubin@uni-goettingen.de>
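As a usage illustration, a minimal call sketch for the fitted_xgb_reg_1 method. It assumes fitted_split is an object of that class produced earlier in the package's fitting workflow; the object name and output path are placeholders, not values documented here.

# 'fitted_split' is assumed to be an object of class "fitted_xgb_reg_1";
# the path is a placeholder directory for the importance plot.
variable_importance_split(
  object       = fitted_split,
  path_plot    = "output/importance_plots",
  type         = "model_specific",   # default; use the model-specific measure
  permutations = 10,                 # only relevant for the permutation-based approach
  unseen_data  = FALSE
)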