Variable importance can be calculated using either model-specific or model-agnostic approaches.

variable_importance_split(object, ...)

# S3 method for default
variable_importance_split(object, ...)

# S3 method for fitted_DL_reg_1
variable_importance_split(object)

# S3 method for fitted_DL_reg_2
variable_importance_split(object)

# S3 method for fitted_xgb_reg_1
variable_importance_split(
  object,
  path_plot,
  type = "model_specific",
  permutations = 10,
  unseen_data = FALSE
)

# S3 method for fitted_xgb_reg_2
variable_importance_split(object)

# S3 method for fitted_stacking_reg_1
variable_importance_split(object)

# S3 method for fitted_stacking_reg_2
variable_importance_split(object)

# S3 method for fitted_stacking_reg_3
variable_importance_split(object)

# S3 method for fitted_rf_reg_1
variable_importance_split(object)

# S3 method for fitted_rf_reg_2
variable_importance_split(object)

Arguments

object

An object of class res_fitted_split.

path_plot

Path where the plot of variable importance scores should be saved.

type

Character string, either "model_specific" or "model_agnostic", indicating whether importance scores should be extracted from the fitted model itself or computed with a model-agnostic (permutation-based) method. By default, "model_specific".

permutations

Number of permutations used for the model-agnostic (permutation-based) approach. By default, equal to 10.

unseen_data

Logical, by default FALSE. Whether the variable importance should be evaluated on data not used to train the model.

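The model-agnostic option, together with the permutations argument and the cited references, corresponds to permutation importance: a predictor's values are shuffled and the resulting loss in predictive accuracy is recorded, averaged over the number of permutations. A minimal, self-contained sketch of the idea in base R (for illustration only, not the package's internal implementation):

# Illustration of permutation importance: shuffle each predictor and
# record the average increase in RMSE over the baseline error.
set.seed(1)
dat   <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
dat$y <- 2 * dat$x1 + 0.2 * dat$x2 + rnorm(200)
fit   <- lm(y ~ x1 + x2, data = dat)

rmse     <- function(obs, pred) sqrt(mean((obs - pred)^2))
baseline <- rmse(dat$y, predict(fit, dat))

importance <- sapply(c("x1", "x2"), function(var) {
  mean(replicate(10, {                  # 10 permutations, as in the default
    shuffled        <- dat
    shuffled[[var]] <- sample(shuffled[[var]])
    rmse(dat$y, predict(fit, shuffled)) - baseline
  }))
})
importance  # larger values indicate predictors the model relies on more
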
References

Breiman L (2001). “Random forests.” Machine Learning, 45(1), 5-32.

Molnar C (2022). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, 2nd edition. Available online, accessed on 30 March 2022. https://christophm.github.io/interpretable-ml-book.

Author

Cathy C. Westhues cathy.jubin@uni-goettingen.de
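A minimal, hypothetical illustration of the two calculation modes. Here fitted_split is a placeholder for an existing object of class res_fitted_split (e.g. containing a fitted gradient boosting model, for which all of the arguments below apply), and "outputs/" is a placeholder path for the plots; neither is created on this page.

# Model-specific importance scores extracted from the fitted model itself
vip_specific <- variable_importance_split(
  object = fitted_split,
  path_plot = "outputs/",
  type = "model_specific"
)

# Model-agnostic (permutation-based) importance with 10 permutations,
# evaluated on data not used for training
vip_agnostic <- variable_importance_split(
  object = fitted_split,
  path_plot = "outputs/",
  type = "model_agnostic",
  permutations = 10,
  unseen_data = TRUE
)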