Interpreting Linear Beta Coefficients Alongside Feature Importances in Machine Learning

9 Pages. Posted: 29 Mar 2021

James Ming Chen

Michigan State University - College of Law

Date Written: March 1, 2021

Abstract

Machine-learning regression models lack the interpretability of their conventional linear counterparts. Tree- and forest-based models offer feature importances: a vector of nonnegative scores, summing to one, that indicates the relative impact of each predictive variable on a model’s results. This brief note describes how to interpret the beta coefficients of the corresponding linear model so that they may be compared directly to feature importances in machine learning.
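A minimal sketch of the comparison the note describes, assuming a scikit-learn workflow (this is an illustration, not the paper's own code): the predictors are standardized so that ordinary least-squares coefficients become beta coefficients, the absolute betas are rescaled to sum to one, and the result is printed alongside a random forest's impurity-based feature importances. The California housing dataset and all variable names below are placeholders.

import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Illustrative dataset; any numeric regression data works the same way.
data = fetch_california_housing()
X, y, names = data.data, data.target, data.feature_names

# Standardize the predictors so OLS coefficients become beta coefficients.
X_std = StandardScaler().fit_transform(X)
betas = LinearRegression().fit(X_std, y).coef_

# Rescale |beta| to sum to one, matching the scale of feature importances;
# the normalization also makes the scaling of y immaterial.
beta_weights = np.abs(betas) / np.abs(betas).sum()

# Impurity-based importances from a forest model (these already sum to one).
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

for name, b, fi in zip(names, beta_weights, forest.feature_importances_):
    print(f"{name:>10}  beta weight: {b:.3f}  forest importance: {fi:.3f}")

Because both vectors are nonnegative and sum to one, they can be read side by side on the same zero-to-one scale, which is the point of the comparison.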

Keywords: machine learning, feature importances, linear regression, beta coefficients

JEL Classification: C18, C33

Suggested Citation

Chen, James Ming, Interpreting Linear Beta Coefficients Alongside Feature Importances in Machine Learning (March 1, 2021). Available at SSRN: https://ssrn.com/abstract=3795099 or http://dx.doi.org/10.2139/ssrn.3795099

James Ming Chen (Contact Author)

Michigan State University - College of Law

318 Law College Building
East Lansing, MI 48824-1300
United States
