Which performance metric is better R2 or adjusted R2?

Adjusted R2, in most cases, because it accounts for the number of predictors in the model. Plain R2 never decreases when a predictor is added, so it can appear to improve simply because more predictors were included, even if they add no real explanatory power.

The choice between R2 (coefficient of determination) and adjusted R2 depends on the specific context of the model evaluation.

  • R2 (Coefficient of Determination): This metric measures the proportion of the variance in the dependent variable that is explained by the independent variables. For a linear model with an intercept, it ranges from 0 to 1 on the training data, where 1 indicates a perfect fit.
  • Adjusted R2: This is an extension of R2 that adjusts for the number of predictors in the model. While R2 never decreases as more predictors are added, adjusted R2 penalizes the inclusion of irrelevant predictors. It therefore provides a more reliable measure when comparing models with different numbers of predictors (see the sketch below).
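
A minimal sketch of how the two metrics are computed, assuming scikit-learn is available and using made-up numbers for illustration; the helper name adjusted_r2 is hypothetical, not a library function:

```python
from sklearn.metrics import r2_score

def adjusted_r2(y_true, y_pred, n_predictors):
    """Adjusted R2 = 1 - (1 - R2) * (n - 1) / (n - p - 1),
    where n is the sample size and p the number of predictors."""
    r2 = r2_score(y_true, y_pred)
    n = len(y_true)
    return 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)

# Toy usage with made-up values
y_true = [3.0, 5.0, 7.5, 9.0, 11.0, 12.5]
y_pred = [2.8, 5.2, 7.0, 9.3, 10.8, 12.9]
print(r2_score(y_true, y_pred))        # plain R2
print(adjusted_r2(y_true, y_pred, 2))  # adjusted for 2 predictors
```

Because the correction term (n - 1) / (n - p - 1) grows with the number of predictors p, adjusted R2 is always at or below R2 and only rises when a new predictor improves the fit by more than the degrees-of-freedom cost.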

In general, if you are comparing models with different numbers of predictors or assessing the impact of adding new predictors to a model, adjusted R2 is the better metric. It provides a more balanced evaluation, preventing the apparent fit from being inflated by unnecessary variables, as the sketch below illustrates.
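
To illustrate, here is a small experiment, assuming NumPy and scikit-learn are installed: a target driven by one real predictor is fitted twice, once with that predictor alone and once with extra columns of pure noise. The exact numbers depend on the random draw, but in-sample R2 cannot go down when the noise columns are added, while adjusted R2 gives them little or no credit:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200

# One genuinely informative predictor plus noise in the target
x_real = rng.normal(size=(n, 1))
y = 3 * x_real[:, 0] + rng.normal(scale=1.0, size=n)

def fit_and_score(X, y):
    """Fit OLS and return in-sample R2 and adjusted R2."""
    model = LinearRegression().fit(X, y)
    r2 = r2_score(y, model.predict(X))
    p = X.shape[1]
    adj = 1 - (1 - r2) * (len(y) - 1) / (len(y) - p - 1)
    return r2, adj

# Model 1: only the real predictor
r2_a, adj_a = fit_and_score(x_real, y)

# Model 2: real predictor plus 10 columns of pure noise
X_noisy = np.hstack([x_real, rng.normal(size=(n, 10))])
r2_b, adj_b = fit_and_score(X_noisy, y)

print(f"real only:  R2={r2_a:.4f}  adjusted R2={adj_a:.4f}")
print(f"plus noise: R2={r2_b:.4f}  adjusted R2={adj_b:.4f}")
# R2 rises mechanically with the noise columns; adjusted R2
# discounts that gain, so it stays roughly flat or falls.
```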

However, if you are solely interested in the goodness of fit of a single model with a fixed set of predictors, R2 might be sufficient.

So, the correct answer depends on the goals of your analysis. If you prioritize simplicity and have a fixed set of predictors, R2 may be appropriate. If you want a more nuanced evaluation that accounts for model complexity, adjusted R2 is the better choice.