
Question about reading back-normalized SHAP values after normalizing data #16151

Open · MoonCapture opened this issue Apr 11, 2024 · 1 comment

@MoonCapture

This is my code. When training my H2O AutoML model, I first normalize the data. How do I get the SHAP values back on the original (denormalized) scale when I interpret the model? (Win 11, Python, H2O 3.46.0.1)
Thanks!

import matplotlib.pyplot as plt

shap_01_plot = best_model_01.shap_explain_row_plot(df_test_normalized, row_index=0, background_frame=df_train_normalized)
fig = plt.gcf()
fig.set_size_inches(5, 5)
plt.grid(ls='--')
plt.title('')
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)

[screenshot of the SHAP explain-row plot]

@tomasfryda (Contributor)

@MoonCapture There is no parameter to do that, so you would have to implement it yourself.

I would approach this by calculating a linear approximation, similarly to what is done in Generalized DeepSHAP.

You will need to get the per-reference SHAP predictions with best_model_01.predict_contributions(df_test_normalized[0,:], background_frame=df_train_normalized, output_space=True, output_per_reference=True).

First, you have to ensure that the SHAP values are in the same space as the predictions (i.e., if the model uses a link function, you might have to apply the inverse link function to the SHAP values); this is what the parameter output_space=True does.

Then you will need the contribution to the change of prediction against every single point from background_frame; that's what output_per_reference is for.

Relevant part of the doc string:

:param output_space: If True, linearly scale the contributions so that they sum up to the prediction.
                     NOTE: This will result only in approximate SHAP values even if the model supports exact SHAP calculation.
                     NOTE: This will not have any effect if the estimator doesn't use a link function.
:param output_per_reference: If True, return baseline SHAP, i.e., contribution for each data point for each reference from the background_frame.
                             If False, return TreeSHAP if no background_frame is provided, or marginal SHAP if background frame is provided.
                             Can be used only with background_frame.

Next, you denormalize the SHAP values. This depends on how you normalize the data. If you can invert the normalization just by multiplication, then it's simple: just multiply all values. If you also need an additive shift, apply it only to the Bias, after the multiplication. If the normalization procedure you use is more complicated, use eq. 3 from "Explaining a series of models by propagating Shapley values" (or you can check my implementation of simplified G-DeepSHAP in our StackedEnsembles; simplified because it is applied to only two layers (base models -> metalearner)).
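To illustrate the simple affine case: if the target was normalized as y_norm = (y - mu) / sigma, then contributions get multiplied by sigma and the additive shift mu goes entirely into the Bias, which preserves additivity. A minimal pandas sketch (the column names "RowIdx"/"BackgroundRowIdx"/"Bias" follow the H2O per-reference output; the feature names, numbers, and the denormalize_shap helper are made up for illustration):

```python
import pandas as pd

# Toy per-reference SHAP frame, shaped like the predict_contributions
# output with output_per_reference=True (values are hypothetical).
shap_pred = pd.DataFrame({
    "RowIdx": [0, 0],
    "BackgroundRowIdx": [0, 1],
    "x1": [0.10, 0.25],
    "x2": [-0.05, 0.15],
    "Bias": [0.40, 0.20],
})

# Assume a z-score normalization of the target: y_norm = (y - mu) / sigma.
mu, sigma = 50.0, 10.0

def denormalize_shap(df, mu, sigma):
    """Undo an affine target normalization on per-reference SHAP values.

    Contributions are scaled by sigma; the additive shift mu is applied
    only to the Bias (after the multiplication), so that
    sum(contributions) + Bias still equals the denormalized prediction.
    """
    out = df.copy()
    feature_cols = [c for c in df.columns
                    if c not in ("RowIdx", "BackgroundRowIdx", "Bias")]
    out[feature_cols] = out[feature_cols] * sigma
    out["Bias"] = out["Bias"] * sigma + mu
    return out

denorm_shap_pred = denormalize_shap(shap_pred, mu, sigma)
```

Each denormalized row then sums to sigma * (normalized prediction) + mu, i.e., the denormalized prediction.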

Next, you should check that the Bias equals the denormalized prediction on the corresponding background-frame point.
Pseudocode:

abs(denormalize(best_model_01.predict(background_frame[i, :])) - denorm_shap_pred[denorm_shap_pred["BackgroundRowIdx"]==i, "Bias"]) < 1e-6

Then you can also check that the row sums of denorm_shap_pred (excluding RowIdx and BackgroundRowIdx) are roughly equal to the denormalized prediction (i.e., denormalized contributions + denormalized bias == denormalized prediction).
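Both sanity checks above can be sketched in pandas on a hypothetical denormalized frame (the column names follow the H2O output; the numbers and the denorm_bg_pred lookup are made up for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical denormalized per-reference SHAP frame.
denorm_shap_pred = pd.DataFrame({
    "RowIdx": [0, 0],
    "BackgroundRowIdx": [0, 1],
    "x1": [1.0, 2.0],
    "x2": [-0.5, 0.5],
    "Bias": [54.0, 52.0],
})

# Hypothetical denormalized model predictions on the background rows,
# i.e., denormalize(best_model_01.predict(background_frame[i, :])).
denorm_bg_pred = {0: 54.0, 1: 52.0}

# Check 1: Bias equals the denormalized prediction on the matching
# background row.
for i, bias in zip(denorm_shap_pred["BackgroundRowIdx"],
                   denorm_shap_pred["Bias"]):
    assert abs(denorm_bg_pred[i] - bias) < 1e-6

# Check 2: contributions + Bias give the denormalized prediction on the
# explained row, so every row sum should be (roughly) the same value.
feature_cols = ["x1", "x2", "Bias"]
row_sums = denorm_shap_pred[feature_cols].sum(axis=1)
assert np.allclose(row_sums, row_sums.iloc[0], atol=1e-6)
```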

Next, if you're confident that those values are close enough (depending on the model, the epsilon can range from 1e-6 up to 1e-3; XGBoost uses floats for prediction and doubles for contributions in our implementation, so there the epsilon will be closer to 1e-3), you take the average contribution across the background frame. Something like:

denorm_shap_pred.drop("BackgroundRowIdx").groupby("RowIdx").mean()

And that should be the result you are looking for. It's not an exact SHAP value, since G-DeepSHAP gives only an approximation when there is some non-linearity, but at least you can compute it in reasonable time.
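The averaging step can be sketched in pandas on a toy frame (column names follow the H2O output; the numbers are made up for illustration; note that pandas, unlike H2OFrame, needs drop(columns=...)):

```python
import pandas as pd

# Hypothetical denormalized per-reference SHAP values for one explained
# row against two background references.
denorm_shap_pred = pd.DataFrame({
    "RowIdx": [0, 0],
    "BackgroundRowIdx": [0, 1],
    "x1": [1.0, 2.0],
    "x2": [-0.5, 0.5],
    "Bias": [54.0, 52.0],
})

# Average over the background references to get one attribution per
# explained row (indexed by RowIdx).
final_shap = (denorm_shap_pred
              .drop(columns="BackgroundRowIdx")
              .groupby("RowIdx")
              .mean())
# final_shap holds one averaged contribution per feature plus the
# averaged Bias; their sum is still the denormalized prediction.
```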
