We don't need SHAP values for every class.
The explanations given by SHAP values should be passed as shap_values[label] so that the class dimension is not propagated. The current approach works, but it is hard to grasp and it breaks the chaining of SHAP-explained models, which is a concept smell.
SHAP explanations for a given point only make sense if they are the explanations for the class the model actually predicts.
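A minimal sketch of what "keep only the predicted class" could look like (not the library's current API): it assumes a scikit-learn classifier and the `shap` package, and the names `model`, `X`, `explained` are illustrative only.

```python
import numpy as np
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)

# Depending on the shap version, multiclass output is either a list of
# per-class arrays or a single array with a trailing class axis.
if isinstance(sv, list):
    sv = np.stack(sv, axis=-1)        # -> (n_samples, n_features, n_classes)

predicted = model.predict(X)          # predicted label per sample

# Keep only the explanation of the predicted class for each point,
# collapsing the per-class dimension.
explained = sv[np.arange(len(X)), :, predicted]   # (n_samples, n_features)
```

With this, downstream consumers see a single (n_samples, n_features) matrix instead of a per-class structure, which is what makes chaining explained models straightforward.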