Describe the workflow you want to enable
Similar to TreeSHAP/TreeSHAP-IQ for Shapley values and interactions, I want to create `'SV'` explanations for machine learning models based on product kernels, such as Support Vector Machines (SVMs) and Gaussian Processes (GPs). This would open up many opportunities to compute Shapley values in application domains like HPO (heavy use of GPs) or uncertainty quantification (again, GPs are king).

Describe your proposed solution

Create a `ProductKernelExplainer` class, which is a subclass of `Explainer` and works akin to `shapiq.TreeExplainer`. The `ProductKernelExplainer` will expose an `explain` function that is basically a wrapper around an internal `ProductKernelComputer`'s `compute` function. There is a very nice paper with a research code implementation already out for this.

To do so, we would need to implement the following logic:
Implementations
- `ProductKernelExplainer` following the implementation of our other explainers.
- `ProductKernelComputer`, the main computation class to do the computation defined (basically what's implemented here), supporting:
  - `SVR` (SV Regressor)
  - `SVC` (SV Classifier)
  - `GaussianProcessRegressor`
  - `GaussianProcessClassifier` (the research implementation does not support this yet)
- `ProductKernelGame` (inside `shapiq_games`), similar to the `TreeGame`, which can be queried with any coalition S for a local explanation instance and model. This is useful for testing the implementation of `ProductKernelExplainer`/`ProductKernelComputer` against the `shapiq.ExactComputer`.

To test the implementation we would need to
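To make the intended shape concrete, here is a minimal sketch of the computation backend. All names (`ProductKernelComputer`, its constructor arguments, the neutral-factor treatment of absent features) are illustrative assumptions, not the actual shapiq API or the paper's exact method; the key idea is only that a product kernel k(x, z) = ∏ᵢ kᵢ(xᵢ, zᵢ) factorizes per feature, so coalitions can be evaluated by toggling individual kernel factors without retraining.

```python
import numpy as np


class ProductKernelComputer:
    """Hypothetical computation backend for product-kernel models.

    Assumes an RBF-style product kernel and a fitted dual representation
    f(x) = sum_j alpha_j * k(x, z_j); names and the baseline choice are
    illustrative, not the actual shapiq/paper implementation.
    """

    def __init__(self, dual_coef, support_vectors, gamma=1.0):
        self.dual_coef = np.asarray(dual_coef, dtype=float)        # alpha_j
        self.support_vectors = np.asarray(support_vectors, dtype=float)
        self.gamma = gamma

    def value(self, x, coalition):
        """Model output when only features in `coalition` are present.

        Absent features contribute a neutral factor of 1 to the product
        kernel (one simple baseline choice for the sketch; the paper
        defines the exact treatment of absent features).
        """
        x = np.asarray(x, dtype=float)
        # Per-feature RBF factors: exp(-gamma * (x_i - z_ji)^2), shape (n_sv, n_features).
        factors = np.exp(-self.gamma * (self.support_vectors - x) ** 2)
        mask = np.zeros(x.shape[0], dtype=bool)
        mask[list(coalition)] = True
        # Replace absent features' factors by 1 -> product runs over present features only.
        factors = np.where(mask, factors, 1.0)
        k = factors.prod(axis=1)                                   # (n_sv,)
        return float(self.dual_coef @ k)
```

With the full coalition this reproduces the ordinary kernel prediction; with the empty coalition every factor is 1 and the output reduces to the sum of the dual coefficients, which is what makes coalition queries cheap for this model class.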
Tests
- `ProductKernelExplainer` output for the cross-product of models and class labels vs. the implementation here, similar to how we test against shap's implementation
- `ProductKernelExplainer` against the output of `shapiq.ExactComputer` using the `ProductKernelGame` for some unit tests
- Valid `ProductKernelExplainer` indices (only `'SV'` for the time being), similar to the other integration tests

Additionally, it would be nice to also have these implementations ready:
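The exact-computer comparison test could look roughly like the sketch below: treat the game as a plain callable from coalitions to values and check the explainer's output against brute-force Shapley values. The `shapley_values` helper here is a stand-in for what `shapiq.ExactComputer` would produce (only feasible for small player counts, which is fine for unit tests); the toy game is an assumption for illustration.

```python
from itertools import combinations
from math import factorial


def shapley_values(game, n):
    """Brute-force Shapley values for a game v: frozenset -> float.

    Stand-in for an exact computer (e.g. shapiq.ExactComputer) in unit
    tests; O(2^n), so only suitable for small n.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for size in range(n):
            # Shapley weight for coalitions of this size.
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                S = frozenset(S)
                phi[i] += weight * (game(S | {i}) - game(S))
    return phi
```

A unit test would then build the game from a `ProductKernelGame`-like callable and assert that the explainer's `'SV'` values match `shapley_values(game, n)` up to tolerance; the efficiency property (the values summing to v(full) − v(empty)) is a useful extra sanity check.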
Additional Info
- `ProductKernelExplainer` should also be returned from calling the bare `Explainer` object (similar to how `TabPFNExplainer` gets returned for TabPFN models, or `TreeExplainer` for tree-based models, called here).

Describe alternatives you've considered, if relevant
No response
Additional context
The method is proposed in a very nice paper linked below.
References:
Impact
High (Major improvement)