Adapters¶
from_shap_explanation ¶
```python
from_shap_explanation(
    explanation: Any,
    *,
    feature_names: Sequence[str] | None = None,
    class_index: int | None = None,
) -> tuple[ndarray, list[str]]
```
Convert a SHAP Explanation (or compatible object) to (values, names).
| PARAMETER | DESCRIPTION |
|---|---|
| `explanation` | Either a SHAP `Explanation` or a compatible object. TYPE: `Any` |
| `feature_names` | Required only when the explanation object does not carry feature names of its own. TYPE: `Sequence[str] \| None` DEFAULT: `None` |
| `class_index` | For multi-class explanations (3D `values` arrays), selects the class whose attributions are extracted. TYPE: `int \| None` DEFAULT: `None` |
Source code in src/concept_graph_xai/adapters/shap.py
from_permutation_importance ¶
```python
from_permutation_importance(
    result: Any,
    feature_names: Sequence[str],
    *,
    use: str = 'importances_mean',
) -> tuple[ndarray, list[str]]
```
Convert a sklearn Bunch (from permutation_importance) to arrays.
| PARAMETER | DESCRIPTION |
|---|---|
| `result` | The Bunch returned by `permutation_importance`. TYPE: `Any` |
| `feature_names` | Names matching the order of features used during the permutation run. TYPE: `Sequence[str]` |
| `use` | Which attribute on the Bunch to expose. Defaults to `'importances_mean'`. TYPE: `str` DEFAULT: `'importances_mean'` |
Source code in src/concept_graph_xai/adapters/sklearn_perm.py
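A sketch of the same idea, assuming the Bunch shape returned by `sklearn.inspection.permutation_importance` (attributes `importances_mean` and `importances_std`). A `SimpleNamespace` stands in for the real Bunch so the example does not need scikit-learn; `from_permutation_importance_sketch` is a hypothetical name, not the library function.

```python
from types import SimpleNamespace

import numpy as np


def from_permutation_importance_sketch(result, feature_names, *, use="importances_mean"):
    """Hypothetical adapter: pick one array off a permutation_importance Bunch."""
    values = np.asarray(getattr(result, use))
    if len(values) != len(feature_names):
        raise ValueError("feature_names length does not match the result")
    return values, list(feature_names)


# Stand-in for the Bunch from sklearn.inspection.permutation_importance.
bunch = SimpleNamespace(
    importances_mean=np.array([0.12, 0.03, 0.40]),
    importances_std=np.array([0.01, 0.01, 0.05]),
)
vals, names = from_permutation_importance_sketch(bunch, ["f0", "f1", "f2"])
print(names)  # ['f0', 'f1', 'f2']
```

Setting `use='importances_std'` would expose the spread of the permutation runs instead of the mean, which is why the attribute name is a parameter rather than hard-coded.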
from_feature_importances_ ¶
Pull `model.feature_importances_` into the canonical `(values, names)` tuple.
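The page omits this function's signature, so the following is only a plausible sketch of the described behaviour: read `model.feature_importances_` (as exposed by fitted tree-based scikit-learn estimators) and return `(values, names)`. The function name suffix `_sketch`, the optional `feature_names` parameter, and the `f{i}` fallback names are assumptions for illustration.

```python
from types import SimpleNamespace

import numpy as np


def from_feature_importances_sketch(model, feature_names=None):
    """Hypothetical adapter: canonicalize model.feature_importances_."""
    values = np.asarray(model.feature_importances_)
    if feature_names is not None:
        names = list(feature_names)
    else:
        # Assumption: fall back to positional names when none are given.
        names = [f"f{i}" for i in range(len(values))]
    return values, names


# Stand-in for a fitted estimator exposing feature_importances_.
model = SimpleNamespace(feature_importances_=[0.2, 0.8])
vals, names = from_feature_importances_sketch(model)
print(names)  # ['f0', 'f1']
```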