ShapBPT: Image Feature Attributions using Data-Aware Binary Partition Trees
Published in AAAI-26 | 40th Annual AAAI Conference on Artificial Intelligence, 2026
Pixel-level feature attributions play a key role in Explainable Computer Vision (XCV) by revealing how visual features influence model predictions. While hierarchical Shapley methods based on the Owen formula offer a principled explanation framework, existing approaches overlook the multiscale and morphological structure of images, resulting in inefficient computation and weak semantic alignment.
To bridge this gap, we introduce ShapBPT, a data-aware XCV method that integrates hierarchical Shapley values with a Binary Partition Tree (BPT) representation of images. By assigning Shapley coefficients directly to a multiscale, image-adaptive hierarchy, ShapBPT produces explanations that align naturally with intrinsic image structures while significantly reducing computational cost. Experimental results demonstrate improved efficiency and structural faithfulness compared to existing XCV methods, and a 20-subject user study confirms that ShapBPT explanations are consistently preferred by humans.
- Main Technical Track: ShapBPT for improved Image Feature Attributions using Binary Partition Trees
- Conference: AAAI-2026 (40th Annual AAAI Conference on Artificial Intelligence)
- Link to talk: https://aaai.org/wp-content/uploads/2025/12/Main-track-poster-presentations_20251210.pdf
Contributions 📃
This research introduces:
- A novel hierarchical, model-agnostic XCV method for images, named *ShapBPT*, that integrates an adaptive multiscale partitioning algorithm with the Owen approximation of the Shapley coefficients. We repurpose the Binary Partition Tree (BPT) algorithm (Salembier & Garrido, 2000) to construct hierarchical structures suited for explainability. This approach overcomes the inflexible, image-agnostic hierarchies of state-of-the-art methods such as SHAP.
- An empirical assessment of the proposed method on natural color images showcasing its efficacy across various scoring targets, in comparison to established state-of-the-art XCV methods, and a controlled human-subject study comparing explanation interpretability across methods.
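To make the core idea concrete, below is a minimal, illustrative sketch of Owen-style hierarchical Shapley attribution over a binary partition tree. All function names are hypothetical and this is *not* the ShapBPT implementation (which builds the tree from image morphology); it only shows the underlying recursion: at every internal node, each child's marginal contribution is averaged with and without its sibling present, halving the weight at each level.

```python
from collections import defaultdict

def leaves(node):
    """Set of leaf feature indices under a (possibly nested) tree node."""
    if isinstance(node, tuple):
        return leaves(node[0]) | leaves(node[1])
    return frozenset([node])

def masked_eval(model, x, coalition, baseline=0.0):
    """Evaluate the model with absent features replaced by a baseline value."""
    return model([xi if i in coalition else baseline for i, xi in enumerate(x)])

def owen_attributions(model, x, node, context=frozenset(), weight=1.0, phi=None):
    """Distribute credit over leaves following the tree's coalition structure."""
    if phi is None:
        phi = defaultdict(float)
    if not isinstance(node, tuple):
        # Leaf: weighted marginal contribution given the fixed context.
        phi[node] += weight * (masked_eval(model, x, context | {node})
                               - masked_eval(model, x, context))
        return phi
    left, right = node
    # Two-player Shapley split: average each child's marginal contribution
    # with and without its sibling already present in the coalition.
    owen_attributions(model, x, left,  context,                 weight / 2, phi)
    owen_attributions(model, x, left,  context | leaves(right), weight / 2, phi)
    owen_attributions(model, x, right, context,                 weight / 2, phi)
    owen_attributions(model, x, right, context | leaves(left),  weight / 2, phi)
    return phi

# Toy example: a linear model over 4 "pixels", for which this recursion
# recovers the exact Shapley values phi_i = w_i * x_i.
w = [1.0, 0.5, 2.0, -1.0]
model = lambda z: sum(wi * zi for wi, zi in zip(w, z))
tree = ((0, 1), (2, 3))  # a tiny binary partition tree
phi = owen_attributions(model, [1.0, 2.0, 3.0, 4.0], tree)
```

Note that each leaf at depth *d* is visited through 2^*d* contexts, so the cost depends on the tree shape; making that hierarchy image-adaptive (via the BPT) is precisely what the paper leverages for efficiency and semantic alignment.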
Method Availability
- The method is available at https://github.com/amparore/shap_bpt.
- The tests and reproducible results are available at https://github.com/rashidrao-pk/shap_bpt_tests.
- Technical Appendix available here.
Datasets and Models
- Datasets: ImageNet, MS-COCO, MVTec, CelebA-HQ.
- Models: ViT, SwinViT, ResNet-50, YOLO11, Custom CNN, VAE-GAN.
Experiments Summary
| ID | Dataset | Size | Model | Short Description |
|---|---|---|---|---|
| E1 | ImageNet-S50 | 574 | ResNet50 | Common ImageNet setup |
| E2 | ImageNet-S50 | 574 | Ideal | Linear ideal model |
| E3 | ImageNet-S50 | 621 | SwinViT | Vision Transformer |
| E4 | MS-COCO | 274 | YOLO11s | Object detection |
| E5 | CelebA | 400 | CNN | Facial attribute localization |
| E6 | MVTec | 280 | VAE-GAN | Anomaly Detection |
| E7 | ImageNet-S50 | 593 | ViT-Base16 | Vision Transformer |
| E8 | — | — | — | User preference study using E1 saliency maps |
Authors ✍️
| Sr. No. | Author Name | Affiliation | Google Scholar |
|---|---|---|---|
| 1. | Muhammad Rashid | University of Torino, Dept. of Computer Science, Torino, Italy | Muhammad Rashid |
| 2. | Elvio G. Amparore | University of Torino, Dept. of Computer Science, Torino, Italy | Elvio G. Amparore |
| 3. | Enrico Ferrari | Rulex Innovation Labs, Rulex Inc., Genova, Italy | Enrico Ferrari |
| 4. | Damiano Verda | Rulex Innovation Labs, Rulex Inc., Genova, Italy | Damiano Verda |
Keywords 🔍
Shapley Values · Binary Partition Trees · eXplainable AI · XAI · Image Feature Attributions
Recommended citation:
