ShapBPT in Perspective: A Consolidated Review and an eXplainable Anomaly Detection Case Study

Published in QualITA Workshop | ICPE 2026 (ACM), 2026

This paper presents **ShapBPT in Perspective**, a consolidated review and practical case study of **ShapBPT** for **eXplainable Anomaly Detection (XAD)**. The work connects hierarchical Shapley-based explanations with real-world anomaly detection scenarios, showing how image feature attributions can support the interpretation of black-box anomaly detection systems.

The study focuses on using **ShapBPT** to explain anomaly detection decisions by highlighting the image regions that contribute most to abnormal predictions. By relying on a data-aware **Binary Partition Tree (BPT)** hierarchy, ShapBPT provides structured, multiscale explanations that follow image morphology more closely than fixed-grid explanation strategies.

* **Paper**: [ACM Digital Library](https://dl.acm.org/doi/10.1145/3777911.3800638)
* **Code**: [XAD GitHub Repository](https://github.com/rashidrao-pk/XAD)
* **Workshop**: [QualITA Workshop](https://qualitawg.github.io/)
* **Main Venue**: [ICPE 2026](https://icpe2026.spec.org/)
* **Venue**: QualITA Workshop, **ICPE 2026 (ACM)**
* **Location**: Florence, Italy

Contributions 📄
===

In this work, we present:

1. A consolidated review of **ShapBPT** and its role in explainable computer vision.
2. An **eXplainable Anomaly Detection** case study showing how ShapBPT can be used to interpret anomaly detection outputs.
3. A practical connection between hierarchical Shapley explanations and real-world anomaly detection systems.
4. Open-source code for reproducibility and further research.

How It Works
===

How ShapBPT works for Anomaly Detection
---

*Figure: Workflow of Explainable Anomaly Detection using ShapBPT.*
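To make the region-level attribution idea concrete, here is a minimal, self-contained sketch of Shapley-value attribution over image regions. Everything in it is an illustrative stand-in, not the actual ShapBPT algorithm: the two-region partition plays the role of a (trivial, depth-1) binary partition, the mean-intensity scorer stands in for a black-box anomaly detector, and the Shapley values are computed exactly by brute force rather than with ShapBPT's hierarchy-aware strategy.

```python
# Illustrative sketch only: exact Shapley attribution over a handful of
# image regions. In ShapBPT the regions would come from a data-aware
# Binary Partition Tree and the score function would be a black-box
# anomaly detector; both are toy stand-ins here.
from itertools import combinations
from math import factorial

import numpy as np

def shapley_attributions(score_fn, image, masks, baseline=0.0):
    """Exact Shapley value of each region (boolean mask) for score_fn(image)."""
    n = len(masks)
    phi = np.zeros(n)

    def masked_score(coalition):
        # Pixels of regions outside the coalition are set to the baseline.
        out = np.full_like(image, baseline, dtype=float)
        for i in coalition:
            out[masks[i]] = image[masks[i]]
        return score_fn(out)

    for i in range(n):
        others = [p for p in range(n) if p != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (masked_score(set(S) | {i}) - masked_score(set(S)))
    return phi

# Toy example: a 4x4 "image" whose anomaly score is its mean intensity,
# split into left/right halves (a depth-1 binary partition).
image = np.zeros((4, 4))
image[:, 2:] = 1.0                              # bright right half = "anomaly"
left = np.zeros((4, 4), dtype=bool)
left[:, :2] = True
right = ~left

phi = shapley_attributions(lambda x: x.mean(), image, [left, right])
# By efficiency, phi sums to score(image) - score(baseline) = 0.5,
# and all of it is attributed to the bright right half.
```

Exact enumeration is exponential in the number of regions, which is precisely why a practical method needs an approximation strategy; ShapBPT's contribution is to organize that computation along the BPT hierarchy instead of a fixed grid.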
The method explains anomaly detection decisions by assigning attribution scores to image regions. Instead of relying on fixed geometric partitions, ShapBPT uses a **data-aware image hierarchy** based on Binary Partition Trees, which allows the explanation to follow meaningful visual structures in the image.

Datasets and Models
===

* **Task**: Explainable Anomaly Detection
* **Method**: ShapBPT
* **Model Type**: Black-box anomaly detection models
* **Explanation Type**: Pixel-level / region-level feature attribution

Authors ✍️
===

| Sr. No. | Author Name | Affiliation | Google Scholar |
| :--: | :--: | :--: | :--: |
| 1. | Muhammad Rashid | University of Torino, Dept. of Computer Science, Torino, Italy | [Muhammad Rashid](https://scholar.google.com/citations?user=F5u_Z5MAAAAJ&hl=en) |
| 2. | Elvio G. Amparore | University of Torino, Dept. of Computer Science, Torino, Italy | [Elvio G. Amparore](https://scholar.google.com/citations?user=Hivlp1kAAAAJ&hl=en&oi=ao) |
| 3. | Enrico Ferrari | Rulex Innovation Labs, Rulex Inc., Genova, Italy | [Enrico Ferrari](https://scholar.google.com/citations?user=QOflGNIAAAAJ&hl=en&oi=ao) |
| 4. | Damiano Verda | Rulex Innovation Labs, Rulex Inc., Genova, Italy | [Damiano Verda](https://scholar.google.com/citations?user=t6o9YSsAAAAJ&hl=en&oi=ao) |

Keywords 🔍
===

ShapBPT · Explainable Anomaly Detection · XAI · Image Feature Attributions · Shapley Values · Binary Partition Trees · ICPE 2026 · QualITA Workshop

Citation
===

```bibtex
@inproceedings{rashid2026shapbptperspective,
  author    = {Rashid, Muhammad and Amparore, Elvio G.},
  title     = {ShapBPT in Perspective: A Consolidated Review and an eXplainable Anomaly Detection Case Study},
  booktitle = {QualITA Workshop, ICPE 2026},
  year      = {2026},
  publisher = {ACM},
  doi       = {10.1145/3777911.3800638}
}
```

Recommended citation: Rashid, Muhammad and Amparore, Elvio G. (2026). ShapBPT in Perspective: A Consolidated Review and an eXplainable Anomaly Detection Case Study. QualITA Workshop, ICPE 2026 (ACM).