Muhammad Rashid
Machine Learning Researcher specializing in Explainable AI (XAI),
Computer Vision, and Visual Anomaly Detection for real-world and safety-critical systems.
🔬 Research Focus
- Explainable AI for trustworthy decision-making
- Visual anomaly detection in industrial and robotic environments
- Pixel-level feature attribution using Shapley-based methods
- Robust and efficient explanation methods for computer vision
🎉 AAAI 2026: ShapBPT - Image Feature Attribution using Data-Aware Binary Partition Trees
🎉 ICPE 2026 / QualITA Workshop: ShapBPT in Perspective: A Consolidated Review and an eXplainable Anomaly Detection Case Study
⚙️ Systems & Engineering
- Real-time anomaly detection for robotics safety
- ROS2-based live inference pipeline
- VAE / VAE-GAN models for industrial anomaly detection
- Multi-camera monitoring and safety-area segmentation
- End-to-end workflow: training → threshold calibration → live deployment
📈 Achieved:
- ~12 FPS real-time inference
- ~99.6% detection accuracy in industrial scenarios
- Explainable anomaly maps for safety-critical decision support
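The training → threshold calibration → live deployment workflow can be sketched generically: fit a decision threshold on reconstruction errors from normal frames, then flag live frames whose error exceeds it. This is an illustrative sketch, not the deployed system; the quantile, the synthetic error distribution, and all function names are assumptions.

```python
import numpy as np

def calibrate_threshold(errors, quantile=0.99):
    """Choose an anomaly threshold from reconstruction errors on normal frames."""
    return float(np.quantile(np.asarray(errors), quantile))

def is_anomalous(error, threshold):
    """Flag a live frame whose reconstruction error exceeds the threshold."""
    return error > threshold

# Hypothetical validation errors from a VAE on normal frames.
rng = np.random.default_rng(0)
normal_errors = rng.gamma(shape=2.0, scale=0.01, size=1000)
tau = calibrate_threshold(normal_errors)
print(is_anomalous(10 * tau, tau))  # a frame with a clearly abnormal error -> True
```

In a live pipeline the same `is_anomalous` check would run per frame on the reconstruction error produced by the trained model.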
🔧 Open Source & Tools
- ShapBPT: AAAI 2026 explainability framework
- XAD: Explainable anomaly detection with ShapBPT
- LIME Stratified Sampling: Improved explanation stability
- AI on Edge Devices: Lightweight deployment pipelines
- XAI evaluation tools for saliency, attribution, and benchmarking
📚 Selected Publications
- ShapBPT: Image Feature Attribution using Data-Aware Binary Partition Trees. AAAI 2026
- ShapBPT in Perspective: A Consolidated Review and an eXplainable Anomaly Detection Case Study. QualITA Workshop, ICPE 2026 (ACM)
- Using Stratified Sampling to Improve LIME Image Explanations. AAAI 2024
- Can I Trust My Anomaly Detection System? XAI World 2024

➡️ See full publication list
🚀 Featured Projects
➡️ View all projects
🧠 Real-Time Anomaly Detection for Robotics
- Industrial safety monitoring system
- Multi-area detection: RoboArm, Conveyor Belt, Pallet Left, Pallet Right
- ROS2 integration with live inference and anomaly scoring
- Visual anomaly maps for decision interpretation
🧠 ShapBPT
- AAAI 2026 explainability method for images
- Uses data-aware Binary Partition Trees
- Provides hierarchical Shapley-based feature attributions
- More sensitive to image structure than SHAP's standard partition explainer
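The Shapley-value idea behind these attributions can be illustrated on a toy image-segment game: each segment's attribution is its average marginal contribution over all coalitions of the other segments. This is a generic exact computation for a handful of segments, not the ShapBPT algorithm itself; the segment names and the additive toy value function are invented for the example.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by enumerating all coalitions (feasible for small n only)."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for coal in combinations(others, k):
                # Weight of a coalition of size k in the Shapley formula.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += w * (value(set(coal) | {p}) - value(set(coal)))
    return phi

# Toy "model": the prediction score contributed by the segments left unmasked.
contrib = {"sky": 0.1, "object": 0.7, "ground": 0.2}
score = lambda coalition: sum(contrib[s] for s in coalition)
print(shapley_values(list(contrib), score))
```

For an additive game like this one the Shapley value of each segment equals its individual contribution; real models are not additive, which is why the full coalition average (or a structured approximation such as a partition tree) is needed.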
🔗 Links: PDF · arXiv · Code · Tests · PyPI · User Study · Poster
🧪 XAD: Explainable Anomaly Detection
- Applies ShapBPT to anomaly detection systems
- Explains why an image or region is considered anomalous
- Supports interpretation of black-box anomaly detection models
- Code: github.com/rashidrao-pk/XAD
📊 LIME Stratified Sampling
- Improves stability of LIME image explanations
- Reduces variance in perturbation-based sampling
- Provides more reliable local explanations
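One way to see the variance-reduction idea: instead of drawing LIME's binary superpixel masks uniformly at random, stratify them by sparsity level so every number of active superpixels is evenly represented. This is a generic illustration of the concept, not the paper's implementation; the function name and sampling scheme are assumptions.

```python
import numpy as np

def stratified_masks(n_features, n_samples, rng=None):
    """Binary perturbation masks stratified by the number of active superpixels,
    so each sparsity level appears instead of being left to chance."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Spread the number of "on" features evenly across the sample budget.
    strata = np.linspace(1, n_features - 1, num=n_samples).round().astype(int)
    masks = np.zeros((n_samples, n_features), dtype=int)
    for i, k in enumerate(strata):
        on = rng.choice(n_features, size=k, replace=False)
        masks[i, on] = 1
    return masks

masks = stratified_masks(n_features=8, n_samples=16)
print(masks.sum(axis=1))  # active-feature counts climb evenly from 1 to 7
```

Each mask row would then be rendered into a perturbed image and scored by the black-box model, exactly as in standard LIME; only the neighborhood sampling changes.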
🎓 Experience
- 🎓 PhD Researcher - University of Turin, Italy
- 🏭 Industrial Research - RuleX Innovation Labs
- 🇪🇺 EU Project - DistriMuSe: Robotics Safety & AI
- 🌍 Visiting Researcher - University of Granada, Spain
🤝 Collaboration
I'm interested in collaborations on:
- Explainable AI and trustworthy machine learning
- Visual anomaly detection
- Industrial AI and robotics safety
- Computer vision for real-world systems
- Vision-language models and explainability
📫 Contact
- 📧 Email: {FIRSTNAME}.{LASTNAME}@unito.it
- 🐙 GitHub: github.com/rashidrao-pk
- 🎓 Google Scholar: Muhammad Rashid
