Muhammad Rashid

Machine Learning Researcher specializing in Explainable AI (XAI),
Computer Vision, and Visual Anomaly Detection for real-world and safety-critical systems.


🔬 Research Focus

  • Explainable AI for trustworthy decision-making
  • Visual anomaly detection in industrial and robotic environments
  • Pixel-level feature attribution using Shapley-based methods
  • Robust and efficient explanation methods for computer vision

📌 AAAI 2026: ShapBPT – Image Feature Attribution using Data-Aware Binary Partition Trees
📌 ICPE 2026 / QualITA Workshop: ShapBPT in Perspective: A Consolidated Review and an eXplainable Anomaly Detection Case Study


โš™๏ธ Systems & Engineering

  • Real-time anomaly detection for robotics safety
  • ROS2-based live inference pipeline
  • VAE / VAE-GAN models for industrial anomaly detection
  • Multi-camera monitoring and safety-area segmentation
  • End-to-end workflow: training → threshold calibration → live deployment
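The threshold-calibration step in a workflow like this can be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes a reconstruction-based model (e.g. a VAE) whose per-frame reconstruction errors on normal data are available, and the function names are invented for the example.

```python
import numpy as np

def calibrate_threshold(errors: np.ndarray, percentile: float = 99.5) -> float:
    """Pick an anomaly threshold from reconstruction errors on normal frames."""
    return float(np.percentile(errors, percentile))

def is_anomalous(error: float, threshold: float) -> bool:
    """Flag a frame whose reconstruction error exceeds the calibrated threshold."""
    return error > threshold

# Toy example: per-frame mean reconstruction errors from a normal-only validation set.
rng = np.random.default_rng(0)
normal_errors = rng.normal(loc=0.02, scale=0.005, size=1000)
tau = calibrate_threshold(normal_errors)
print(is_anomalous(0.015, tau))  # False: a typical frame
print(is_anomalous(0.10, tau))   # True: clearly out of distribution
```

Calibrating on normal data only, then scoring live frames against the fixed threshold, is what separates the offline training stage from the live-deployment stage in the workflow above.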

🚀 Key results:

  • ~12 FPS real-time inference
  • ~99.6% detection accuracy in industrial scenarios
  • Explainable anomaly maps for safety-critical decision support

🧠 Open Source & Tools

  • ShapBPT – AAAI 2026 explainability framework
  • XAD – Explainable anomaly detection with ShapBPT
  • LIME Stratified Sampling – Improved explanation stability
  • AI on Edge Devices – Lightweight deployment pipelines
  • XAI evaluation tools for saliency, attribution, and benchmarking

📄 Selected Publications


โžก๏ธ View all projects

🔧 Real-Time Anomaly Detection for Robotics

  • Industrial safety monitoring system
  • Multi-area detection: RoboArm, Conveyor Belt, Pallet Left, Pallet Right
  • ROS2 integration with live inference and anomaly scoring
  • Visual anomaly maps for decision interpretation

Code


🧠 ShapBPT

  • AAAI 2026 explainability method for images
  • Uses data-aware Binary Partition Trees
  • Provides hierarchical Shapley-based feature attributions
  • More image-structure-aware than the standard SHAP Partition explainer

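To make concrete what quantity ShapBPT attributes, here is a toy exact-Shapley computation over image regions. This is not the ShapBPT API or its tree-based algorithm; the region names and scoring function are invented, and exact enumeration is only feasible because the example has four regions.

```python
from itertools import combinations
from math import factorial

def shapley_values(regions, value):
    """Exact Shapley values of a set function over a small list of regions."""
    n = len(regions)
    phi = {r: 0.0 for r in regions}
    for r in regions:
        others = [x for x in regions if x != r]
        for k in range(n):
            for S in combinations(others, k):
                # Weight of coalition S in the Shapley average.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[r] += w * (value(set(S) | {r}) - value(set(S)))
    return phi

# Toy "model": the score depends only on two of the four image regions.
def score(active):
    return 2.0 * ("top_left" in active) + 1.0 * ("bottom_right" in active)

regions = ["top_left", "top_right", "bottom_left", "bottom_right"]
phi = shapley_values(regions, score)
print(phi)  # top_left ≈ 2.0, bottom_right ≈ 1.0, others ≈ 0.0
```

A partition-tree method like ShapBPT avoids this exponential enumeration by organizing regions hierarchically; the toy above only shows the attribution target, not the efficient algorithm.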
🔗 Links:

PDF arXiv Code Tests PyPI User Study Poster


🧪 XAD: Explainable Anomaly Detection

  • Applies ShapBPT to anomaly detection systems
  • Explains why an image or region is considered anomalous
  • Supports interpretation of black-box anomaly detection models
  • Code: github.com/rashidrao-pk/XAD
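One common way such systems localize what is anomalous, shown here as a generic sketch rather than XAD's actual code: the per-pixel reconstruction error serves as an anomaly map, and region-level attribution (e.g. ShapBPT) can then explain the resulting score.

```python
import numpy as np

def anomaly_map(image: np.ndarray, reconstruction: np.ndarray) -> np.ndarray:
    """Per-pixel squared reconstruction error: high values mark regions
    the model could not reproduce, i.e. candidate anomalies."""
    return (image - reconstruction) ** 2

x = np.zeros((4, 4))
x[1, 2] = 1.0                 # a "defect" pixel the model has never seen
x_hat = np.zeros((4, 4))      # the model reconstructs the normal appearance
amap = anomaly_map(x, x_hat)
# The highest error sits at the defect location (row 1, col 2).
print(np.unravel_index(amap.argmax(), amap.shape))
```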

📊 LIME Stratified Sampling

  • Improves stability of LIME image explanations
  • Reduces variance in perturbation-based sampling
  • Provides more reliable local explanations
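The core idea can be sketched as follows, under the assumption that stratification is over coalition size (how many superpixels stay "on" in each perturbation); the function names are mine, not the published code's.

```python
import numpy as np

def stratified_masks(n_samples: int, n_superpixels: int, rng) -> np.ndarray:
    """Sample binary perturbation masks stratified by the number of active
    superpixels, instead of i.i.d. Bernoulli(0.5) per superpixel."""
    # Cycle evenly through coalition sizes 1..n_superpixels-1, then shuffle.
    sizes = np.tile(np.arange(1, n_superpixels),
                    n_samples // (n_superpixels - 1) + 1)
    sizes = rng.permutation(sizes)[:n_samples]
    masks = np.zeros((n_samples, n_superpixels), dtype=int)
    for i, k in enumerate(sizes):
        masks[i, rng.choice(n_superpixels, size=int(k), replace=False)] = 1
    return masks

rng = np.random.default_rng(42)
masks = stratified_masks(200, 10, rng)
print(masks.shape)                      # (200, 10)
print(sorted(set(masks.sum(axis=1).tolist())))  # every size 1..9 is covered
```

Because every coalition size is guaranteed roughly equal representation, the surrogate model in LIME is fit on a less variable sample, which is the source of the stability gain described above.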

๐ŸŒ Experience

  • 🎓 PhD Researcher – University of Turin, Italy
  • 🏭 Industrial Research – RuleX Innovation Labs
  • 🇪🇺 EU Project – DistriMuSe: Robotics Safety & AI
  • 🌍 Visiting Researcher – University of Granada, Spain

๐Ÿค Collaboration

I'm interested in collaborating on:

  • Explainable AI and trustworthy machine learning
  • Visual anomaly detection
  • Industrial AI and robotics safety
  • Computer vision for real-world systems
  • Vision-language models and explainability

📫 Contact