Portfolio item number 1
Short description of portfolio item number 1
Published in Multimedia Tools and Applications, 2018
Object Detection and Classification: A Joint Selection and Fusion Strategy of Deep Convolutional Neural Network and SIFT Point Features
Recommended citation: Rashid, Muhammad et al. (2018). "Object Detection and Classification: A Joint Selection and Fusion Strategy of Deep Convolutional Neural Network and SIFT Point Features." In Multimedia Tools and Applications.
Download Paper
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2019
Deep CNN and geometric features-based gastrointestinal tract diseases detection and classification from wireless capsule endoscopy images
Published in Multimedia Tools and Applications, 2019
Classification of gastrointestinal diseases of stomach from WCE using improved saliency-based method and discriminant features selection
Recommended citation: Rashid, Muhammad et al. (2019). "Classification of gastrointestinal diseases of stomach from WCE using improved saliency-based method and discriminant features selection." In Multimedia Tools and Applications.
Download Paper
Published in Biomedical Research, 2019
Region-based active contour JSEG fusion technique for skin lesion segmentation from dermoscopic images
Published in Neural Computing and Applications, 2019
An integrated framework of skin lesion detection and recognition through saliency method and optimal deep neural network features selection
Published in Sustainability, 2020
A sustainable deep learning framework for object recognition using multi-layers deep features fusion and selection
Published in Current Medical Imaging, 2021
An Optimized Approach for Breast Cancer Classification for Histopathological Images Based on Hybrid Feature Set
Published in Mathematics, 2023
A novel light u-net model for left ventricle segmentation using MRI
Published in AAAI-24 - 38th AAAI Conference on Artificial Intelligence, 2024
An improved version of LIME that generates meaningful explanations in the cases where standard LIME degenerates into meaningless ones.
Recommended citation: Rashid, Muhammad et al. (2024). "." Proceedings of the AAAI Conference on Artificial Intelligence. 38(13).
Download Paper | Download Slides
Published in xAI 2024 | Explainable Artificial Intelligence, 2024
A case study highlighting the use of a VAE-GAN-based generative AI approach to detect anomalies in industrial inspection systems.
Recommended citation: Rashid, Muhammad et al. (2024). "." In World Conference on Explainable Artificial Intelligence, pp. 243-254. Cham: Springer Nature Switzerland, 2024.
Download Paper | Download Slides
Published in AAAI-26 | 40th Annual AAAI Conference on Artificial Intelligence, 2026
A novel XAI method that integrates the data-aware method (BPT) into generating image feature attributions.
Download Poster
Published:
We investigate the use of a stratified sampling approach for LIME Image, a popular model-agnostic explainable AI method for computer vision tasks, in order to reduce the artifacts generated by typical Monte Carlo sampling. Such artifacts are due to the undersampling of the dependent variable in the synthetic neighborhood around the image being explained, which may result in inadequate explanations due to the impossibility of fitting a linear regressor on the sampled data. We then highlight a connection with the Shapley theory, where similar arguments about undersampling and sample relevance were suggested in the past. We derive all the formulas and adjustment factors required for an unbiased stratified sampling estimator. Experiments show the efficacy of the proposed approach.
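The idea of stratified neighborhood sampling can be illustrated with a minimal sketch (this is not the paper's implementation; `stratified_masks` and its uniform-per-stratum allocation are assumptions introduced here for illustration). Strata are indexed by the number of active superpixels in a binary perturbation mask; each stratum is sampled separately, and the probability mass of the stratum under plain Bernoulli(0.5) Monte Carlo sampling serves as the adjustment factor that keeps a downstream estimator unbiased:

```python
from math import comb

import numpy as np


def stratified_masks(n_features, n_samples, rng):
    """Draw binary perturbation masks stratified by the number of active
    features (superpixels), instead of plain Monte Carlo sampling.

    Returns (masks, weights): each stratum k (k active features) receives
    an equal share of the budget, and each sample carries the weight
    p_k / m_k, where p_k is the stratum's probability mass under uniform
    Bernoulli(0.5) sampling and m_k the number of samples in the stratum.
    """
    strata = np.arange(n_features + 1)            # stratum k = k features on
    per_stratum = max(1, n_samples // len(strata))
    total = 2.0 ** n_features                     # number of possible masks
    masks, weights = [], []
    for k in strata:
        p_k = comb(n_features, k) / total         # stratum probability mass
        for _ in range(per_stratum):
            mask = np.zeros(n_features, dtype=int)
            on = rng.choice(n_features, size=k, replace=False)
            mask[on] = 1
            masks.append(mask)
            weights.append(p_k / per_stratum)     # adjustment factor
    return np.array(masks), np.array(weights)
```

Because every stratum is guaranteed coverage, small-coalition and large-coalition masks that plain Monte Carlo rarely draws are always present, while the weights reweight each stratum back to the original sampling distribution (they sum to 1 by construction).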
Published:
Generative models based on variational autoencoders are a popular technique for detecting anomalies in images in a semi-supervised context. A common approach employs the anomaly score to detect the presence of anomalies, and it is known to reach high levels of accuracy on benchmark datasets. However, since anomaly scores are computed from reconstruction disparities, they can conceal a reliance on spurious features, raising concerns about their actual efficacy. This case study explores the robustness of an anomaly detection system based on variational autoencoder generative models through the use of eXplainable AI methods. The goal is to get a different perspective on the real performance of anomaly detectors that use reconstruction differences. In our case study we discovered that, in many cases, samples are detected as anomalous for wrong or misleading reasons.
Published:
Pixel-level feature attributions play a key role in Explainable Computer Vision (XCV) by revealing how visual features influence model predictions. While hierarchical Shapley methods based on the Owen formula offer a principled explanation framework, existing approaches overlook the multiscale and morphological structure of images, resulting in inefficient computation and weak semantic alignment.