#GradCAM
Closing the gap in oral cancer detection. 🦷

A new smartphone-based imaging system uses AI + autofluorescence to help dentists detect oral cancer early—fitting seamlessly into routine exams.

Read the #BiophotonicsDiscovery news story here: https://bit.ly/4qRrysc
November 11, 2025 at 3:51 PM
Job: Research Associate (m/f/d), AI and Security

Responsibilities (among others)
Research on and implementation of modern vision models (e.g. YOLO, RT-DETRv2, ViT, Swin, ResNet, GNNs)
Development of classification systems (RISE, GradCAM, ViT Shapley, among others)

jobs.fraunhofer.de/job/Darms...
November 11, 2025 at 10:00 AM
They conducted a methodological and experimental evaluation of five popular explanation methods (#Occlusion, #LIME, #GradCAM, #LRP, #DeepLIFT) and ten metrics covering five categories: faithfulness, robustness, localization, complexity, and randomization.
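As a rough illustration of what a metric in the faithfulness category measures, here is a generic pixel-deletion curve sketched in PyTorch; it is not the study's protocol, and `model`, `image`, and `attribution` are assumed inputs.

```python
# Hedged, generic sketch of one faithfulness metric (a pixel-deletion curve):
# occlude the most-attributed pixels first and record how the class score falls.
# `model`, `image`, and `attribution` are assumed inputs, not the study's code.
import torch
import torch.nn.functional as F

@torch.no_grad()
def deletion_curve(model, image, attribution, class_idx, steps=20):
    """image: (1, 3, H, W) tensor; attribution: (H, W) tensor; returns per-step scores."""
    order = attribution.flatten().argsort(descending=True)   # most important pixels first
    chunk = max(1, order.numel() // steps)
    work, scores = image.clone(), []
    for i in range(steps + 1):
        scores.append(F.softmax(model(work), dim=1)[0, class_idx].item())
        kill = order[i * chunk:(i + 1) * chunk]
        work.view(1, 3, -1)[..., kill] = 0                    # zero out the next chunk
    return scores  # a faster drop suggests a more faithful attribution
```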
October 27, 2025 at 2:34 PM
Researchers introduced OCEAN, an object‑centric multi‑agent framework that matches top vision accuracy while providing explanations users rate as more intuitive than GradCAM or LIME. https://getnews.me/object-centric-multi-agent-framework-advances-transparent-visual-ai/ #objectcentric #explainableai
September 30, 2025 at 1:37 PM
PolypSeg-GradCAM, a U-Net/Grad-CAM hybrid, hit a mean IoU of 0.9257 and Dice scores over 0.96 on the Kvasir-SEG set of 1,000 colonoscopy images in the benchmark study. Read more: https://getnews.me/explainable-ai-boosts-polypseg-gradcam/ #polypseggradcam #colonoscopy #ai
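For reference, the two reported overlap metrics for binary masks can be computed as in the illustrative NumPy helper below (not code from the paper). For a single mask pair, Dice = 2·IoU/(1 + IoU), so an IoU of 0.9257 maps to a Dice of roughly 0.961, consistent with the reported figures.

```python
# Illustrative NumPy helper (not code from the paper) showing how the two
# reported overlap metrics are computed for binary segmentation masks.
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8):
    """pred, target: {0, 1} masks of the same shape; returns (IoU, Dice)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + target.sum() + eps)
    return float(iou), float(dice)
```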
September 25, 2025 at 6:10 PM
Akwasi Asare, Ulas Bagci
PolypSeg-GradCAM: Towards Explainable Computer-Aided Gastrointestinal Disease Detection Using U-Net Based Segmentation and Grad-CAM Visualization on the Kvasir Dataset
https://arxiv.org/abs/2509.18159
September 24, 2025 at 12:04 PM
Akwasi Asare, Ulas Bagci: PolypSeg-GradCAM: Towards Explainable Computer-Aided Gastrointestinal Disease Detection Using U-Net Based Segmentation and Grad-CAM Visualization on the Kvasir Dataset https://arxiv.org/abs/2509.18159 https://arxiv.org/pdf/2509.18159 https://arxiv.org/html/2509.18159
September 24, 2025 at 6:30 AM
Faster video processing by selecting tokens (visual attention by regressing GradCAM output - pretty clever - another ICCV paper shown in advance 😅)
September 16, 2025 at 9:45 AM
Knowing where to go may shape what you see 👀. Analysing EEG data with a CNN coupled with GradCAM, we investigated the temporal dynamics of navigational affordance extraction, and how knowing where to go interferes with this process.
September 8, 2025 at 3:23 PM
Akwasi Asare, Mary Sagoe, Justice Williams Asare: FUTransUNet-GradCAM: A Hybrid Transformer-U-Net with Self-Attention and Explainable Visualizations for Foot Ulcer Segmentation https://arxiv.org/abs/2508.03758 https://arxiv.org/pdf/2508.03758 https://arxiv.org/html/2508.03758
August 7, 2025 at 6:36 AM
We first created heatmaps of relevant image regions for individual image dimensions using GradCAM. The results made sense: for example, a putative technology dimension highlighted the button of a flashlight as informative about technology. 10/n
June 23, 2025 at 8:03 PM
capture global relationships in images, are particularly effective for medical imaging tasks. Transfer learning helps to mitigate data constraints by fine-tuning pre-trained models. Furthermore, Explainable AI (XAI) methods such as GradCAM, GradCAM++, [4/6 of https://arxiv.org/abs/2505.16039v1]
May 23, 2025 at 5:59 AM
VGG19 and Xception achieve the highest accuracies, with 98.90% and 98.66% respectively. Additionally, Explainable AI (XAI) techniques such as GradCAM, GradCAM++, LayerCAM, ScoreCAM and FasterScoreCAM are used to enhance transparency by highlighting [5/7 of https://arxiv.org/abs/2505.16033v1]
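A hedged sketch of one way to produce several of the CAM variants listed above, using the third-party pytorch-grad-cam package (not necessarily what the paper used); `model`, `input_tensor`, `target_layer`, and `class_idx` are assumed to come from the user's own pipeline, and constructor arguments vary slightly across library versions.

```python
# Hedged sketch: generating several CAM variants with the third-party
# pytorch-grad-cam package (not necessarily what the paper used).
from pytorch_grad_cam import GradCAM, GradCAMPlusPlus, LayerCAM, ScoreCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

def cam_maps(model, input_tensor, target_layer, class_idx):
    """Return a dict of (H, W) heatmaps, one per CAM method."""
    methods = {
        "GradCAM": GradCAM,
        "GradCAM++": GradCAMPlusPlus,
        "LayerCAM": LayerCAM,
        "ScoreCAM": ScoreCAM,
    }
    targets = [ClassifierOutputTarget(class_idx)]
    heatmaps = {}
    for name, Method in methods.items():
        cam = Method(model=model, target_layers=[target_layer])
        heatmaps[name] = cam(input_tensor=input_tensor, targets=targets)[0]
    return heatmaps
```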
May 23, 2025 at 5:58 AM
Mapping (GradCAM), Score-CAM, Faster Score-CAM, and XGradCAM. Our methodology consistently outperforms current approaches, achieving 99.66% accuracy in 4-class classification, 99.63% in 3-class classification, and 100% in binary classification [5/8 of https://arxiv.org/abs/2505.13906v1]
May 21, 2025 at 6:01 AM
GradCAM, provide visual interpretation based on heatmaps but lack conceptual clarity. Prototype-based approaches, like ProtoPNet and PIPNet, offer a more structured explanation but rely on fixed patches, limiting their robustness and semantic [2/5 of https://arxiv.org/abs/2504.12197v1]
April 17, 2025 at 6:09 AM
inpainting and gradient-based filtering using GradCAM. Experiments on the image caption generation task using the MS COCO 2017 dataset, BLEU, ROUGE, and BERTScore quantitative evaluation, and qualitative evaluation using an activation heatmap showed [4/5 of https://arxiv.org/abs/2503.22941v1]
April 1, 2025 at 5:54 AM
(CNNs), GradCAM explanations and Deep Semi-Supervised Anomaly Detection. The average classification accuracy on two faulty classes is improved by 8% and maintenance operators are required to manually reclassify only 15% of the images. We compare the [5/7 of https://arxiv.org/abs/2503.15415v1]
March 20, 2025 at 6:09 AM
that although attention maps show promise under certain conditions and generally surpass GradCAM in explainability, they are outperformed by transformer-specific interpretability methods. Our findings indicate that the efficacy of attention maps as a [5/6 of https://arxiv.org/abs/2503.09535v1]
March 13, 2025 at 6:02 AM
XGradCAM indicates a higher confidence increase (e.g., 0.12 in glioma tumor) compared to GradCAM++ (0.09) and LayerCAM (0.08).
Implications. Based on the experimental results and recent advancements, we outline future research directions to enhance [6/7 of https://arxiv.org/abs/2503.08420v1]
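A hedged sketch of how an "increase in confidence" score of this kind is commonly computed, following the Grad-CAM++ evaluation protocol (the paper above may define it differently): mask each image with its own CAM and count how often the class probability rises.

```python
# Hedged sketch of an "increase in confidence" style metric, following the
# Grad-CAM++ evaluation protocol; the paper may compute it differently.
import torch
import torch.nn.functional as F

@torch.no_grad()
def confidence_increase(model, images, cams, class_ids):
    """images: (N, 3, H, W); cams: (N, H, W) in [0, 1]; class_ids: (N,) long tensor."""
    probs_full = F.softmax(model(images), dim=1)
    masked = images * cams.unsqueeze(1)             # keep only the highlighted regions
    probs_masked = F.softmax(model(masked), dim=1)
    idx = torch.arange(len(class_ids))
    improved = probs_masked[idx, class_ids] > probs_full[idx, class_ids]
    return improved.float().mean().item()           # fraction of images whose confidence rose
```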
March 12, 2025 at 6:03 AM
pitfalls. We employ Class Activation Maps (CAMs), like GradCAM and HiResCAM, to assess model explainability. We evaluate the CNNs' performance and interpretability on two standard datasets, MalImg and Big2015, and a newly [5/9 of https://arxiv.org/abs/2503.02441v1]
March 5, 2025 at 5:56 AM
We even show you can do this without a specialized heatmap model if you have a good classifier for the badness you want to eliminate by fine-tuning. Simply use a pixel attribution technique like GradCAM to generate the heatmap!
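A minimal from-scratch Grad-CAM sketch along those lines in PyTorch; the ResNet-50 backbone and the chosen layer are placeholder assumptions standing in for whichever fine-tuned classifier detects the badness in question.

```python
# Minimal from-scratch Grad-CAM sketch in PyTorch. The ResNet-50 backbone and the
# choice of layer4[-1] are placeholders; substitute your own fine-tuned classifier
# and its last convolutional block.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V2").eval()
target_layer = model.layer4[-1]

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(value=o.detach()))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0].detach()))

def gradcam(image, class_idx=None):
    """image: (1, 3, H, W) tensor; returns an (H, W) heatmap scaled to [0, 1]."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    # Weight each activation map by its spatially averaged gradient, then ReLU.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)    # (1, C, 1, 1)
    cam = F.relu((weights * activations["value"]).sum(dim=1))      # (1, h, w)
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]
    cam -= cam.min()
    return (cam / cam.max().clamp(min=1e-8)).cpu()
```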
January 19, 2025 at 3:48 PM
Good question! Investigating which areas of the images are important for the result? Like some sort of GradCAM but for diffusion models, which IMHO is an entirely new paper by itself. But interesting future work for sure: explanation methods for diffusion models, that sounds new to me 🤔
December 10, 2024 at 9:29 PM
📃Scientific paper: Generalizing GradCAM for Embedding Networks

➡️ Continued on ES/IODE

July 18, 2024 at 2:33 PM