Caroline Mazini-Rodrigues

Unsupervised discovery of interpretable visual concepts

Abstract

Providing interpretability of deep-learning models to non-experts, while fundamental for responsible real-world use, is challenging. Attribution maps from xAI techniques such as Integrated Gradients are a typical example: visualizations that carry a high level of information yet remain difficult to interpret. In this paper, we propose two methods, Maximum Activation Groups Extraction (MAGE) and Multiscale Interpretable Visualization (Ms-IV), to explain the model’s decision and enhance global interpretability. MAGE finds, for a given CNN, combinations of features that globally form a semantic meaning, which we call concepts. We group these similar feature patterns into concepts by clustering, and we visualize them through Ms-IV. This latter method is inspired by Occlusion and Sensitivity analysis (incorporating causality) and uses a novel metric, Class-aware Order Correlation (CAOC), to globally evaluate the most important image regions according to the model’s decision space. We compare our approach to xAI methods such as LIME and Integrated Gradients. Experimental results show that Ms-IV achieves higher localization and faithfulness values. Finally, a qualitative evaluation of MAGE combined with Ms-IV demonstrates that humans are able, based on the visualizations, to agree on the concepts represented by the clusters and to detect the existence of bias among a given set of networks.
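
As a rough illustration of the two ingredients described above, the sketch below clusters channel activation signatures into candidate concepts and scores image regions with an occlusion-style scan. It is a minimal Python sketch using NumPy and scikit-learn, not the MAGE/Ms-IV implementation; the names `model_fn`, `n_concepts`, and `patch`, and the choice of k-means, are illustrative assumptions.

```python
# Illustrative sketch only (not the authors' code): (1) group channel activation
# patterns into candidate "concepts" by clustering, (2) score image regions by
# how much occluding them changes the model output.
import numpy as np
from sklearn.cluster import KMeans

def extract_concepts(activations, n_concepts=8):
    """Cluster per-channel activation signatures into concept groups.

    activations: array of shape (n_images, n_channels, h, w) from one CNN layer.
    Returns an array of length n_channels assigning each channel to a concept.
    """
    # Describe each channel by its spatially pooled response over the dataset.
    signatures = activations.mean(axis=(2, 3)).T          # (n_channels, n_images)
    return KMeans(n_clusters=n_concepts, n_init=10).fit_predict(signatures)

def occlusion_importance(model_fn, image, patch=16, baseline=0.0):
    """Score square regions by the drop in the model score when they are masked.

    model_fn: maps an HxWxC image to a scalar score for the class of interest.
    """
    h, w = image.shape[:2]
    heatmap = np.zeros((h // patch, w // patch))
    reference = model_fn(image)
    for i in range(h // patch):
        for j in range(w // patch):
            occluded = image.copy()
            occluded[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = baseline
            heatmap[i, j] = reference - model_fn(occluded)
    return heatmap
```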

Continue reading

Bridging human concepts and computer vision for explainable face verification

By Miriam Doh, Caroline Mazini-Rodrigues, Nicolas Boutry, Laurent Najman, Mancas Matei, Hugues Bersini

2023-10-10

In 2nd International Workshop on Emerging Ethical Aspects of AI (BEWARE-23)

Abstract

With Artificial Intelligence (AI) influencing the decision-making process of sensitive applications such as Face Verification, it is fundamental to ensure the transparency, fairness, and accountability of decisions. Although Explainable Artificial Intelligence (XAI) techniques exist to clarify AI decisions, it is equally important to make these decisions interpretable to humans. In this paper, we present an approach that combines computer and human vision to increase the interpretability of explanations for a face verification algorithm. In particular, we draw inspiration from the human perceptual process to understand how machines perceive the human-semantic areas of a face during face comparison tasks. We use MediaPipe, which provides a segmentation technique that identifies distinct human-semantic facial regions, enabling the analysis of the machine’s perception. Additionally, we adapt two model-agnostic algorithms to provide human-interpretable insights into the decision-making process.
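
The sketch below illustrates, under assumptions, the kind of region-level analysis the abstract refers to: given human-semantic masks (eyes, nose, mouth, ...) obtained, for example, with MediaPipe, it measures how much hiding each region changes the verification similarity between two faces. The names `embed_fn` and `region_masks` are hypothetical, and the occlusion scoring is a generic stand-in for the paper’s adapted model-agnostic algorithms.

```python
# Illustrative sketch only: per-region contribution to a face-verification score.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def region_contributions(embed_fn, img_a, img_b, region_masks, baseline=0.0):
    """Drop in verification similarity when each semantic region of img_a is hidden.

    embed_fn: maps an HxWx3 image to a 1-D embedding vector (hypothetical helper).
    region_masks: dict name -> boolean HxW mask of that region in img_a
                  (e.g. produced from MediaPipe face landmarks).
    """
    reference = cosine(embed_fn(img_a), embed_fn(img_b))
    scores = {}
    for name, mask in region_masks.items():
        occluded = img_a.copy()
        occluded[mask] = baseline            # hide one human-semantic region
        scores[name] = reference - cosine(embed_fn(occluded), embed_fn(img_b))
    return scores
```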

Continue reading

Gradients intégrés renforcés

Abstract

The visualizations provided by Explainable Artificial Intelligence (xAI) techniques to explain convolutional neural networks (CNNs) are sometimes difficult to interpret. The richness of the patterns given as input (the pixels of an image) leads to complex correlations between classes. Gradient-based techniques, such as Integrated Gradients, highlight the importance of these features. However, when visualized as images, the result can be excessively noisy and therefore hard to interpret. We propose Reinforced Integrated Gradients (RIG), a variation of Integrated Gradients that aims to highlight the image regions that influence the networks’ decisions. The method seeks to reduce the area to be analysed when visualizing the results, thus producing less apparent noise. Occlusion-based experiments show that the regions selected by our method indeed play an important role in the classification.
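
For illustration only, the sketch below pairs standard Integrated Gradients with a simple percentile threshold that keeps the strongest attributions, conveying the idea of reducing the highlighted area; it is not the RIG method itself, and `model`, `target`, `steps`, and `percentile` are assumed inputs. It is written in plain PyTorch.

```python
# Illustrative sketch only: standard Integrated Gradients plus a threshold step.
import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Approximate IG for one input x (shape 1xCxHxW) and one class index."""
    baseline = torch.zeros_like(x) if baseline is None else baseline
    total_grads = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = baseline + alpha * (x - baseline)   # point on the straight path
        point.requires_grad_(True)
        score = model(point)[0, target]
        grad, = torch.autograd.grad(score, point)
        total_grads += grad
    return (x - baseline) * total_grads / steps     # Riemann approximation of IG

def keep_top_regions(attribution, percentile=95):
    """Zero out all but the highest-magnitude attributions to reduce visual noise."""
    magnitude = attribution.abs()
    threshold = torch.quantile(magnitude.flatten(), percentile / 100.0)
    return attribution * (magnitude >= threshold)
```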

Continue reading