Versions (8 in total; 5 shown)
• http://arxiv.org/pdf/2002.0374... (arXiv.org e-Print Archive, Open Access): Part of book or chapter of book; Data source: UnpayWall
• ISTI Open Portal (Open Access): Conference object, 2020; Data source: ISTI Open Portal
• https://doi.org/10.1007/978-3-... (Closed Access): Other literature type / Part of book or chapter of book, 2020; License: Springer TDM
• CNR ExploRA (Closed Access): Conference object, 2020; Data source: CNR ExploRA
• https://doi.org/10.48550/arxiv... : Article, 2020; License: arXiv Non-Exclusive Distribution; Data source: Datacite

Black Box Explanation by Learning Image Exemplars in the Latent Feature Space

Authors: Guidotti, Riccardo; Monreale, Anna; Matwin, Stan; Pedreschi, Dino

Abstract

We present an approach to explain the decisions of black box models for image classification. While using the black box to label images, our explanation method exploits the latent feature space learned through an adversarial autoencoder. The proposed method first generates exemplar images in the latent feature space and learns a decision tree classifier. Then, it selects and decodes exemplars respecting local decision rules. Finally, it visualizes them in a manner that shows the user how the exemplars can be modified to either stay within their class or become counterfactuals by "morphing" into another class. Since we focus on black box decision systems for image classification, the explanation obtained from the exemplars also provides a saliency map highlighting the areas of the image that contribute to its classification and the areas that push it into another class. We present the results of an experimental evaluation on three datasets and two black box models. Besides providing useful and interpretable explanations, we show that the proposed method outperforms existing explainers in terms of fidelity, relevance, coherence, and stability.
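
To make the pipeline above concrete, here is a minimal Python sketch of the same steps: encode the instance into the latent space of the adversarial autoencoder, sample and label latent neighbors with the black box, fit a decision tree as the local surrogate, select exemplars and counterfactuals, and decode them to derive a saliency map. The encode/decode/black_box callables, the Gaussian neighborhood, and every hyperparameter are illustrative assumptions, not the authors' reference implementation.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def explain_instance(img, black_box, encode, decode,
                         n_samples=1000, sigma=0.5, k=8):
        """Exemplar-based explanation sketch; names and defaults are illustrative."""
        # Encode the instance into the adversarial autoencoder's latent space.
        z = encode(img[None])[0]

        # Generate a local neighborhood in the latent feature space
        # (the paper grows this set more carefully; Gaussian noise keeps it simple).
        Z = np.vstack([z[None], z + sigma * np.random.randn(n_samples, z.size)])

        # Label the decoded neighbors using the black box.
        y = black_box(decode(Z))

        # Learn an interpretable local surrogate: a decision tree on latent codes.
        tree = DecisionTreeClassifier(max_depth=4).fit(Z, y)
        pred = tree.predict(Z)
        y0 = tree.predict(z[None])[0]

        # Exemplars: neighbors the surrogate keeps in the instance's class.
        exemplar_z = Z[pred == y0][:k]
        # Counterfactuals: closest latent points that "morph" into another class.
        counter_z = Z[pred != y0]
        counter_z = counter_z[np.argsort(np.linalg.norm(counter_z - z, axis=1))[:k]]

        exemplars = decode(exemplar_z)
        counterfactuals = decode(counter_z)

        # Saliency: pixel-wise deviation of the instance from its own exemplars;
        # large magnitudes mark areas that matter for (or against) its class.
        saliency = np.median(exemplars - img[None], axis=0)
        return exemplars, counterfactuals, saliency

In this sketch, the branch of the tree that the instance falls into plays the role of the local decision rule mentioned above: exemplars are latent points that satisfy it, while counterfactuals minimally depart from it before decoding.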

Country
Italy
Subjects by Vocabulary

Microsoft Academic Graph classification: Computer science, Feature vector, Stability (learning theory), Black box, Contextual image classification, Decision tree learning, Pattern recognition, Decision rule, Autoencoder, Morphing, Artificial intelligence, Business

Keywords

FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Machine Learning (cs.LG), Adversarial autoencoder, Image exemplars, Explainable AI

Impact indicators (provided by BIP!)
• Citations: 34. An alternative to the "Influence" indicator that also reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically).
• Popularity: Top 10%. Reflects the "current" impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network.
• Influence: Top 10%. Reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically).
• Impulse: Top 10%. Reflects the initial momentum of the article directly after its publication, based on the underlying citation network.
Funded by
• NSERC. Funder: Natural Sciences and Engineering Research Council of Canada (NSERC).
• EC | SoBigData (SoBigData Research Infrastructure). Funder: European Commission (EC); Project Code: 654024; Funding stream: H2020 | RIA.
• EC | AI4EU (A European AI On Demand Platform and Ecosystem). Funder: European Commission (EC); Project Code: 825619; Funding stream: H2020 | RIA.
• EC | Track and Know (Big Data for Mobility Tracking Knowledge Extraction in Urban Areas). Funder: European Commission (EC); Project Code: 780754; Funding stream: H2020 | RIA.