
Commit: Fixes from comments
GalyaZalesskaya committed Mar 22, 2023
1 parent fc74550 commit 1e245d3
Showing 3 changed files with 48 additions and 33 deletions.
76 changes: 44 additions & 32 deletions docs/source/guide/explanation/additional_features/xai.rst
Explainable AI (XAI)
====================

**Explainable AI (XAI)** is a field of research that aims to make machine learning models more transparent and interpretable to humans.
The goal is to help users understand how and why AI systems make decisions and to provide insight into their inner workings. It allows us to detect, analyze, and prevent common mistakes, for example, when the model uses irrelevant features to make a prediction.
XAI can help to build trust in AI, make sure that the model is safe for deployment, and increase its adoption in various domains.

Most XAI methods generate **saliency maps** as a result. A saliency map is a visual representation, suitable for human comprehension, that highlights the most important parts of the image from the model's point of view.
It looks like a heatmap, where warm-colored areas mark the regions the model focused on most.


.. figure:: ../../../../utils/images/xai_example.jpg
   :width: 600
   :alt: this image shows the result of an XAI algorithm

   These images are taken from the `D-RISE paper <https://arxiv.org/abs/2006.03204>`_.
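
Such a saliency map can be rendered by resizing it to the input image, applying a color map, and blending the two. Below is a minimal sketch of this rendering step; the helper name and inputs are illustrative and not part of OpenVINO™ Training Extensions:

.. code-block:: python

   import cv2
   import numpy as np

   def overlay_saliency(image: np.ndarray, saliency: np.ndarray, alpha: float = 0.5) -> np.ndarray:
       """image: (H, W, 3) BGR uint8 frame; saliency: 2-D uint8 map of any size."""
       heat = cv2.resize(saliency, (image.shape[1], image.shape[0]))
       heat = cv2.applyColorMap(heat, cv2.COLORMAP_JET)  # warm colors = high importance
       return cv2.addWeighted(heat, alpha, image, 1 - alpha, 0)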


We can generate saliency maps for a model trained in OpenVINO™ Training Extensions using the ``otx explain`` command. Learn more about its usage in the :doc:`../../tutorials/base/explain` tutorial.

*********************************
XAI algorithms for classification
*********************************

.. image:: ../../../../utils/images/xai_cls.jpg
   :width: 600
   :align: center
   :alt: this image shows the comparison of XAI classification algorithms


For classification networks these algorithms are used to generate saliency maps:

- **Activation Map** - the most basic approach; it averages the outputs of the model's feature extractor (backbone) over the channel dimension, producing a single class-agnostic saliency map.
- `Eigen-CAM <https://arxiv.org/abs/2008.00299>`_ computes the first principal component of the extracted feature maps and uses it to build a class-agnostic saliency map.
- `Recipro-CAM <https://arxiv.org/pdf/2209.14074>`_ uses Class Activation Mapping (CAM) to weigh the activation map for each class, so it can generate a different saliency map per class. Recipro-CAM is a fast, gradient-free reciprocal CAM method: it spatially masks the extracted feature maps to exploit the correlation between activation maps and network predictions for the target classes.
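
As an illustration, the two class-agnostic methods above can be sketched with plain NumPy. This is a minimal sketch of the underlying math under an assumed ``(C, H, W)`` feature layout, not the OpenVINO™ Training Extensions implementation:

.. code-block:: python

   import numpy as np

   def activation_map(features: np.ndarray) -> np.ndarray:
       """Average backbone features of shape (C, H, W) over channels."""
       saliency = features.mean(axis=0)            # (H, W), class-agnostic
       saliency -= saliency.min()
       saliency /= saliency.max() + 1e-12          # normalize to [0, 1]
       return (saliency * 255).astype(np.uint8)

   def eigen_cam(features: np.ndarray) -> np.ndarray:
       """Project features of shape (C, H, W) onto their first principal component."""
       c, h, w = features.shape
       flat = features.reshape(c, h * w).T         # (H*W, C) observations
       flat = flat - flat.mean(axis=0, keepdims=True)
       _, _, vt = np.linalg.svd(flat, full_matrices=False)
       saliency = (flat @ vt[0]).reshape(h, w)     # projection onto the 1st component
       saliency = np.maximum(saliency, 0)          # keep positively correlated areas
       saliency /= saliency.max() + 1e-12
       return (saliency * 255).astype(np.uint8)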


Below we show a comparison of the described algorithms. ``Access to the model internal state`` means that the algorithm needs to modify the model's outputs and dump inner features.
``Per-class explanation support`` means generating different saliency maps for different classes.

+-------------------------------------+----------------+-----------+--------------------------------------------------------------------+
| Classification algorithm            | Activation Map | Eigen-CAM | Recipro-CAM                                                        |
+=====================================+================+===========+====================================================================+
| Need access to model internal state | Yes            | Yes       | Yes                                                                |
+-------------------------------------+----------------+-----------+--------------------------------------------------------------------+
| Gradient-free                       | Yes            | Yes       | Yes                                                                |
+-------------------------------------+----------------+-----------+--------------------------------------------------------------------+
| Single-shot                         | Yes            | Yes       | No (re-infer neck + head H*W times, where HxW – feature map size)  |
+-------------------------------------+----------------+-----------+--------------------------------------------------------------------+
| Per-class explanation support       | No             | No        | Yes                                                                |
+-------------------------------------+----------------+-----------+--------------------------------------------------------------------+
| Execution speed                     | Fast           | Fast      | Medium                                                             |
+-------------------------------------+----------------+-----------+--------------------------------------------------------------------+
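
The ``H*W`` re-inference that makes Recipro-CAM slower can be sketched as follows. Here ``neck_and_head`` is a hypothetical callable standing in for everything after the backbone, and the masking is simplified relative to the paper:

.. code-block:: python

   import numpy as np

   def recipro_cam(features: np.ndarray, neck_and_head, class_id: int) -> np.ndarray:
       """features: (C, H, W) backbone output; returns an (H, W) map for class_id."""
       _, h, w = features.shape
       saliency = np.zeros((h, w), dtype=np.float32)
       for y in range(h):
           for x in range(w):
               mask = np.zeros((h, w), dtype=features.dtype)
               mask[y, x] = 1                           # keep a single spatial position
               scores = neck_and_head(features * mask)  # re-infer neck + head
               saliency[y, x] = scores[class_id]        # class confidence as importance
       return saliency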


****************************
XAI algorithms for detection
****************************

For detection networks these algorithms are used to generate saliency maps:

- **Activation Map​** - the same approach as for classification networks, which uses the outputs of the feature extractor. This algorithm is used to generate saliency maps for two-stage detectors.

- **DetClassProbabilityMap** - this approach takes the raw classification head output and uses class probability maps to calculate regions of interest for each class, so it creates a different saliency map for each class. This algorithm is implemented for single-stage detectors only.

.. image:: ../../../../utils/images/xai_det.jpg
   :width: 600
   :align: center
   :alt: this image shows the detailed description of XAI detection algorithm
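
A simplified sketch of the idea, assuming a single feature level with raw classification logits of shape ``(A, K, H, W)`` (anchors x classes x spatial positions); real detection heads differ in layout:

.. code-block:: python

   import numpy as np

   def det_class_probability_map(cls_logits: np.ndarray) -> np.ndarray:
       """cls_logits: (A, K, H, W) raw classification head output.
       Returns (K, H, W): one saliency map per class."""
       probs = 1.0 / (1.0 + np.exp(-cls_logits))   # sigmoid -> class probabilities
       return probs.max(axis=0)                    # strongest anchor per position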


The main limitation of this method is that, due to the training loss design of most single-stage detectors, activation values drift towards the center of the object while propagating through the network.
This makes it hard to obtain a clear explanation in the input image space from intermediate activations.

Below we show a comparison of the described algorithms. ``Access to the model internal state`` means that the algorithm needs to modify the model's outputs and dump inner features.
``Per-class explanation support`` means generating different saliency maps for different classes. ``Per-box explanation support`` means generating a standalone saliency map for each detected box.


+-------------------------------------+----------------+------------------------+
| Detection algorithm                 | Activation Map | DetClassProbabilityMap |
+=====================================+================+========================+
| Need access to model internal state | Yes            | Yes                    |
+-------------------------------------+----------------+------------------------+
| Gradient-free                       | Yes            | Yes                    |
+-------------------------------------+----------------+------------------------+
| Single-shot                         | Yes            | Yes                    |
+-------------------------------------+----------------+------------------------+
| Per-class explanation support       | No             | Yes                    |
+-------------------------------------+----------------+------------------------+
| Per-box explanation support         | No             | No                     |
+-------------------------------------+----------------+------------------------+
| Execution speed                     | Fast           | Fast                   |
+-------------------------------------+----------------+------------------------+
5 changes: 4 additions & 1 deletion docs/source/guide/tutorials/base/explain.rst
we can define the ``--explain-algorithm`` parameter.
- ``eigencam`` - for the Eigen-CAM classification algorithm
- ``classwisesaliencymap`` - for the Recipro-CAM classification algorithm; this is the default method

For the detection task, we can choose between the following methods:

- ``activationmap`` - for the Activation Map detection algorithm
- ``classwisesaliencymap`` - for the DetClassProbabilityMap algorithm (works for single-stage detectors only)
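
For example, per-class saliency maps for a single-stage detector can be requested with ``otx explain --explain-algorithm classwisesaliencymap`` (the model and data arguments follow the rest of this tutorial).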

.. note::

Binary file modified docs/utils/images/xai_example.jpg
