vanlalpeka/interpretable_AI

Introduction

This repository is a work in progress. The notebooks explore different methods for interpretable AI and will be updated as I learn more about the topic.

Interpretable AI

Interpretable AI is a subfield of AI that focuses on making the behavior and predictions of AI models understandable to humans. This is important for a number of reasons, including:

  • Understanding how the model makes predictions
  • Debugging the model
  • Ensuring that the model is fair and unbiased
  • Building trust with users
  • Complying with regulations

There are a number of methods for making AI models more interpretable. Some of these methods include:

  • Feature importance
  • Partial dependence plots
  • Shapley values
  • LIME
  • Rule-based models
  • Surrogate models
  • And many more
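As a minimal sketch of one of these methods, the snippet below computes permutation feature importance with scikit-learn: each feature is shuffled in turn, and the resulting drop in test accuracy indicates how much the model relies on that feature. The dataset and model here are illustrative placeholders, not part of this repository.

```python
# Minimal sketch of permutation feature importance (illustrative data/model).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic classification data with 5 features, 3 of them informative.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column in turn and measure the drop in test score;
# a larger drop means the model depends more heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: {importance:.3f}")
```

Unlike impurity-based importances, this approach works for any fitted model, since it only needs predictions on held-out data.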

Notebooks

The following notebooks are available in this repository:
