Enhancing Explainability in Fake News Detection: A SHAP-Based Approach for Bidirectional LSTM Models

Table of Contents

  1. Introduction
  2. Dataset
  3. Features
  4. Technologies Used
  5. Installation
  6. Usage
  7. Example Output
  8. Contributing
  9. License
  10. Author

Introduction

Fake news detection has become a crucial task in the era of digital information. Identifying and mitigating the spread of misinformation is essential for maintaining the integrity of information ecosystems. This project focuses on enhancing the explainability of fake news detection models using SHAP (SHapley Additive exPlanations) to interpret Bidirectional LSTM (BiLSTM) models.

The goal is a transparent, interpretable model that helps reveal the factors driving the classification of a news article as real or fake. By leveraging SHAP values, we highlight the most influential features in the decision-making process, making the model's predictions easier to audit.

Dataset

The dataset consists of labeled news articles spanning both fake and real news. Each sample is preprocessed to extract the features used for model training and evaluation.
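
The notebook holds the project's exact preprocessing steps; as a minimal sketch, preparing article text for a BiLSTM typically means tokenizing and padding to a fixed length. The file name, column names, and limits below (news.csv, text, label, MAX_WORDS, MAX_LEN) are illustrative assumptions, not the project's actual configuration:

    import pandas as pd
    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    # Hypothetical file and column names; adjust to the actual dataset.
    df = pd.read_csv("news.csv")              # columns assumed: "text", "label"
    texts, labels = df["text"].values, df["label"].values

    MAX_WORDS, MAX_LEN = 10000, 300           # illustrative vocabulary/length caps
    tokenizer = Tokenizer(num_words=MAX_WORDS, oov_token="<OOV>")
    tokenizer.fit_on_texts(texts)

    # Convert each article into a fixed-length sequence of token IDs.
    sequences = tokenizer.texts_to_sequences(texts)
    X = pad_sequences(sequences, maxlen=MAX_LEN, padding="post", truncating="post")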

Features

  • Data preprocessing and feature extraction
  • Implementation of a Bidirectional LSTM (BiLSTM) model (see the sketch after this list)
  • Model interpretability using SHAP
  • Visualization of SHAP values for feature importance
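
As a rough sketch of what the BiLSTM implementation can look like in Keras (layer sizes and the label convention are assumptions, not the repository's exact architecture; MAX_WORDS comes from the preprocessing sketch above):

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense, Dropout

    # Minimal BiLSTM classifier over padded token-ID sequences.
    model = Sequential([
        Embedding(input_dim=MAX_WORDS, output_dim=128),
        Bidirectional(LSTM(64)),
        Dropout(0.5),
        Dense(1, activation="sigmoid"),       # assumed convention: 1 = fake, 0 = real
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])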

Technologies Used

  • Python 3.x
  • Jupyter Notebook
  • TensorFlow/Keras
  • Scikit-learn
  • Pandas
  • NumPy
  • SHAP
  • Matplotlib
  • Seaborn

Installation

  1. Clone the repository:

    git clone https://github.com/harshjuly12/Enhancing-Explainability-in-Fake-News-Detection-A-SHAP-Based-Approach-for-Bidirectional-LSTM-Models.git
    cd Enhancing-Explainability-in-Fake-News-Detection-A-SHAP-Based-Approach-for-Bidirectional-LSTM-Models
  2. Create a virtual environment:

    python -m venv venv
  3. Activate the virtual environment:

    • On Windows:
      venv\Scripts\activate
    • On macOS/Linux:
      source venv/bin/activate
  4. Install the required packages:

    pip install -r requirements.txt
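
If you need to recreate requirements.txt, the dependencies listed under Technologies Used suggest a file along these lines (unpinned versions; the repository's own file is authoritative):

    tensorflow
    scikit-learn
    pandas
    numpy
    shap
    matplotlib
    seaborn
    jupyter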

Usage

  1. Open the Jupyter Notebook:

    jupyter notebook SHAP_BiLSTM_Model.ipynb
  2. Run the cells in the notebook to preprocess the data, train the BiLSTM model, and evaluate the results.
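
For orientation, the train-and-evaluate step boils down to something like the following, continuing the hypothetical names from the sketches above (the split ratio and epoch count are illustrative):

    from sklearn.model_selection import train_test_split

    # Hold out 20% of the articles for evaluation (split ratio assumed).
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, random_state=42)

    model.fit(X_train, y_train, validation_split=0.1, epochs=5, batch_size=64)
    loss, accuracy = model.evaluate(X_test, y_test)
    print(f"Test accuracy: {accuracy:.3f}")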

Example Output

Here are some example outputs from the project:

  • Model Accuracy: 94.5%
  • SHAP summary plot highlighting the tokens that most influence predictions (rendered in the notebook; see the sketch below)
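
As a sketch of how such a plot can be produced, here is the model-agnostic shap.KernelExplainer applied to the trained model; the notebook may use a different SHAP explainer, and the tiny sample sizes are only because KernelExplainer is slow:

    import shap

    # Model-agnostic explainer over the prediction function; the background
    # is a small sample of padded training sequences.
    explainer = shap.KernelExplainer(lambda x: model.predict(x), X_train[:50])

    # Explain a handful of test articles (KernelExplainer scales poorly).
    shap_values = explainer.shap_values(X_test[:5], nsamples=100)

    # Older SHAP versions return a list (one array per model output).
    values = shap_values[0] if isinstance(shap_values, list) else shap_values
    shap.summary_plot(values, X_test[:5])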

Contributing

Contributions are welcome! Please open an issue or submit a pull request for any improvements or bug fixes.

License

This project is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

See the LICENSE file for details.

Author

For any questions or suggestions, please open an issue or contact the repository author, harshjuly12, via GitHub.
